WorldWideScience

Sample records for auditory hierarchical stimuli

  1. Expectation and Attention in Hierarchical Auditory Prediction

    Science.gov (United States)

    Noreika, Valdas; Gueorguiev, David; Blenkmann, Alejandro; Kochen, Silvia; Ibáñez, Agustín; Owen, Adrian M.; Bekinschtein, Tristan A.

    2013-01-01

Hierarchical predictive coding suggests that attention in humans emerges from increased precision in probabilistic inference, whereas expectation biases attention in favor of contextually anticipated stimuli. We test these notions within auditory perception by independently manipulating top-down expectation and attentional precision alongside bottom-up stimulus predictability. Our findings support an integrative interpretation of commonly observed electrophysiological signatures of neurodynamics, namely mismatch negativity (MMN), P300, and contingent negative variation (CNV), as manifestations along successive levels of predictive complexity. Early first-level processing indexed by the MMN was sensitive to stimulus predictability: here, attentional precision enhanced early responses, but explicit top-down expectation diminished them. This pattern was in contrast to later, second-level processing indexed by the P300: although sensitive to the degree of predictability, responses at this level were contingent on attentional engagement and in fact sharpened by top-down expectation. At the highest level, the drift of the CNV was a fine-grained marker of top-down expectation itself. Source reconstruction of high-density EEG, supported by intracranial recordings, implicated temporal and frontal regions differentially active at early and late levels. The cortical generators of the CNV suggested that it might be involved in facilitating the consolidation of context-salient stimuli into conscious perception. These results provide convergent empirical support for promising recent accounts of attention and expectation in predictive coding. PMID:23825422

  2. Auditory attention to frequency and time: an analogy to visual local–global stimuli

    OpenAIRE

    Justus, Timothy; List, Alexandra

    2005-01-01

    Two priming experiments demonstrated exogenous attentional persistence to the fundamental auditory dimensions of frequency (Experiment 1) and time (Experiment 2). In a divided-attention task, participants responded to an independent dimension, the identification of three-tone sequence patterns, for both prime and probe stimuli. The stimuli were specifically designed to parallel the local–global hierarchical letter stimuli of [Navon D. (1977). Forest before trees: The precedence of global feat...

  3. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would also increase deviance distraction. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. PMID:26302716

  4. Auditory ERP response to successive stimuli in infancy

    OpenAIRE

    Chen, Ao; Peter, Varghese; Burnham, Denis

    2016-01-01

Background. Auditory Event-Related Potentials (ERPs) are useful for understanding early auditory development among infants, as they allow the collection of a relatively large amount of data in a short time. So far, studies that have investigated development in auditory ERPs in infancy have mainly used single sounds as stimuli. Yet in real life, infants must decode successive rather than single acoustic events. In the present study, we tested 4-, 8-, and 12-month-old infants’ auditory ERPs to m...

  5. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    J. Degner

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  6. Auditory Long Latency Responses to Tonal and Speech Stimuli

    Science.gov (United States)

    Swink, Shannon; Stuart, Andrew

    2012-01-01

    Purpose: The effects of type of stimuli (i.e., nonspeech vs. speech), speech (i.e., natural vs. synthetic), gender of speaker and listener, speaker (i.e., self vs. other), and frequency alteration in self-produced speech on the late auditory cortical evoked potential were examined. Method: Young adult men (n = 15) and women (n = 15), all with…

  7. Sadness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R

    2014-02-01

    Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral. PMID:24098923

  8. 40 Hz auditory steady state response to linguistic features of stimuli during auditory hallucinations.

    Science.gov (United States)

    Ying, Jun; Yan, Zheng; Gao, Xiao-rong

    2013-10-01

The auditory steady state response (ASSR) may reflect activity from different regions of the brain, depending on the modulation frequency used. In general, responses induced by low rates (≤40 Hz) emanate mostly from central structures of the brain, and responses from high rates (≥80 Hz) emanate mostly from the peripheral auditory nerve or brainstem structures. In addition, it has been reported that the gamma band ASSR (30-90 Hz) plays an important role in working memory, speech understanding and recognition. This paper investigated the 40 Hz ASSR evoked by modulated speech and reversed speech. The speech stimulus was a Chinese phrase, and the noise-like reversed speech was obtained by temporally reversing the speech. Both auditory stimuli were modulated with a frequency of 40 Hz. Ten healthy subjects and 5 patients with hallucination symptoms participated in the experiment. Results showed a reduction in left auditory cortex response when healthy subjects listened to the reversed speech compared with the speech. In contrast, when the patients who experienced auditory hallucinations listened to the reversed speech, the left hemispheric auditory cortex responded more actively. The ASSR results were consistent with the behavioural results of the patients. Therefore, the gamma band ASSR is expected to be helpful for rapid and objective diagnosis of hallucinations in the clinic. PMID:24142731
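Constructing the stimuli described above (40 Hz amplitude-modulated speech and its time-reversed control) can be sketched as follows. This is a minimal sketch: the placeholder carrier tone stands in for the Chinese phrase recording, and the sinusoidal envelope is one common choice for ASSR stimuli.

```python
import numpy as np

def modulate_40hz(signal, fs, mod_freq=40.0, depth=1.0):
    """Amplitude-modulate a carrier waveform (e.g. a speech recording)
    at mod_freq Hz, as used to evoke an auditory steady-state response."""
    t = np.arange(len(signal)) / fs
    envelope = (1 + depth * np.sin(2 * np.pi * mod_freq * t)) / 2
    return signal * envelope

fs = 16000
t = np.arange(fs) / fs                    # 1 s of stand-in "speech"
speech = np.sin(2 * np.pi * 440 * t)      # placeholder carrier tone
reversed_speech = speech[::-1]            # time-reversed control stimulus
stim = modulate_40hz(speech, fs)
ctrl = modulate_40hz(reversed_speech, fs)
```

Both stimuli then share the same 40 Hz envelope, so any difference in the recorded ASSR can be attributed to the linguistic content rather than the modulation.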

  9. Auditory ERP response to successive stimuli in infancy.

    Science.gov (United States)

    Chen, Ao; Peter, Varghese; Burnham, Denis

    2016-01-01

Background. Auditory Event-Related Potentials (ERPs) are useful for understanding early auditory development among infants, as they allow the collection of a relatively large amount of data in a short time. So far, studies that have investigated development in auditory ERPs in infancy have mainly used single sounds as stimuli. Yet in real life, infants must decode successive rather than single acoustic events. In the present study, we tested 4-, 8-, and 12-month-old infants' auditory ERPs to musical melodies comprising three piano notes, and examined ERPs to each individual note in the melody. Methods. Infants were presented with 360 repetitions of a three-note melody while EEG was recorded from 128 channels on the scalp through a Geodesic Sensor Net. For each infant, both latency and amplitude of auditory components P1 and N2 were measured from averaged ERPs for each individual note. Results. Analyses were restricted to responses collected at the frontal-central site. For all three notes, there was an overall reduction in latency for both P1 and N2 over age. For P1, latency reduction was significant from 4 to 8 months, but not from 8 to 12 months. N2 latency, on the other hand, decreased significantly from 4 to 8 to 12 months. With regard to amplitude, no significant change was found for either P1 or N2. Nevertheless, the waveforms of the three age groups were qualitatively different: for the 4-month-olds, the P1-N2 deflection was attenuated for the second and the third notes; for the 8-month-olds, such attenuation was observed only for the middle note; for the 12-month-olds, the P1 and N2 peaks showed relatively equivalent amplitudes and peak widths across all three notes. Conclusion. Our findings indicate that the infant brain is able to register successive acoustic events in a stream, and ERPs become better time-locked to each composite event over age.
Younger infants may have difficulties in responding to late occurring events in a stream, and the onset response to the
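The averaging-and-peak-picking pipeline described above (averaging epochs, then measuring P1/N2 latency within a search window) can be sketched in Python. The sampling rate, window bounds, and simulated data below are illustrative assumptions, not values from the study.

```python
import numpy as np

def average_erp(epochs):
    """Average single-trial EEG epochs (trials x samples) into an ERP."""
    return np.mean(epochs, axis=0)

def peak_latency(erp, fs, t_min, t_max, polarity=+1):
    """Latency (s) of the largest positive (P1) or negative (N2)
    deflection inside a search window, as in ERP peak picking."""
    i0, i1 = int(t_min * fs), int(t_max * fs)
    window = polarity * erp[i0:i1]
    return (i0 + int(np.argmax(window))) / fs

fs = 250                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
t = np.arange(int(0.6 * fs)) / fs
# simulated single trials: a P1-like positivity at ~150 ms plus noise,
# 360 repetitions as in the study
trials = 2.0 * np.exp(-((t - 0.15) ** 2) / 0.001) \
    + rng.normal(0, 0.5, (360, t.size))
erp = average_erp(trials)
lat = peak_latency(erp, fs, 0.05, 0.3, polarity=+1)
```

Passing `polarity=-1` with an appropriate later window would locate an N2-like negativity in the same way.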

  10. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  11. The cat's meow: A high-field fMRI assessment of cortical activity in response to vocalizations and complex auditory stimuli.

    Science.gov (United States)

    Hall, Amee J; Butler, Blake E; Lomber, Stephen G

    2016-02-15

    Sensory systems are typically constructed in a hierarchical fashion such that lower level subcortical and cortical areas process basic stimulus features, while higher level areas reassemble these features into object-level representations. A number of anatomical pathway tracing studies have suggested that the auditory cortical hierarchy of the cat extends from a core region, consisting of the primary auditory cortex (A1) and the anterior auditory field (AAF), to higher level auditory fields that are located ventrally. Unfortunately, limitations on electrophysiological examination of these higher level fields have resulted in an incomplete understanding of the functional organization of the auditory cortex. Thus, the current study uses functional MRI in conjunction with a variety of simple and complex auditory stimuli to provide the first comprehensive examination of function across the entire cortical hierarchy. Auditory cortex function is shown to be largely lateralized to the left hemisphere, and is concentrated bilaterally in fields surrounding the posterior ectosylvian sulcus. The use of narrowband noise stimuli enables the visualization of tonotopic gradients in the posterior auditory field (PAF) and ventral posterior auditory field (VPAF) that have previously been unverifiable using fMRI and pure tones. Furthermore, auditory fields that are inaccessible to more invasive techniques, such as the insular (IN) and temporal (T) cortices, are shown to be selectively responsive to vocalizations. Collectively, these data provide a much needed functional correlate for anatomical examinations of the hierarchy of cortical structures within the cat auditory cortex. PMID:26658927

  12. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  13. The P3 produced by auditory stimuli presented in a passive and active condition: modulation by visual stimuli.

    Science.gov (United States)

    Wronka, Eligiusz; Kuniecki, Michał; Kaiser, Jan; Coenen, Anton M L

    2007-01-01

The aim of this study was to investigate how the processing of auditory stimuli is affected by the simultaneous presentation of visual stimuli. This was approached in an active and a passive condition, during which a P3 was elicited in the human EEG by single auditory stimuli. Subjects were presented with tones, either alone or accompanied by the simultaneous presentation of pictures. There were two different sessions. In the first, the presented tones demanded no further cognitive activity from the subjects (passive or 'ignore' session), while in the second session subjects were instructed to count the tones (active or 'count' session). The central question was whether inter-modal influences of visual stimulation in the active condition would modulate the auditory P3 in the same way as in the passive condition. Brain responses in the ignore session revealed only a small P3-like component over the parietal and frontal cortex. However, when the auditory stimuli co-occurred with the visual stimuli, an increased frontal activity in the window of 300-500 ms was observed. This could be interpreted as the reflection of a more intensive involuntary attention shift, provoked by the preceding visual stimulation. Moreover, it was found that the cognitive load caused by the count instruction resulted in an evident P3, with maximal amplitude over parietal locations. This effect was smaller when auditory stimuli were presented on the visual background. These findings might support the thesis that available resources were assigned to the analysis of the visual stimulus, and thus were not available to analyze the subsequent auditory stimuli. This reduction in allocation of resources for attention was restricted to the active condition only, when the matching of a template with incoming information results in a distinct P3 component. It is discussed whether the putative source of this effect is a change in the activity of the frontal cortex. PMID:17691223

  14. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole. PMID:20881120

  15. Long-latency auditory evoked potentials with verbal and nonverbal stimuli

    OpenAIRE

    Sheila Jacques Oppitz; Dayane Domeneghini Didoné; Débora Durigon da Silva; Marjana Gois; Jordana Folgearini; Geise Corrêa Ferreira; Michele Vargas Garcia

    2015-01-01

INTRODUCTION: Long-latency auditory evoked potentials represent the cortical activity related to attention, memory, and auditory discrimination skills. Acoustic signal processing occurs differently between verbal and nonverbal stimuli, influencing the latency and amplitude patterns. OBJECTIVE: To describe the latencies of the cortical potentials P1, N1, P2, N2, and P3, as well as P3 amplitude, with different speech stimuli and tone bursts, and to classify them in the presence and...

  16. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

Rønne, Filip Munch; Dau, Torsten; Harte, James

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...
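The convolutive structure of the ABR model summarized above (an instantaneous discharge-rate function convolved with a unitary response) can be sketched as follows. This is a minimal sketch: the toy rate burst and damped-sinusoid unitary response are illustrative assumptions, not the model's fitted "humanized" auditory-nerve front end.

```python
import numpy as np

def predict_abr(rate, unitary_response):
    """Sketch of a convolutive ABR model: the summed instantaneous
    auditory-nerve discharge rate convolved with a unitary response,
    truncated to the duration of the rate function."""
    return np.convolve(rate, unitary_response)[: len(rate)]

fs = 10000
t = np.arange(int(0.01 * fs)) / fs                     # 10 ms
rate = np.exp(-((t - 0.002) ** 2) / 1e-7)              # toy onset burst at 2 ms
ur = np.exp(-t / 0.001) * np.sin(2 * np.pi * 500 * t)  # toy unitary response
abr = predict_abr(rate, ur)
```

Level dependence would enter through the nonlinear stage that generates `rate`; the convolution itself stays linear.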

  17. Visual and auditory stimuli associated with swallowing. An fMRI study

    International Nuclear Information System (INIS)

    We focused on brain areas activated by audiovisual stimuli related to swallowing motions. In this study, three kinds of stimuli related to human swallowing movement (auditory stimuli alone, visual stimuli alone, or audiovisual stimuli) were presented to the subjects, and activated brain areas were measured using functional MRI (fMRI) and analyzed. When auditory stimuli alone were presented, the supplementary motor area was activated. When visual stimuli alone were presented, the premotor and primary motor areas of the left and right hemispheres and prefrontal area of the left hemisphere were activated. When audiovisual stimuli were presented, the prefrontal and premotor areas of the left and right hemispheres were activated. Activation of Broca's area, which would have been characteristic of mirror neuron system activation on presentation of motion images, was not observed; however, activation of brain areas related to swallowing motion programming and performance was verified for auditory, visual and audiovisual stimuli related to swallowing motion. These results suggest that audiovisual stimuli related to swallowing motion could be applied to the treatment of patients with dysphagia. (author)

  18. Cerebral processing of auditory stimuli in patients with irritable bowel syndrome

    Institute of Scientific and Technical Information of China (English)

    Viola Andresen; Peter Kobelt; Claus Zimmer; Bertram Wiedenmann; Burghard F Klapp; Hubert Monnikes; Alexander Poellinger; Chedwa Tsrouya; Dominik Bach; Albrecht Stroh; Annette Foerschler; Petra Georgiewa; Marco Schmidtmann; Ivo R van der Voort

    2006-01-01

AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors, the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, an unpleasant peep (2000 Hz), neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while their response patterns, unlike those of controls, did not differentiate between distressing and pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific to visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients.

  19. A Systematic Desensitization Paradigm to Treat Hypersensitivity to Auditory Stimuli in Children with Autism in Family Contexts

    Science.gov (United States)

    Koegel, Robert L.; Openden, Daniel; Koegel, Lynn Kern

    2004-01-01

    Many children with autism display reactions to auditory stimuli that seem as if the stimuli were painful or otherwise extremely aversive. This article describes, within the contexts of three experimental designs, how procedures of systematic desensitization can be used to treat hypersensitivity to auditory stimuli in three young children with…

  20. Effects of passive tactile and auditory stimuli on left visual neglect.

    Science.gov (United States)

    Hommel, M; Peres, B; Pollak, P; Memin, B; Besson, G; Gaio, J M; Perret, J

    1990-05-01

    Patients with left-sided visual neglect fail to copy the left part of drawings or the drawings on the left side of a sheet of paper. Our aim was to study the variations in copying drawings induced by passive stimulation in patients with left-sided visual neglect. No stimulation at all, tactile unilateral and bilateral, binaural auditory verbal, and nonverbal stimuli were randomly applied to 14 patients with right-hemisphere strokes. Only nonverbal stimuli decreased the neglect. As nonverbal stimuli mainly activate the right hemisphere, the decrease in neglect suggests right-hemispheric hypoactivity at rest in these patients. The absence of modification of neglect during verbal stimulation suggests a bilateral hemispheric activation and the persistence of interhemispheric imbalance. Our results showed that auditory pathways take part in the network involved with neglect. Passive nonverbal auditory stimuli may be of interest in the rehabilitation of patients with left visual neglect. PMID:2334306

  1. Natural stimuli improve auditory BCIs with respect to ergonomics and performance

    Science.gov (United States)

    Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael

    2012-08-01

Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For the application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users' ergonomic ratings but also increases classification performance. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. We hope these findings contribute to making auditory BCI paradigms more user-friendly and applicable for patients.
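Single-trial target vs. non-target ERP classification of the kind such BCI systems rely on is commonly done with regularized linear discriminant analysis. The hand-rolled binary LDA below is a minimal sketch on synthetic features, not the study's actual pipeline or data.

```python
import numpy as np

def fit_lda(X, y, reg=1e-3):
    """Minimal binary LDA: project onto inv(Sw) @ (mu1 - mu0), where Sw
    is the (regularized) within-class scatter of the feature vectors."""
    mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Xc = np.vstack([X[y == 0] - mu0, X[y == 1] - mu1])
    Sw = Xc.T @ Xc / len(X) + reg * np.eye(X.shape[1])
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2          # threshold midway between class means
    return w, b

# synthetic "ERP features": targets (y = 1) shifted relative to non-targets
rng = np.random.default_rng(1)
n, d = 400, 10
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, d)) + y[:, None] * 1.5
w, b = fit_lda(X, y)
acc = np.mean(((X @ w + b) > 0) == y)
```

In a real system the features would be spatio-temporal voltage samples from each post-stimulus epoch, and the nine-class decision would combine the target/non-target scores across stimuli.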

  2. Source analysis of bimodal event-related potentials with auditory-visual stimuli

    OpenAIRE

    Cui, H; Xie, X.; Yan, H; Feng, L; Xu, S; Hu, Y.

    2013-01-01

    Dipole source analysis is applied to model brain generators of surface-recorded evoked potentials, epileptiform activity, and event-related potentials (ERP). The aim of this study was to explore brain activity of interaction between bimodal sensory cognition. Seven healthy volunteers were recruited in the study and ERP to these stimuli were recorded by 64 electrodes EEG recording system. Subjects were exposed to either the auditory and the visual stimulus alone or the combined auditory-visual...

  3. Hierarchical emergence of sequence sensitivity in the songbird auditory forebrain.

    Science.gov (United States)

    Ono, Satoko; Okanoya, Kazuo; Seki, Yoshimasa

    2016-03-01

    Bengalese finches (Lonchura striata var. domestica) generate more complex sequences in their songs than zebra finches. Because of this, we chose this species to explore the signal processing of sound sequence in the primary auditory forebrain area, field L, and in a secondary area, the caudomedial nidopallium (NCM). We simultaneously recorded activity from multiple single units in urethane-anesthetized birds. We successfully replicated the results of a previous study in awake zebra finches examining stimulus-specific habituation of NCM neurons to conspecific songs. Then, we used an oddball paradigm and compared the neural response to deviant sounds that were presented infrequently, with the response to standard sounds, which were presented frequently. In a single sound oddball task, two different song elements were assigned for the deviant and standard sounds. The response bias to deviant elements was larger in NCM than in field L. In a triplet sequence oddball task, two triplet sequences containing elements ABC and ACB were assigned as the deviant and standard. Only neurons in NCM that displayed broad-shaped spike waveforms had sensitivity to the difference in element order. Our results suggest the hierarchical processing of complex sound sequences in the songbird auditory forebrain. PMID:26864094

  4. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

Vincenzo Romei

    2011-09-01

There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent), or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of the flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
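The sensitivity index d' used above can be computed from hit and false-alarm counts; the counts in the example are illustrative, not the paper's data, and the log-linear correction is one common way to handle proportions of 0 or 1.

```python
from statistics import NormalDist

def d_prime(hits, misses, fas, crs):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    with a log-linear correction to keep both rates strictly in (0, 1)."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)

# e.g. a congruent condition with 45/50 hits vs 10/50 false alarms
# (illustrative numbers only)
print(round(d_prime(45, 5, 10, 40), 2))
```

A higher d' for the congruent than the incongruent condition, with criterion unchanged, is what distinguishes a genuine sensitivity change from response bias.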

  5. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    vertical coding through use of the spatial-musical association of response codes (SMARC) effect, where pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than visual stimuli. Neutral, non-pitch coded, audiovisual stimuli did not...... result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension was...... not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal...

  6. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    Science.gov (United States)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  7. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    Directory of Open Access Journals (Sweden)

    Mikkel Wallentin

    2016-01-01

    Full Text Available Klinefelter syndrome (47, XXY (KS is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49 responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors. One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  8. Long-latency auditory evoked potentials with verbal and nonverbal stimuli,

    Directory of Open Access Journals (Sweden)

    Sheila Jacques Oppitz

    2015-12-01

    Full Text Available ABSTRACT INTRODUCTION: Long-latency auditory evoked potentials represent the cortical activity related to attention, memory, and auditory discrimination skills. Acoustic signal processing occurs differently for verbal and nonverbal stimuli, influencing the latency and amplitude patterns. OBJECTIVE: To describe the latencies of the cortical potentials P1, N1, P2, N2, and P3, as well as P3 amplitude, with different speech stimuli and tone bursts, and to classify them in the presence and absence of these data. METHODS: A total of 30 subjects with normal hearing were assessed, aged 18-32 years, matched by gender. Stimuli were nonverbal (tone bursts: 1000 Hz, frequent; 4000 Hz, rare) and verbal (/ba/, frequent; /ga/, /da/, and /di/, rare). RESULTS: For the N2 component, the lowest latency found was 217.45 ms, for the BA/DI stimulus, and the highest was 256.5 ms. For the P3 component, the shortest latency was 298.7 ms, with the BA/GA stimuli, and the highest was 340 ms. For the P3 amplitude, there was no statistically significant difference among the different stimuli. For the latencies of components P1, N1, P2, N2, and P3, there were no statistical differences between genders, regardless of the stimuli used. CONCLUSION: There was a difference in the latency of potentials N2 and P3 among the stimuli employed, but no difference was observed for the P3 amplitude.

  9. Data Collection and Analysis Techniques for Evaluating the Perceptual Qualities of Auditory Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.

    1998-11-17

    This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
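
    The similarity ratings and sorting tasks described above are typically reduced to a stimulus-by-stimulus proximity matrix before techniques such as MDS or Pathfinder are applied. A minimal sketch of one common reduction, counting how often participants place two stimuli in the same pile (the stimulus names and sorts below are hypothetical):

```python
from itertools import combinations

def cooccurrence_similarity(sorts, stimuli):
    """Similarity = fraction of participants who placed a pair of
    stimuli in the same pile during a free sorting task."""
    sim = {pair: 0 for pair in combinations(sorted(stimuli), 2)}
    for piles in sorts:                       # one sort per participant
        for pile in piles:
            for pair in combinations(sorted(pile), 2):
                sim[pair] += 1
    n = len(sorts)
    return {pair: count / n for pair, count in sim.items()}

# Two hypothetical participants sorting four sounds into piles.
sorts = [
    [["beep", "chirp"], ["hum", "roar"]],
    [["beep", "chirp", "hum"], ["roar"]],
]
sim = cooccurrence_similarity(sorts, ["beep", "chirp", "hum", "roar"])
print(sim[("beep", "chirp")])  # 1.0 -> always sorted together
```

    The resulting matrix (or its complement, a distance matrix) is the standard input to MDS and Pathfinder analyses.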

  10. An Evaluation of a Stimulus Preference Assessment of Auditory Stimuli for Adolescents with Developmental Disabilities

    Science.gov (United States)

    Horrocks, Erin; Higbee, Thomas S.

    2008-01-01

    Previous researchers have used stimulus preference assessment (SPA) methods to identify salient reinforcers for individuals with developmental disabilities including tangible, leisure, edible and olfactory stimuli. In the present study, SPA procedures were used to identify potential auditory reinforcers and determine the reinforcement value of…

  11. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  12. SPET monitoring of perfusion changes in auditory cortex following mono- and multi-frequency stimuli

    International Nuclear Information System (INIS)

    In order to assess the relationship between auditory cortex perfusion and the frequency of acoustic stimuli, twenty normally-hearing subjects underwent cerebral SPET. Ten subjects received a multi-frequency stimulus (250-4000 Hz at 40 dB SL), while the other ten were stimulated with a 500 Hz pure tone at 40 dB SL. The prestimulation SPET was subtracted from the poststimulation study, and auditory cortex activation was expressed as percent increments. The contralateral cortex was the most active area with both multifrequency and monofrequency stimuli. A clear demonstration of a tonotopic distribution of acoustic stimuli in the auditory cortex was achieved. In addition, the accessory role played by homolateral acoustic areas was confirmed. The results of the present research support the hypothesis that brain SPET may be useful for obtaining reliable semiquantitative information on low-frequency hearing level in profoundly deaf patients. This may be achieved by comparing the extension of the cortical areas activated by high-intensity multifrequency stimuli. (orig.)
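
    The percent-increment measure used above, the post-stimulation value expressed as a percent increase over the pre-stimulation baseline, can be sketched as follows (the regional counts are hypothetical illustrations):

```python
def percent_increment(pre, post):
    """Activation as the percent increase of the post-stimulation
    value over the pre-stimulation baseline."""
    return 100.0 * (post - pre) / pre

# Hypothetical regional counts before and after acoustic stimulation:
# the contralateral cortex shows the larger increment.
contralateral = percent_increment(pre=1000, post=1180)
homolateral = percent_increment(pre=1000, post=1060)
print(contralateral, homolateral)  # 18.0 6.0
```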

  13. Category Variability Effect in Category Learning with Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Lee-Xieng Yang

    2014-10-01

    Full Text Available The category variability effect refers to the tendency to classify the midpoint item between two categories as belonging to the more variable category. This effect is regarded as evidence against exemplar models such as the GCM (Generalized Context Model) and as favoring rule models such as GRT (i.e., the decision bound model). Although this effect has been found in conceptual category learning, it is not often observed in perceptual category learning. To explain why the category variability effect is seldom reported in past studies, we propose two hypotheses. First, owing to a sequence effect, the midpoint item may be classified into different categories depending on the item that precedes it; when these inconsistent responses to the midpoint item are combined, no category variability effect emerges. Second, the effect may be concealed not by the combination of sequence effects across categorization conditions but by the combination of different categorization strategies. One experiment was conducted with single tones of different frequencies as stimuli. The collected data reveal a sequence effect. However, modeling results with the MAC model and the decision bound model support individual differences as the reason why no category variability effect occurs. Three groups were identified by their categorization strategies. Group 1 comprises rule users, who place the category boundary close to the low-variability category and hence produce a category variability effect. Group 2 adopts the MAC strategy, classifying the midpoint item into different categories depending on the preceding item. Group 3 classifies the midpoint item into the low-variability category, consistent with the predictions of both the decision bound model and the GCM. Nonetheless, our conclusion is that the category variability effect can be found in perceptual category learning, but it may be concealed in averaged data.
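
    For reference, the GCM classifies an item by its summed similarity to stored exemplars, with similarity decaying exponentially with distance. A minimal one-dimensional sketch (the stimulus values are hypothetical) in which the model assigns the midpoint item to the low-variability category, the prediction that the category variability effect contradicts:

```python
import math

def gcm_prob_a(item, exemplars_a, exemplars_b, c=1.0):
    """Generalized Context Model (1-D): P(A) is the summed
    exponential similarity to category-A exemplars, normalized
    against both categories. c is the specificity parameter."""
    sim = lambda x, e: math.exp(-c * abs(x - e))
    sa = sum(sim(item, e) for e in exemplars_a)
    sb = sum(sim(item, e) for e in exemplars_b)
    return sa / (sa + sb)

# Hypothetical tone values (arbitrary units): A is tightly clustered
# (low variability), B is spread out (high variability); 0.0 is the
# midpoint item between the two categories.
low_var = [-1.2, -1.0, -0.8]
high_var = [1.0, 3.0, 5.0]
p_a = gcm_prob_a(0.0, low_var, high_var)
print(round(p_a, 2))  # > 0.5 -> midpoint assigned to low-variability A
```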

  14. Effects of visual working memory on brain information processing of irrelevant auditory stimuli.

    Directory of Open Access Journals (Sweden)

    Jiagui Qu

    Full Text Available Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has rarely been investigated. In this study, the modulation of irrelevant auditory information by the brain during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate selective attention to an auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high WM load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low WM load task in any recorded brain region. Furthermore, no differences were found between the time latencies of N100-AEP troughs in WM states and rest states while performing either the high or low WM load task. These findings suggest that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.

  15. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  16. Bio-inspired fabrication of stimuli-responsive photonic crystals with hierarchical structures and their applications

    Science.gov (United States)

    Lu, Tao; Peng, Wenhong; Zhu, Shenmin; Zhang, Di

    2016-03-01

    When the constitutive materials of photonic crystals (PCs) are stimuli-responsive, the resultant PCs exhibit optical properties that can be tuned by the stimuli. This can be exploited for promising applications in colour displays, biological and chemical sensors, inks and paints, and many optically active components. However, the preparation of the required photonic structures is the first issue to be solved. In the past two decades, approaches such as microfabrication and self-assembly have been developed to incorporate stimuli-responsive materials into existing periodic structures for the fabrication of PCs, either as the initial building blocks or as the surrounding matrix. Generally, the materials that respond to thermal, pH, chemical, optical, electrical, or magnetic stimuli are either soft or aggregate, which is why the manufacture of three-dimensional hierarchical photonic structures with responsive properties is a great challenge. Recently, inspired by biological PCs in nature which exhibit both flexible and responsive properties, researchers have developed various methods to synthesize metals and metal oxides with hierarchical structures by using a biological PC as the template. This review will focus on the recent developments in this field. In particular, PCs with biological hierarchical structures that can be tuned by external stimuli have recently been successfully fabricated. These findings offer innovative insights into the design of responsive PCs and should be of great importance for future applications of these materials.

  17. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Directory of Open Access Journals (Sweden)

    Marc R. Kamke

    2014-06-01

    Full Text Available The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  18. Hierarchical photonic structured stimuli-responsive materials as high-performance colorimetric sensors.

    Science.gov (United States)

    Lu, Tao; Zhu, Shenmin; Chen, Zhixin; Wang, Wanlin; Zhang, Wang; Zhang, Di

    2016-05-21

    Hierarchical photonic structures in nature are of special interest because they can be used as templates for the fabrication of stimuli-responsive photonic crystals (PCs) with unique structures beyond man-made synthesis. Current stimuli-responsive PCs templated directly from natural PCs have shown a very weak response to external stimuli and poor durability, owing to the limitations of natural templates. Herein, we tackle this problem by chemically coating a functional polymer, polyacrylamide, onto butterfly wing scales, which have hierarchical photonic structures. As a result of the combination of the strong water-absorption properties of the polyacrylamide and the PC structures of the butterfly wing scales, the designed materials demonstrated excellent humidity-responsive properties and a tremendous colour change. The colour change is induced by a refractive-index change, which in turn is due to swelling of the polymer as the relative humidity changes. The butterfly wing scales also showed excellent durability, owing to the chemical bonds formed between the polymer and the wing scales. This synthesis strategy provides an avenue for promising applications of stimuli-responsive PCs with hierarchical structures. PMID:27128843

  19. Hierarchical computation in the canonical auditory cortical circuit

    OpenAIRE

    Atencio, Craig A.; Sharpee, Tatyana O.; Schreiner, Christoph E.

    2009-01-01

    Sensory cortical anatomy has identified a canonical microcircuit underlying computations between and within layers. This feed-forward circuit processes information serially from granular to supragranular and to infragranular layers. How this substrate correlates with an auditory cortical processing hierarchy is unclear. We recorded simultaneously from all layers in cat primary auditory cortex (AI) and estimated spectrotemporal receptive fields (STRFs) and associated nonlinearities. Spike-trig...

  20. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    Full Text Available BACKGROUND: It is well-known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition when arbitrary associations between visual and auditory stimuli must be learned. The studies tend to consider it as an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults to learn more effectively the arbitrary association between visual and auditory novel stimuli. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" learning one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learned more efficiently the arbitrary association between visual and auditory novel stimuli when the visual stimuli were explored with both vision and touch. The results are discussed from the perspective of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  1. Determination of hemispheric language dominance using functional MRI : comparison of visual and auditory stimuli

    International Nuclear Information System (INIS)

    To assess the difference between auditory and visual stimuli when determining hemispheric language dominance by using functional MRI. In ten healthy adult volunteers (8 right-handed, 1 left-handed, 1 ambidextrous), motor language activation in axial slices of the frontal lobe was mapped on a Siemens 1.5T Vision Plus system using single-shot EPI. Series of 120 consecutive images per section were acquired during three cycles of task activation and rest. During each activation, a series of four syllables was delivered by both a visual and an auditory method, and the volunteers were asked to mentally generate words starting with each syllable. In both inferior frontal gyri and whole frontal lobes, lateralization indices were calculated from the activated pixels. We determined the language-dominant hemisphere and compared the results of the visual and auditory methods. Seven right-handed persons were left-hemisphere dominant, and one left-handed and one ambidextrous person were right-hemisphere dominant. Five of nine persons demonstrated larger lateralization indices with the auditory method than with the visual method, while the remaining four showed larger lateralization indices with the visual method. No statistically significant difference was noted when comparing the results of the two methods (p > 0.05). When determining hemispheric language dominance using functional MRI, the two methods are equally appropriate.
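
    A lateralization index over activated pixel counts is commonly computed as (L - R) / (L + R); a minimal sketch assuming that formulation (the counts below are hypothetical, not the study's data):

```python
def lateralization_index(left, right):
    """A common formulation: (L - R) / (L + R), ranging from
    +1 (fully left-dominant) to -1 (fully right-dominant)."""
    return (left - right) / (left + right)

# Hypothetical activated-pixel counts in the left/right inferior
# frontal gyri for each stimulation method.
li_visual = lateralization_index(left=120, right=40)    # 0.5
li_auditory = lateralization_index(left=140, right=35)  # 0.6
print(li_visual, li_auditory)  # both positive -> left-hemisphere dominant
```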

  2. Auditory detection of non-speech and speech stimuli in noise: Native speech advantage.

    Science.gov (United States)

    Huo, Shuting; Tao, Sha; Wang, Wenjing; Li, Mingshuang; Dong, Qi; Liu, Chang

    2016-05-01

    Detection thresholds of Chinese vowels, Korean vowels, and a complex tone, with harmonic and noise carriers were measured in noise for Mandarin Chinese-native listeners. The harmonic index was calculated as the difference between detection thresholds of the stimuli with harmonic carriers and those with noise carriers. The harmonic index for Chinese vowels was significantly greater than that for Korean vowels and the complex tone. Moreover, native speech sounds were rated significantly more native-like than non-native speech and non-speech sounds. The results indicate that native speech has an advantage over other sounds in simple auditory tasks like sound detection. PMID:27250202

  3. Voluntary movement affects simultaneous perception of auditory and tactile stimuli presented to a non-moving body part.

    Science.gov (United States)

    Hao, Qiao; Ora, Hiroki; Ogawa, Ken-Ichiro; Ogata, Taiki; Miyake, Yoshihiro

    2016-01-01

    The simultaneous perception of multimodal sensory information has a crucial role for effective reactions to the external environment. Voluntary movements are known to occasionally affect simultaneous perception of auditory and tactile stimuli presented to the moving body part. However, little is known about spatial limits on the effect of voluntary movements on simultaneous perception, especially when tactile stimuli are presented to a non-moving body part. We examined the effect of voluntary movement on the simultaneous perception of auditory and tactile stimuli presented to the non-moving body part. We considered the possible mechanism using a temporal order judgement task under three experimental conditions: voluntary movement, where participants voluntarily moved their right index finger and judged the temporal order of auditory and tactile stimuli presented to their non-moving left index finger; passive movement; and no movement. During voluntary movement, the auditory stimulus needed to be presented before the tactile stimulus so that they were perceived as occurring simultaneously. This subjective simultaneity differed significantly from the passive movement and no movement conditions. This finding indicates that the effect of voluntary movement on simultaneous perception of auditory and tactile stimuli extends to the non-moving body part. PMID:27622584

  4. Suppression to visual, auditory and gustatory stimuli habituates normally in rats with excitotoxic lesions of the perirhinal cortex

    OpenAIRE

    Robinson, Jasper; Sanderson, David J.; Aggleton, John P.; Jenkins, Trisha A.

    2009-01-01

    In 3 habituation experiments, rats with excitotoxic lesions of the perirhinal cortex were found to be indistinguishable from control rats. Two of the habituation experiments examined the habituation of suppression of responding on an appetitive, instrumental baseline. One of those experiments used stimuli selected from the visual modality (lights), the other used auditory stimuli. The third experiment examined habituation of suppression of novel-flavored water consumption. In contrast to the ...

  5. Incidental categorization of spectrally complex non-invariant auditory stimuli in a computer game task

    Science.gov (United States)

    Wade, Travis; Holt, Lori L.

    2005-10-01

    This study examined perceptual learning of spectrally complex nonspeech auditory categories in an interactive multi-modal training paradigm. Participants played a computer game in which they navigated through a three-dimensional space while responding to animated characters encountered along the way. Characters' appearances in the game correlated with distinctive sound category distributions, exemplars of which repeated each time the characters were encountered. As the game progressed, the speed and difficulty of required tasks increased and characters became harder to identify visually, so quick identification of approaching characters by sound patterns was, although never required or encouraged, of gradually increasing benefit. After 30 min of play, participants performed a categorization task, matching sounds to characters. Despite not being informed of audio-visual correlations, participants exhibited reliable learning of these patterns at posttest. Categorization accuracy was related to several measures of game performance and category learning was sensitive to category distribution differences modeling acoustic structures of speech categories. Category knowledge resulting from the game was qualitatively different from that gained from an explicit unsupervised categorization task involving the same stimuli. Results are discussed with respect to information sources and mechanisms involved in acquiring complex, context-dependent auditory categories, including phonetic categories, and to multi-modal statistical learning.

  6. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    Science.gov (United States)

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  7. Distributed functions of detection and discrimination of vibrotactile stimuli in the hierarchical human somatosensory system

    Directory of Open Access Journals (Sweden)

    Junsuk Kim

    2015-01-01

    According to the hierarchical view of the human somatosensory network, somatic sensory information is relayed from the thalamus to primary somatosensory cortex (S1), and then distributed to adjacent cortical regions to perform further perceptual and cognitive functions. Although a number of neuroimaging studies have examined neuronal activity correlated with tactile stimuli, comparatively less attention has been devoted to understanding how vibrotactile stimulus information is processed in the hierarchical somatosensory cortical network. To explore the hierarchical perspective of tactile information processing, we studied two cases: (a) discrimination between the locations of finger stimulation, and (b) detection of stimulation against no stimulation on individual fingers, using both standard general linear model (GLM) and searchlight multi-voxel pattern analysis (MVPA) techniques. These two cases were studied on the same data set, resulting from a passive vibrotactile stimulation experiment. Our results showed that vibrotactile stimulus locations on fingers could be discriminated from measurements of human functional magnetic resonance imaging (fMRI). In particular, in case (a) we observed activity in contralateral posterior parietal cortex (PPC) and supramarginal gyrus (SMG) but not in S1, while in case (b) we found significant cortical activations in S1 but not in PPC and SMG. These discrepant observations suggest functional specialization with regard to vibrotactile stimulus locations and, in particular, hierarchical information processing in the human somatosensory cortical areas. Our findings moreover support the general understanding that S1 is the main sensory receptive area for the sense of touch, whereas adjacent cortical regions (i.e., PPC and SMG) are in charge of higher-level processing and may thus contribute most to the successful classification between stimulated finger locations.
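The searchlight MVPA logic described above can be sketched in miniature: a leave-one-run-out nearest-centroid decoder applied to simulated voxel patterns. Everything here (trial counts, voxel counts, effect size, the classifier choice) is invented for illustration; the study ran such decoding in searchlight spheres across the whole brain, not on a single synthetic region.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_trials, n_vox = 6, 10, 50

# Simulated voxel patterns: two stimulation sites (finger 0 vs. finger 1)
# encoded as opposite multivariate patterns plus Gaussian noise.
site_pattern = rng.normal(0.0, 0.8, n_vox)
X, y, run_id = [], [], []
for run in range(n_runs):
    for site in (0, 1):
        for _ in range(n_trials):
            noise = rng.normal(0.0, 1.0, n_vox)
            X.append(noise + (site_pattern if site == 1 else -site_pattern))
            y.append(site)
            run_id.append(run)
X, y, run_id = np.asarray(X), np.asarray(y), np.asarray(run_id)

def decode_accuracy(X, y, run_id):
    """Leave-one-run-out nearest-centroid classification accuracy."""
    correct = 0
    for run in np.unique(run_id):
        train, test = run_id != run, run_id == run
        c0 = X[train & (y == 0)].mean(axis=0)  # class centroids from training runs
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = (np.linalg.norm(X[test] - c1, axis=1)
                < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
        correct += int((pred == y[test]).sum())
    return correct / len(y)

accuracy = decode_accuracy(X, y, run_id)
```

Above-chance accuracy (here well above 0.5) is the MVPA criterion for stimulus-location information being present in a region; in a searchlight analysis this test is simply repeated in a small sphere centered on every voxel.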

  8. Effect of complex treatment using visual and auditory stimuli on the symptoms of attention deficit/hyperactivity disorder in children.

    Science.gov (United States)

    Park, Mi-Sook; Byun, Ki-Won; Park, Yong-Kyung; Kim, Mi-Han; Jung, Sung-Hwa; Kim, Hong

    2013-04-01

    We investigated the effects of complex treatment using visual and auditory stimuli on the symptoms of attention deficit/hyperactivity disorder (ADHD) in children. Forty-seven male children (7-13 yr old), who were clinically diagnosed with ADHD at the Balance Brain Center in Seoul, Korea, were included in this study. The complex treatment consisted of visual and auditory stimuli, core muscle exercise, targeting ball exercise, ocular motor exercise, and visual motor integration. All subjects completed the complex treatment for 60 min/day, 2-3 times/week for more than 12 weeks. Data regarding visual and auditory reaction time and cognitive function were obtained using the Neurosync program, Stroop Color-Word Test, and test of nonverbal intelligence (TONI) at pre- and post-treatment. The complex treatment significantly decreased the total reaction time, while it increased the number of combo actions in response to visual and auditory stimuli. Stroop color, word, and color-word scores were significantly increased at post-treatment compared to the scores at pretreatment. These results suggest that complex treatment using visual and auditory stimuli may be an effective ADHD intervention. PMID:24278878

  9. Directionality of auditory nerve fiber responses to pure tone stimuli in the grassfrog, Rana temporaria. I. Spike rate responses

    DEFF Research Database (Denmark)

    Jørgensen, M B; Christensen-Dalsgaard, J

    1997-01-01

    We studied the directionality of spike rate responses of auditory nerve fibers of the grassfrog, Rana temporaria, to pure tone stimuli. All auditory fibers showed spike rate directionality. The strongest directionality was seen at low frequencies (200-400 Hz), where the spike rate could change by...

  10. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with h

  11. Slow wave changes in amygdala to visual, auditory, and social stimuli following lesions of the inferior temporal cortex in squirrel monkey (Saimiri sciureus).

    Science.gov (United States)

    Kling, A S; Lloyd, R L; Perryman, K M

    1987-01-01

    Radiotelemetry of slow wave activity of the amygdala was recorded under a variety of conditions. Power, and the percentage of power in the delta band, increased in response to stimulation. Recordings of monkey vocalizations and slides of ethologically relevant, natural objects produced a greater increase in power than did control stimuli. The responses to auditory stimuli increased when these stimuli were presented in an unrestrained, group setting, yet the responses to the vocalizations remained greater than those following control stimuli. Both the natural auditory and visual stimuli produced a reliable hierarchy with regard to the magnitude of response. Following lesions of inferior temporal cortex, these two hierarchies are disrupted, especially in the auditory domain. Further, these same stimuli, when presented after the lesion, produced a decrease, rather than an increase, in power. Nevertheless, the power recorded from the natural stimuli was still greater than that recorded from control stimuli in that the former produced less of a decrease in power, following the lesion, than did the latter. These data, in conjunction with a parallel report on evoked potentials in the amygdala, before and after cortical lesions, lead us to conclude that sensory information, particularly auditory, available to the amygdala, following the lesion, is substantially the same, and that it is the interpretation of this information, by the amygdala, which is altered by the cortical lesion. PMID:3566692

  12. Auditory evoked potentials in the auditory system of a beluga whale Delphinapterus leucas to prolonged sound stimuli.

    Science.gov (United States)

    Popov, Vladimir V; Sysueva, Evgenia V; Nechaev, Dmitry I; Rozhnov, Vyatcheslav V; Supin, Alexander Ya

    2016-03-01

    The effects of prolonged (up to 1500 s) sound stimuli (tone pip trains) on evoked potentials (the rate following response, RFR) were investigated in a beluga whale. The stimuli (rhythmic tone pips) were of frequencies of 45, 64, and 90 kHz at levels from 20 to 60 dB above threshold. Two experimental protocols were used: short- and long-duration. For the short-duration protocol, the stimuli were 500-ms-long pip trains that repeated at a rate of 0.4 trains/s. For the long-duration protocol, the stimuli were continuous pip successions lasting up to 1500 s. The RFR amplitude gradually decreased by a factor of three to seven from 10 ms to 1500 s of stimulation. The decrease in response amplitude during stimulation was approximately proportional to the initial (start-of-stimulation) response amplitude. Therefore, even for low stimulus levels (down to 20 dB above the baseline threshold) the response was never suppressed completely. The RFR amplitude decay that occurred during stimulation could be satisfactorily approximated by a combination of two exponents with time constants of 30-80 ms and 3.1-17.6 s. The role of adaptation in the described effects and the impact of noise on the acoustic orientation of odontocetes are discussed. PMID:27036247
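The two-exponential decay reported above can be written as A(t) = a₁e^(−t/τ₁) + a₂e^(−t/τ₂) + a∞. The sketch below plugs in illustrative values: the time constants are taken from the ranges in the abstract, but the amplitudes and the asymptote are assumptions chosen to show a response that falls by a factor of three to seven yet, as the abstract stresses, is never suppressed completely.

```python
import math

# Time constants from the reported ranges (30-80 ms and 3.1-17.6 s);
# amplitudes and asymptote are illustrative assumptions, normalized to A(0) = 1.
TAU1, TAU2 = 0.05, 10.0        # seconds
A1, A2, A_INF = 0.4, 0.3, 0.3

def rfr_amplitude(t):
    """Normalized rate-following-response amplitude after t seconds of stimulation."""
    return A1 * math.exp(-t / TAU1) + A2 * math.exp(-t / TAU2) + A_INF

initial = rfr_amplitude(0.0)      # 1.0 by construction
final = rfr_amplitude(1500.0)     # both exponentials have fully decayed
decay_factor = initial / final    # falls in the reported 3-7x range here
```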

  13. Sex differences in the representation of call stimuli in a songbird secondary auditory area.

    Science.gov (United States)

    Giret, Nicolas; Menardy, Fabien; Del Negro, Catherine

    2015-01-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer, and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird's own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of information about the

  14. Sex differences in the representation of call stimuli in a songbird secondary auditory area

    Directory of Open Access Journals (Sweden)

    Nicolas Giret

    2015-10-01

    Understanding how communication sounds are encoded in the central auditory system is critical to deciphering the neural bases of acoustic communication. Songbirds use learned or unlearned vocalizations in a variety of social interactions. They have telencephalic auditory areas specialized for processing natural sounds and considered as playing a critical role in the discrimination of behaviorally relevant vocal sounds. The zebra finch, a highly social songbird species, forms lifelong pair bonds. Only male zebra finches sing. However, both sexes produce the distance call when placed in visual isolation. This call is sexually dimorphic, is learned only in males and provides support for individual recognition in both sexes. Here, we assessed whether auditory processing of distance calls differs between paired males and females by recording spiking activity in a secondary auditory area, the caudolateral mesopallium (CLM), while presenting the distance calls of a variety of individuals, including the bird itself, the mate, familiar and unfamiliar males and females. In males, the CLM is potentially involved in auditory feedback processing important for vocal learning. Based on both the analyses of spike rates and temporal aspects of discharges, our results clearly indicate that call-evoked responses of CLM neurons are sexually dimorphic, being stronger, lasting longer and conveying more information about calls in males than in females. In addition, how auditory responses vary among call types differs between sexes. In females, response strength differs between familiar male and female calls. In males, temporal features of responses reveal a sensitivity to the bird’s own call. These findings provide evidence that sexual dimorphism occurs in higher-order processing areas within the auditory system. They suggest a sexual dimorphism in the function of the CLM, contributing to transmit information about the self-generated calls in males and to storage of

  15. The sensory channel of presentation alters subjective ratings and autonomic responses towards disgusting stimuli -Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli-

    OpenAIRE

    Ilona Croy; Kerstin Laqua; Frank Suess; Peter Joraschky; Tjalf Ziemssen; Thomas Hummel

    2013-01-01

    Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause another disgust experience than sounds, odors or tactile stimuli. Therefore disgust experience evoked by four different sensory channels was compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile and olfactory channel. Ratings of e...

  16. The sensory channel of presentation alters subjective ratings and autonomic responses toward disgusting stimuli – Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli

    OpenAIRE

    Croy, Ilona; Laqua, Kerstin; Süß, Frank; Joraschky, Peter; Ziemssen, Tjalf; Hummel, Thomas

    2014-01-01

    Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause another disgust experience than sounds, odors, or tactile stimuli. Therefore, disgust experience evoked by four different sensory channels was compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile, and olfactory channel. Ratings ...

  17. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich;

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....

  18. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli.

    Science.gov (United States)

    Denham, Susan; Bõhm, Tamás M; Bendixen, Alexandra; Szalárdy, Orsolya; Kocsis, Zsuzsanna; Mill, Robert; Winkler, István

    2014-01-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the "ABA-" auditory streaming paradigm we trained listeners until they could reliably recognize all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in human perception. 
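A toy formalization of the embedded-pattern idea in the "ABA-" streaming paradigm: the length-4 patterns available in the repeating cycle are its cyclic rotations, and further patterns arise when one tone stream is perceptually segregated into the background (heard as silence). This enumeration only illustrates the combinatorics; it is not claimed to be the study's actual trained pattern set.

```python
CYCLE = "ABA-"  # one period: tone A, tone B, tone A, silent gap

# Every length-4 window of the infinite repetition is a cyclic rotation.
rotations = {CYCLE[i:] + CYCLE[:i] for i in range(len(CYCLE))}

def keep_only(pattern, tones):
    """Replace events outside `tones` with silence, mimicking a percept
    in which the other stream is segregated out of the foreground."""
    return "".join(ch if ch in tones else "-" for ch in pattern)

# Patterns heard when only the A stream, or only the B stream, is foreground.
segregated = ({keep_only(p, {"A"}) for p in rotations}
              | {keep_only(p, {"B"}) for p in rotations})

all_patterns = rotations | segregated
```

Even this crude scheme yields ten candidate patterns rather than the two (Integrated vs. Segregated) traditionally assumed, which is the point the training manipulation exploits.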

  19. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    Directory of Open Access Journals (Sweden)

    Susan Denham

    2014-02-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the ‘ABA-’ auditory streaming paradigm we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e. the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which by competing for dominance give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in

  20. The sensory channel of presentation alters subjective ratings and autonomic responses towards disgusting stimuli -Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli-

    Directory of Open Access Journals (Sweden)

    Ilona Croy

    2013-09-01

    Disgust causes specific reaction patterns, observable in mimic responses and body reactions. Most research on disgust deals with visual stimuli. However, pictures may cause another disgust experience than sounds, odors or tactile stimuli. Therefore disgust experience evoked by four different sensory channels was compared. A total of 119 participants received 3 different disgusting and one control stimulus, each presented through the visual, auditory, tactile and olfactory channel. Ratings of evoked disgust as well as responses of the autonomic nervous system (heart rate, skin conductance level, systolic blood pressure) were recorded, and the effects of stimulus labeling and of repeated presentation were analyzed. Ratings suggested that disgust could be evoked through all senses; they were highest for visual stimuli. However, autonomic reactions toward disgusting stimuli differed according to the channel of presentation. In contrast to the others, olfactory disgust stimuli provoked a strong decrease of systolic blood pressure. Additionally, labeling enhanced disgust ratings and autonomic reactions for olfactory and tactile, but not for visual and auditory stimuli. Repeated presentation indicated that participants’ disgust ratings diminished for all but olfactory disgust stimuli. Taken together, we argue that the sensory channel through which a disgust reaction is evoked matters.

  1. Musical Brains. A study of evoked musical sensations without external auditory stimuli. Preliminary report of three cases

    International Nuclear Information System (INIS)

    Background: There are individuals, usually musicians, who are seemingly able to evoke musical sensations without external auditory stimuli. However, to date there is no available evidence to determine whether it is feasible to have musical sensations without using external sensory receptors, or whether there is a biological substrate to these sensations. Study design: Two single photon emission computerized tomography (SPECT) evaluations with [99mTc]-HMPAO were conducted in each of three female musicians. One was done under basal conditions (without evoking) and the other while evoking these sensations. Results: In the NeuroSPECT studies of the musicians who were tested while evoking a musical composition, there was a significant increase in perfusion above the normal mean in the right and left hemispheres in Brodmann's areas 9 and 8 (frontal executive areas) and in area 40 on the left side (auditory center). However, under basal conditions there was no hyperperfusion of areas 9, 8, 39 and 40. In one case hyperperfusion was found under basal conditions in area 45; however, it was weaker than while evoking. Conclusions: These findings are suggestive of a biological substrate to the process of evoking musical sensations.

  2. Behavioural evidence of a dissociation between voice gender categorization and phoneme categorization using auditory morphed stimuli

    Directory of Open Access Journals (Sweden)

    Cyril R Pernet

    2014-01-01

    Both voice gender and speech perception rely on neuronal populations located in the peri-sylvian areas. However, whilst functional imaging studies suggest a left versus right hemisphere and anterior versus posterior dissociation between voice and speech categorization, psycholinguistic studies on talker variability suggest that these two processes (voice and speech categorization) share common mechanisms. In this study, we investigated the categorical perception of voice gender (male vs. female) and phonemes (/pa/ vs. /ta/) using the same stimulus continua generated by morphing. This allowed the investigation of behavioural differences while controlling acoustic characteristics, since the same stimuli were used in both tasks. Despite a higher acoustic dissimilarity between items during the phoneme categorization task (a male and a female voice producing the same phonemes) than during the gender task (the same person producing 2 phonemes), results showed that speech information is processed much faster than voice information. In addition, f0 or timbre equalization did not affect RT, which disagrees with the classical psycholinguistic models in which voice information is stripped away or normalized to access phonetic content. Also, despite similar response (percentage) and perceptual (d′) curves, a reverse correlation analysis on acoustic features revealed, as expected, that the formant frequencies of the consonant distinguished stimuli in the phoneme task, but that only the vowel formant frequencies distinguished stimuli in the gender task. This second set of results thus also disagrees with models postulating that the same acoustic information is used for voice and speech. Altogether, these results suggest that voice gender categorization and phoneme categorization are dissociated at an early stage on the basis of different enhanced acoustic features that are diagnostic to the task at hand.
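The reverse-correlation logic used above can be sketched as follows: simulate trials carrying two acoustic features, let the listener's categorical response depend on only one of them, and recover the diagnostic feature as the difference in mean feature values between the two response categories. The features, effect sizes, and trial counts below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 2000

# Two standardized acoustic features per trial; which features these would be
# (e.g., consonant vs. vowel formant frequencies) is purely illustrative.
features = rng.normal(0.0, 1.0, (n_trials, 2))

# Simulated listener: the binary response depends on feature 0 only,
# plus internal decision noise.
responses = (features[:, 0] + rng.normal(0.0, 0.5, n_trials) > 0).astype(int)

# Reverse correlation: mean feature profile for one response minus the other.
# The diagnostic feature yields a large kernel weight; the ignored one stays near zero.
kernel = (features[responses == 1].mean(axis=0)
          - features[responses == 0].mean(axis=0))
```

Applied to the two tasks on the same stimuli, this kind of kernel is what reveals that different acoustic features are diagnostic for phoneme versus gender categorization.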

  3. Effect of heroin-conditioned auditory stimuli on cerebral functional activity in rats

    International Nuclear Information System (INIS)

    Cerebral functional activity was measured as changes in distribution of the free fatty acid [1-14C]octanoate in autoradiograms obtained from rats during brief presentation of a tone previously paired to infusions of heroin or saline. Rats were trained in groups of three consisting of one heroin self-administering animal and two animals receiving yoked infusions of heroin or saline. Behavioral experiments in separate groups of rats demonstrated that these training parameters impart secondary reinforcing properties to the tone for animals self-administering heroin, while the tone remains behaviorally neutral in yoked-infusion animals. The optical densities of thirty-seven brain regions were normalized to a relative index for comparisons between groups. Previous pairing of the tone to heroin infusions irrespective of behavior (yoked-heroin vs. yoked-saline groups) produced functional activity changes in fifteen brain areas. In addition, nineteen regional differences in octanoate labeling density were evident when animals previously trained to self-administer heroin were compared to those receiving yoked-heroin infusions, while twelve differences were noted between the yoked-saline and self-administration groups. These functional activity changes are presumed related to the secondary reinforcing capacity of the tone acquired by association with heroin, and may identify neural substrates involved in auditory-signalled conditioning of positive reinforcement to opiates

  4. Pulse and entrainment to non-isochronous auditory stimuli: the case of north Indian alap.

    Science.gov (United States)

    Will, Udo; Clayton, Martin; Wertheim, Ira; Leante, Laura; Berg, Eric

    2015-01-01

    Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one's internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects' internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting 'social' effects of temporally regular music. PMID:25849357

  5. Pulse and entrainment to non-isochronous auditory stimuli: the case of north Indian alap.

    Directory of Open Access Journals (Sweden)

    Udo Will

    Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one's internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects' internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting 'social' effects of temporally regular music.

  6. Hierarchical and serial processing in the spatial auditory cortical pathway is degraded by natural aging

    OpenAIRE

    Juarez-Salinas, Dina L.; Engle, James R.; Navarro, Xochi O.; Recanzone, Gregg H.

    2010-01-01

    The compromised abilities to localize sounds and to understand speech are two hallmark deficits in aged individuals. The auditory cortex is necessary for these processes, yet we know little about how normal aging affects these early cortical fields. In this study, we recorded the spatial tuning of single neurons in primary (area A1) and secondary (area CL) auditory cortical areas in young and aged alert rhesus macaques. We found that the neurons of aged animals had greater spontaneous and dri...

  7. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.

  8. Laterality of pain in migraine distinguished by interictal rates of habituation of electrodermal responses to visual and auditory stimuli.

    OpenAIRE

    Gruzelier, J. H.; Nicolaou, T.; Connolly, J. F.; Peatfield, R. C.; Davies, P. T.; Clifford-Rose, F.

    1987-01-01

    Support is provided for a primary neural factor in migraine by studies in autonomic responsiveness to sensory stimuli in relation to the laterality of pain. Migraineurs with consistently lateralised headaches were found in two studies to exhibit extremes of autonomic responsiveness to sensory stimuli during the interictal phase. The direction of responsiveness was predictive of the laterality of pain; left-sided pain was associated with under-responsiveness and fast habituation, right-sided p...

  9. Arousal-related P3a to novel auditory stimuli is abolished by moderately low alcohol dose

    OpenAIRE

    Marinkovic, Ksenija; Halgren, Eric; Maltzman, Irving

    2001-01-01

    Concurrent measures of event-related potentials (ERPs) and skin conductance responses were obtained in an auditory oddball task consisting of rare target, rare non-signal unique novel and frequent standard tones. Twelve right-handed male social drinkers participated in all four cells of the balanced placebo design in which effects of beverage and instructions as to the beverage content (expectancy) were independently manipulated. The beverage contained either juice only, or vodka mixed with j...

  10. An auditory multiclass brain-computer interface with natural stimuli: usability evaluation with healthy participants and a motor impaired end user

    Directory of Open Access Journals (Sweden)

    Nadine Simon

    2015-01-01

    Full Text Available Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or who have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli have been suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code the rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on two consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min) that increased to 90% (4.23 bits/min) on the second day. Spelling accuracy for the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in both sessions, the accuracies of the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracy. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end users with disease.
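    Accuracy and bits/min figures like these are conventionally related through the Wolpaw information-transfer-rate formula, a standard BCI metric. The 25-class matrix size and the selection rate used below are illustrative assumptions, not values taken from the study:

```python
import math

def bits_per_selection(n_classes, p):
    """Wolpaw information transfer per selection, in bits,
    for an n-class speller with selection accuracy p (0 < p <= 1)."""
    if p == 1.0:
        return math.log2(n_classes)
    return (math.log2(n_classes) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

def itr_bits_per_min(n_classes, p, selections_per_min):
    """Information transfer rate given the number of selections per minute."""
    return bits_per_selection(n_classes, p) * selections_per_min

# Illustrative: a hypothetical 25-letter matrix at the reported accuracies
low = bits_per_selection(25, 0.76)
high = bits_per_selection(25, 0.90)
```

    Higher accuracy yields more bits per selection (up to log2 N at perfect accuracy), which is why the healthy group's rise from 76% to 90% accuracy also raises the reported bit rate.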

  11. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth Hendrich

    2012-08-01

    Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, it has not yet been investigated whether the addition of a choice reaction time task influences temporal-order judgment. Moreover, it is not known at what point during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does influence temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  12. Electrophysiological changes elicited by auditory stimuli given a positive or negative value: a study comparing anhedonic with hedonic subjects.

    Science.gov (United States)

    Pierson, A; Ragot, R; Ripoche, A; Lesevre, N

    1987-07-01

    The present experiment investigates in 'normal' subjects the relationship between personality characteristics (anhedonia versus hedonia) and the influence of the affective value of acoustic stimuli (positive, negative, neutral) on various electrophysiological indices reflecting either tonic activation or phasic arousal (EEG power spectra; contingent negative variation, CNV; heart rate; skin potential responses, SPR) as well as on behavioural indices (reaction time, RT). Eighteen subjects were divided into two groups according to their scores on two self-rating questionnaires, Chapman's Physical Anhedonia Scale (PAS) and the Beck-Weissman Dysfunctional Attitude Scale (DAS), which quantifies cognitive distortions presumed to constitute high risk for depression: 9 with high scores on both scales formed the A group (Anhedonic-dysfunctional), and 9 with low scores on both scales the H group (Hedonic-adapted). The electrophysiological indices were recorded in 3 situations: the first was a classical CNV paradigm with a motor reaction time task in which one of 3 tones of different pitch served as the warning stimulus S1; during the second, conditioning phase, two of these tones were associated with either a success (and reward) or a failure (and punishment) during a memory task in order to make them acquire either a positive or a negative affective value; the third situation consisted of repeating the first CNV paradigm in order to test the effect of the positive and negative stimuli versus the neutral one on RTs and electrophysiological data. Significant between-group differences were found in tonic activation as well as phasic arousal indices from the very beginning of the experiment, when all stimuli were neutral, the anhedonics exhibiting higher activation and arousal than the hedonics at the cortical (increased CNV amplitude, increased power in the beta frequency band), cardiovascular (higher heart rate habituating more slowly) and

  13. Representations of modality-specific affective processing for visual and auditory stimuli derived from functional magnetic resonance imaging data.

    Science.gov (United States)

    Shinkareva, Svetlana V; Wang, Jing; Kim, Jongwan; Facciani, Matthew J; Baucom, Laura B; Wedell, Douglas H

    2014-07-01

    There is converging evidence that people rapidly and automatically encode affective dimensions of objects, events, and environments that they encounter in the normal course of their daily routines. An important research question is whether affective representations differ with sensory modality. This research examined the nature of the dependency of affect on sensory modality at a whole-brain level of analysis in an incidental affective processing paradigm. Participants were presented with picture and sound stimuli that differed in positive or negative valence in an event-related functional magnetic resonance imaging experiment. Global statistical tests, applied at the level of the individual, demonstrated significant sensitivity to valence within modality, but not to valence across modalities. Modality-general and modality-specific valence hypotheses predict distinctly different multidimensional patterns of the stimulus conditions. Examination of a lower-dimensional representation of the data demonstrated separable dimensions for valence processing within each modality. These results provide support for modality-specific valence processing in an incidental affective processing paradigm at a whole-brain level of analysis. Future research should further investigate how stimulus-specific emotional decoding may be mediated by the physical properties of the stimuli. PMID:24302696

  14. Influence of selective attention on implicit learning with auditory stimuli

    Institute of Scientific and Technical Information of China (English)

    李秀君; 石文典

    2016-01-01

    Implicit learning refers to people's tendency to acquire complex regularities or patterns without intention or awareness (Reber, 1989). Given that regularities are acquired without intention, and largely without consciousness, implicit learning is often considered to occur without attention. The processes responsible for such learning were once contrasted with a selective intentional "system" (Guo et al., 2013; Jiang & Leung, 2005). However, more recent research shows that implicit learning processes are in fact highly selective (Eitam, Schul, & Hassin, 2009; Eitam et al., 2013; Tanaka, Kiyokawa, Yamada, Dienes, & Shigemasu, 2008; Weiermann & Meier, 2012). It is therefore necessary to explore further the role of attention in implicit learning. So far, all previous Artificial Grammar Learning (AGL) studies have used visual stimuli, so it remains unclear whether AGL depends on the presence of a visual regularity. To investigate the generality of the effect of selective attention on AGL, we extended the experimental materials to auditory stimuli. Ninety college students completed an artificial grammar learning task in which letter sequences and digit sequences governed by different rules were presented simultaneously to the two ears (dichotic listening), and acquisition of the rules of the attended and unattended sequences was assessed. Only the rules of the attended sequences were acquired; the rules of the unattended sequences were not. These results indicate that, in the artificial grammar learning paradigm, implicit learning occurs only for the attended stimulus dimension, and that the effect of selective attention on implicit learning holds across modalities, applying to auditory as well as visual stimuli.

  15. Signaled two-way avoidance learning using electrical stimulation of the inferior colliculus as negative reinforcement: effects of visual and auditory cues as warning stimuli

    Directory of Open Access Journals (Sweden)

    A.C. Troncoso

    1998-03-01

    Full Text Available The inferior colliculus is a primary relay for the processing of auditory information in the brainstem. The inferior colliculus is also part of the so-called brain aversion system, as animals learn to switch off electrical stimulation of this structure. The purpose of the present study was to determine whether associative learning occurs between the aversion induced by electrical stimulation of the inferior colliculus and visual and auditory warning stimuli. Rats implanted with electrodes in the central nucleus of the inferior colliculus were placed inside an open field and thresholds for the escape response to electrical stimulation of the inferior colliculus were determined. The rats were then placed inside a shuttle box and submitted to a two-way avoidance paradigm. Electrical stimulation of the inferior colliculus at the escape threshold (98.12 ± 6.15 µA, peak-to-peak) was used as negative reinforcement, and light or tone as the warning stimulus. Each session consisted of 50 trials and was divided into two segments of 25 trials in order to determine the learning rate of the animals during the sessions. The rats learned to avoid the inferior colliculus stimulation when light was used as the warning stimulus (13.25 ± 0.60 s and 8.63 ± 0.93 s for latencies, and 12.5 ± 2.04 and 19.62 ± 1.65 for frequencies, in the first and second halves of the sessions, respectively; P < 0.05 in both cases). Taken together, the present results suggest that rats learn to avoid inferior colliculus stimulation when light is used as the warning stimulus. However, this learning does not occur when the neutral stimulus used is an acoustic one. Electrical stimulation of the inferior colliculus may disturb transmission of the signal of the to-be-conditioned stimulus from the inferior colliculus to higher brain structures such as the amygdala.

  16. Cortical Suppression to Delayed Self-Initiated Auditory Stimuli in Schizotypy: Neurophysiological Evidence for a Continuum of Psychosis.

    Science.gov (United States)

    Oestreich, Lena K L; Mifsud, Nathan G; Ford, Judith M; Roach, Brian J; Mathalon, Daniel H; Whitford, Thomas J

    2016-01-01

    Schizophrenia patients have been shown to exhibit subnormal levels of electrophysiological suppression to self-initiated, button press elicited sounds. These self-suppression deficits have been shown to improve following the imposition of a subsecond delay between the button press and the evoked sound. The current study aimed to investigate whether nonclinical individuals who scored highly on the personality dimension of schizotypy would exhibit similar patterns of self-suppression abnormalities to those exhibited in schizophrenia. Thirty-nine nonclinical individuals scoring above the median (High Schizotypy) and 41 individuals scoring below the median (Low Schizotypy) on the Schizotypal Personality Questionnaire (SPQ) underwent electroencephalographic recording. The amplitude of the N1-component was calculated while participants (1) listened to tones initiated by a willed button press and played back with varying delay periods between the button press and the tone (Active conditions) and (2) passively listened to a series of tones (Listen condition). N1-suppression was calculated by subtracting the amplitude of the N1-component of the auditory evoked potential in the Active condition from that of the Listen condition, while controlling for the activity evoked by the button press per se. The Low Schizotypy group exhibited significantly higher levels of N1-suppression to undelayed tones compared to the High Schizotypy group. Furthermore, while N1-suppression was found to decrease linearly with increasing delays between the button press and the tone in the Low Schizotypy group, this was not the case in the High Schizotypy group. The findings of this study suggest that nonclinical, highly schizotypal individuals exhibit subnormal levels of N1-suppression to undelayed self-initiated tones and an abnormal pattern of N1-suppression to delayed self-initiated tones. To the extent that these results are similar to those previously reported in patients with schizophrenia
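    The N1-suppression measure described (Listen-condition N1 minus motor-corrected Active-condition N1) can be sketched as follows. The 80-120 ms window and the synthetic waveforms are assumptions for illustration, not the study's exact parameters:

```python
import numpy as np

def n1_amplitude(erp, times, window=(0.08, 0.12)):
    """Peak-negative amplitude within an assumed N1 window (seconds)."""
    mask = (times >= window[0]) & (times <= window[1])
    return float(erp[mask].min())

def n1_suppression(active_erp, listen_erp, motor_erp, times):
    """Listen N1 minus Active N1, after subtracting the ERP evoked by
    the button press alone from the Active condition."""
    corrected_active = active_erp - motor_erp
    return n1_amplitude(listen_erp, times) - n1_amplitude(corrected_active, times)

# Synthetic example: the self-initiated (Active) tone evokes a smaller N1
times = np.linspace(0.0, 0.3, 301)
listen = -5.0 * np.exp(-((times - 0.1) / 0.01) ** 2)   # N1 of -5 uV
active = -3.0 * np.exp(-((times - 0.1) / 0.01) ** 2)   # suppressed N1 of -3 uV
motor = np.zeros_like(times)                           # no button-press artifact here
suppression = n1_suppression(active, listen, motor, times)
```

    A more negative value here means the passively heard tone evoked a larger N1 than the self-initiated one, i.e., suppression was present.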

  17. Altered processing of acoustic stimuli during sleep: reduced auditory activation and visual deactivation detected by a combined fMRI/EEG study.

    Science.gov (United States)

    Czisch, Michael; Wetter, Thomas C; Kaufmann, Christian; Pollmächer, Thomas; Holsboer, Florian; Auer, Dorothee P

    2002-05-01

    Although there is evidence that acoustic stimuli are processed differently during sleep and wakefulness, little is known about the underlying neuronal mechanisms. In the present study, the processing of an acoustic stimulus was investigated during different non-rapid eye movement (NREM) sleep stages using a combined EEG/fMRI approach in healthy human volunteers: a text stimulus was presented to sleep-deprived subjects prior to and after the onset of sleep, and single-slice silent fMRI images were acquired. We found significantly different blood oxygenation level-dependent (BOLD) contrast responses during sleep compared to wakefulness. During NREM sleep stages 1 and 2 and during slow wave sleep (SWS) we observed reduced activation in the auditory cortex and a pronounced negative signal in the visual cortex and precuneus. Acoustic stimulation during sleep was accompanied by an increase in EEG frequency components in the low delta frequency range. Provided that neurovascular coupling is not altered during sleep, the negative transmodal BOLD response, which is most pronounced during NREM sleep stages 1 and 2, reflects a deactivation predominantly in the visual cortex, suggesting that this decrease in neuronal activity protects the brain from the arousing effects of external stimulation during sleep, not only in the primary targeted sensory cortex but also in other brain regions. PMID:11969332

  18. Visual tracking of auditory stimuli.

    Science.gov (United States)

    Stream, R W; Whitson, E T; Honrubia, V

    1980-07-01

    A white noise sound stimulus was emitted successively in an anechoic chamber across 24 loudspeakers equally spaced in the horizontal plane in a semicircle with diameter of 11 ft. Eye movements produced by each of 20 normal-hearing young adults in the center of this arc who tracked the sound at 10 different velocities (15--180 degrees/sec) were recorded with standard ENG methods. During each rotating cycle of the stimulus the eyes were able to follow the sound with discrete saccades, but did not produce nystagmic-like movements. Increased stimulus velocity resulted in (1) diminution of the amplitude of the tracking cycles, (2) decrease in the number of saccades, and (3) increase in the average velocity of the eye. Ss performed better with lights on than off. The additional quantitative findings from the present study further indicate the limitation in the ability of human Ss to localize a moving acoustic source in space. PMID:7347744

  19. Feedforward and feedback projections of caudal belt and parabelt areas of auditory cortex: refining the hierarchical model

    Directory of Open Access Journals (Sweden)

    Troy A Hackett

    2014-04-01

    Full Text Available Our working model of the primate auditory cortex recognizes three major regions (core, belt, parabelt), subdivided into thirteen areas. The connections between areas are topographically ordered in a manner consistent with information flow along two major anatomical axes: core-belt-parabelt and caudal-rostral. Remarkably, most of the connections supporting this model were revealed using retrograde tracing techniques. Little is known about laminar circuitry, as anterograde tracing of axon terminations has rarely been used. The purpose of the present study was to examine the laminar projections of three areas of auditory cortex, pursuant to analysis of all areas. The selected areas were: middle lateral belt (ML), caudomedial belt (CM), and caudal parabelt (CPB). Injections of anterograde tracers yielded data consistent with major features of our model, and also new findings that compel modifications. Results supporting the model were: (1) feedforward projections from ML and CM terminated in CPB; (2) feedforward projections from ML and CPB terminated in rostral areas of the belt and parabelt; and (3) feedback projections typified inputs to the core region from belt and parabelt. At odds with the model was the convergence of feedforward inputs into the rostral medial belt from ML and CPB. This was unexpected, since CPB is at a higher stage of the processing hierarchy, with mainly feedback projections to all other belt areas. Lastly, extending the model, feedforward projections from CM, ML, and CPB overlapped in the temporal parietal occipital area (TPO) in the superior temporal sulcus, indicating significant auditory influence on sensory processing in this region. The combined results refine our working model and highlight the need to complete studies of the laminar inputs to all areas of auditory cortex. Their documentation is essential for developing informed hypotheses about the neurophysiological influences of inputs to each layer and area.

  20. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565

  1. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution

    OpenAIRE

    Hertz, Uri; Amedi, Amir

    2014-01-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli...

  2. Visual–auditory spatial processing in auditory cortical neurons

    OpenAIRE

    Bizley, Jennifer K.; King, Andrew J

    2008-01-01

    Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...
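    Mutual information between a neuron's response and stimulus azimuth is commonly estimated from a joint histogram of (stimulus position, binned spike count) pairs. A minimal plug-in estimator sketch follows; it is not necessarily the exact estimator used in the study:

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """Plug-in MI estimate (bits) from a 2-D table of co-occurrence counts,
    with rows indexing stimulus azimuth bins and columns response bins."""
    p = joint_counts / joint_counts.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over stimuli
    py = p.sum(axis=0, keepdims=True)   # marginal over responses
    nz = p > 0                          # convention: 0 * log 0 = 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

# Perfectly informative responses: each azimuth gives a unique spike count
perfect = np.array([[10.0, 0.0], [0.0, 10.0]])
# Uninformative responses: spike counts independent of azimuth
flat = np.array([[5.0, 5.0], [5.0, 5.0]])
```

    With two equiprobable azimuths, a perfectly informative neuron carries 1 bit per trial and an uninformative one carries 0 bits, bracketing the values observed in real recordings.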

  3. Adaptation in the auditory system: an overview

    OpenAIRE

    David Pérez-González; Malmierca, Manuel S.

    2014-01-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...

  4. Behavioral correlates of auditory streaming in rhesus macaques

    OpenAIRE

    Christison-Lagay, Kate L.; Cohen, Yale E.

    2013-01-01

    Perceptual representations of auditory stimuli (i.e., sounds) are derived from the auditory system’s ability to segregate and group the spectral, temporal, and spatial features of auditory stimuli—a process called “auditory scene analysis”. Psychophysical studies have identified several of the principles and mechanisms that underlie a listener’s ability to segregate and group acoustic stimuli. One important psychophysical task that has illuminated many of these principles and mechanisms is th...

  5. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David Pérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already undergo adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, representing a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
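    Stimulus-specific adaptation of the kind described is often quantified with an index comparing a neuron's responses to the same tones when they are rare (deviant) versus frequent (standard). The symmetric two-frequency form below follows the commonly used definition, an assumption here since the review covers several variants:

```python
def ssa_index(dev_f1, dev_f2, std_f1, std_f2):
    """Common SSA index (CSI): +1 means the neuron responds only to deviants,
    0 means no stimulus-specific adaptation, -1 only to standards.
    Arguments are mean spike counts to frequencies f1/f2 when each is
    presented as the deviant (dev_*) or as the standard (std_*)."""
    dev, std = dev_f1 + dev_f2, std_f1 + std_f2
    return (dev - std) / (dev + std)

# A neuron that responds twice as strongly when a tone is rare:
csi = ssa_index(dev_f1=10.0, dev_f2=8.0, std_f1=5.0, std_f2=4.0)
```

    A positive index thus captures the "deviance detection" behaviour the review describes: the same physical tone drives a stronger response when it is improbable.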

  6. Auditory Processing Disorders

    Science.gov (United States)

    Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...

  7. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    Sensorineural hearing impairment is commonly associated with a loss of outer hair-cell function, and a measurable consequence is a decreased amount of cochlear compression at frequencies corresponding to the damaged locations in the cochlea. In clinical diagnostics, a fast and objective measure of local...

  8. Multi-Scale Entrainment of Coupled Neuronal Oscillations in Primary Auditory Cortex.

    Science.gov (United States)

    O'Connell, M N; Barczak, A; Ross, D; McGinnis, T; Schroeder, C E; Lakatos, P

    2015-01-01

    Earlier studies demonstrate that when the frequency of rhythmic tone sequences or streams is task relevant, ongoing excitability fluctuations (oscillations) of neuronal ensembles in primary auditory cortex (A1) entrain to stimulation in a frequency dependent way that sharpens frequency tuning. The phase distribution across A1 neuronal ensembles at time points when attended stimuli are predicted to occur reflects the focus of attention along the spectral attribute of auditory stimuli. This study examined how neuronal activity is modulated if only the temporal features of rhythmic stimulus streams are relevant. We presented macaques with auditory clicks arranged in 33 Hz (gamma timescale) quintets, repeated at a 1.6 Hz (delta timescale) rate. Such multi-scale, hierarchically organized temporal structure is characteristic of vocalizations and other natural stimuli. Monkeys were required to detect and respond to deviations in the temporal pattern of gamma quintets. As expected, engagement in the auditory task resulted in the multi-scale entrainment of delta- and gamma-band neuronal oscillations across all of A1. Surprisingly, however, the phase-alignment, and thus, the physiological impact of entrainment differed across the tonotopic map in A1. In the region of 11-16 kHz representation, entrainment most often aligned high excitability oscillatory phases with task-relevant events in the input stream and thus resulted in response enhancement. In the remainder of the A1 sites, entrainment generally resulted in response suppression. Our data indicate that the suppressive effects were due to low excitability phase delta oscillatory entrainment and the phase amplitude coupling of delta and gamma oscillations. Regardless of the phase or frequency, entrainment appeared stronger in left A1, indicative of the hemispheric lateralization of auditory function. PMID:26696866

  9. Multi-scale entrainment of coupled neuronal oscillations in primary auditory cortex.

    Directory of Open Access Journals (Sweden)

    Monica Noelle O'Connell

    2015-12-01

    Full Text Available Earlier studies demonstrate that when the frequency of rhythmic tone sequences or streams is task relevant, ongoing excitability fluctuations (oscillations) of neuronal ensembles in primary auditory cortex (A1) entrain to stimulation in a frequency dependent way that sharpens frequency tuning. The phase distribution across A1 neuronal ensembles at time points when attended stimuli are predicted to occur reflects the focus of attention along the spectral attribute of auditory stimuli. This study examined how neuronal activity is modulated if only the temporal features of rhythmic stimulus streams are relevant. We presented macaques with auditory clicks arranged in 33 Hz (gamma timescale) quintets, repeated at a 1.6 Hz (delta timescale) rate. Such multi-scale, hierarchically organized temporal structure is characteristic of vocalizations and other natural stimuli. Monkeys were required to detect and respond to deviations in the temporal pattern of gamma quintets. As expected, engagement in the auditory task resulted in the multi-scale entrainment of delta- and gamma-band neuronal oscillations across all of A1. Surprisingly, however, the phase-alignment, and thus, the physiological impact of entrainment differed across the tonotopic map in A1. In the region of 11-16 kHz representation, entrainment most often aligned high excitability oscillatory phases with task-relevant events in the input stream and thus resulted in response enhancement. In the remainder of the A1 sites, entrainment generally resulted in response suppression. Our data indicate that the suppressive effects were due to low excitability phase delta oscillatory entrainment and the phase amplitude coupling of delta and gamma oscillations. Regardless of the phase or frequency, entrainment appeared stronger in left A1, indicative of the hemispheric lateralization of auditory function.
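    The stimulus's nested temporal structure (five-click bursts at a 33 Hz within-burst rate, with bursts repeating at 1.6 Hz) can be written out as a timing grid. This is a sketch of the described design, not code from the study:

```python
def click_times(n_quintets=3, quintet_rate_hz=1.6, click_rate_hz=33.0,
                clicks_per_quintet=5):
    """Onset times (s) for clicks arranged in quintets: within each quintet
    clicks repeat at the gamma rate, and quintet onsets follow the delta rate."""
    times = []
    for q in range(n_quintets):
        onset = q / quintet_rate_hz            # delta-timescale quintet onset
        times.extend(onset + k / click_rate_hz  # gamma-timescale clicks
                     for k in range(clicks_per_quintet))
    return times

t = click_times()
# Each quintet lasts 4/33 s (~0.12 s), well inside the 0.625 s delta cycle
```

    The two nested rates are what allow delta- and gamma-band oscillations to entrain simultaneously, since the quintet onsets and the clicks within each quintet occupy separate timescales.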

  10. Overriding auditory attentional capture.

    Science.gov (United States)

    Dalton, Polly; Lavie, Nilli

    2007-02-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects. PMID:17557587

  11. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution.

    Science.gov (United States)

    Hertz, Uri; Amedi, Amir

    2015-08-01

    The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Second, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in the associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756

  12. Auditory Display

    DEFF Research Database (Denmark)

volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...

  13. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291

  14. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

Arjen Alink

    2012-01-01

In this study, it is demonstrated that moving sounds affect the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that participants perceived ambiguous visual apparent motion stimuli (equally likely to be perceived as moving leftward or rightward) more often as moving in the same direction as, rather than opposite to, the auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture the visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  15. Relationship between Critical Flicker Fusion (CFF) Thresholds and Personality under Three Auditory Stimulus Conditions.

    Science.gov (United States)

    Ali, M. R.; Amir, T.

    1988-01-01

Investigated relationship between critical flicker fusion (CFF) thresholds and five personality characteristics (alienation, social nonconformity, discomfort, expression, and defensiveness) under three auditory stimulus conditions (quiet, noise, meaningful verbal stimuli). Results from 60 college students revealed that auditory stimulation and…

  16. Hypermnesia using auditory input.

    Science.gov (United States)

    Allen, J

    1992-07-01

    The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564

  17. Psychophysical and Neural Correlates of Auditory Attraction and Aversion

    Science.gov (United States)

    Patten, Kristopher Jakob

    This study explores the psychophysical and neural processes associated with the perception of sounds as either pleasant or aversive. The underlying psychophysical theory is based on auditory scene analysis, the process through which listeners parse auditory signals into individual acoustic sources. The first experiment tests and confirms that a self-rated pleasantness continuum reliably exists for 20 various stimuli (r = .48). In addition, the pleasantness continuum correlated with the physical acoustic characteristics of consonance/dissonance (r = .78), which can facilitate auditory parsing processes. The second experiment uses an fMRI block design to test blood oxygen level dependent (BOLD) changes elicited by a subset of 5 exemplar stimuli chosen from Experiment 1 that are evenly distributed over the pleasantness continuum. Specifically, it tests and confirms that the pleasantness continuum produces systematic changes in brain activity for unpleasant acoustic stimuli beyond what occurs with pleasant auditory stimuli. Results revealed that the combination of two positively and two negatively valenced experimental sounds compared to one neutral baseline control elicited BOLD increases in the primary auditory cortex, specifically the bilateral superior temporal gyrus, and left dorsomedial prefrontal cortex; the latter being consistent with a frontal decision-making process common in identification tasks. The negatively-valenced stimuli yielded additional BOLD increases in the left insula, which typically indicates processing of visceral emotions. The positively-valenced stimuli did not yield any significant BOLD activation, consistent with consonant, harmonic stimuli being the prototypical acoustic pattern of auditory objects that is optimal for auditory scene analysis. 
Both the psychophysical findings of Experiment 1 and the neural processing findings of Experiment 2 support that consonance is an important dimension of sound that is processed in a manner that aids

  18. Auditory Neuropathy

    Science.gov (United States)

    ... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...

  19. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581

  20. A review of the generalization of auditory learning

    OpenAIRE

    Wright, Beverly A.; Zhang, Yuxuan

    2008-01-01

    The ability to detect and discriminate attributes of sounds improves with practice. Determining how such auditory learning generalizes to stimuli and tasks that are not encountered during training can guide the development of training regimens used to improve hearing abilities in particular populations as well as provide insight into the neural mechanisms mediating auditory performance. Here we review the newly emerging literature on the generalization of auditory learning, focusing on behavi...

  1. The Impact of Maternal Smoking on Fast Auditory Brainstem Responses

    OpenAIRE

    Kable, Julie A.; Coles, Claire D.; Lynch, Mary Ellen; Carroll, Julie

    2009-01-01

    Deficits in auditory processing have been posited as one of the underlying neurodevelopmental consequences of maternal smoking during pregnancy that leads to later language and reading deficits. Fast auditory brainstem responses were used to assess differences in the sensory processing of auditory stimuli among infants with varying degrees of prenatal cigarette exposure. Maternal report of consumption of cigarettes and blood samples were collected in the hospital to assess exposure levels and...

  2. Fundamental deficits of auditory perception in Wernicke’s aphasia

    OpenAIRE

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew; Griffiths, Timothy; Sage, Karen

    2012-01-01

    Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated wit...

  3. Individual differences in auditory abilities.

    Science.gov (United States)

    Kidd, Gary R; Watson, Charles S; Gygi, Brian

    2007-07-01

Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation in both spectral-temporal acuity and general intellectual ability. PMID:17614500
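
    The general-ability finding above rests on a factor decomposition of a listeners × tasks score matrix. The toy sketch below illustrates the idea with a principal-components analysis via SVD; the 340 × 19 matrix, the single-factor generative model, and all loadings are synthetic stand-ins, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scores: 340 listeners x 19 tasks. One shared "general auditory
# ability" factor plus task-specific noise, loosely mimicking the design.
n_listeners, n_tasks = 340, 19
general = rng.normal(size=(n_listeners, 1))          # latent general ability
loadings = rng.uniform(0.4, 0.9, size=(1, n_tasks))  # each task loads on it
scores = general @ loadings + 0.5 * rng.normal(size=(n_listeners, n_tasks))

# PCA via SVD of the centered score matrix; singular values come out sorted.
centered = scores - scores.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)

# With a single shared factor, the first component dominates the variance.
print(f"variance explained by PC1: {explained[0]:.2f}")
```

In the actual study a single component would not suffice: the four specific abilities correspond to additional components (or, in the structural equation model, correlated group factors) beyond the general one.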

  4. The Importance of Rapid Auditory Processing Abilities to Early Language Development: Evidence from Converging Methodologies

    OpenAIRE

    Benasich, April A.; Thomas, Jennifer J.; Choudhury, Naseem; Leppänen, Paavo H. T.

    2002-01-01

    The ability to process two or more rapidly presented, successive, auditory stimuli is believed to underlie successful language acquisition. Likewise, deficits in rapid auditory processing of both verbal and nonverbal stimuli are characteristic of individuals with developmental language disorders such as Specific Language Impairment. Auditory processing abilities are well developed in infancy, and thus such deficits should be detectable in infants. In the studies presented here, converging met...

  5. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

People often coordinate their movement with visual and auditory environmental rhythms. Previous research showed better performance when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone, and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movement led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated together in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.

  6. Efficacy of Individual Computer-Based Auditory Training for People with Hearing Loss: A Systematic Review of the Evidence

    OpenAIRE

Henshaw, Helen; Ferguson, Melanie A.

    2013-01-01

    BACKGROUND: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. OBJECTIVE: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing los...

  7. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

Bethany Plakke

    2014-07-01

The functional auditory system extends from the ears to the frontal lobes, with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections, as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations, and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information, with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  8. Exploring Auditory Saltation Using the "Reduced-Rabbit" Paradigm

    Science.gov (United States)

    Getzmann, Stephan

    2009-01-01

    Sensory saltation is a spatiotemporal illusion in which the judged positions of stimuli are shifted toward subsequent stimuli that follow closely in time. So far, studies on saltation in the auditory domain have usually employed subjective rating techniques, making it difficult to exactly quantify the extent of saltation. In this study, temporal…

  9. Comparação dos estímulos clique e CE-chirp® no registro do Potencial Evocado Auditivo de Tronco Encefálico Comparison of click and CE-chirp® stimuli on Brainstem Auditory Evoked Potential recording

    Directory of Open Access Journals (Sweden)

    Gabriela Ribeiro Ivo Rodrigues

    2012-12-01

PURPOSE: To compare the latencies and amplitudes of wave V on the Brainstem Auditory Evoked Potential (BAEP) recording obtained with click and CE-chirp® stimuli, and the presence or absence of waves I, III and V at high intensities. METHODS: Cross-sectional study with 12 adults with audiometric thresholds <15 dBHL (24 ears) and a mean age of 27 years. The parameters used for the recording with both stimuli at intensities of 80, 60, 40, and 20 dBnHL were alternate polarity and a repetition rate of 27.1 Hz. RESULTS: The CE-chirp® latencies for wave V were longer than click latencies at low intensity levels (20 and 40 dBnHL). At high intensity levels (60 and 80 dBnHL), the opposite occurred. Larger wave V amplitudes were observed with CE-chirp® at all intensity levels, except at 80 dBnHL. CONCLUSION: The CE-chirp® showed shorter latencies than those observed with clicks at high intensity levels, and larger amplitudes at all intensity levels except 80 dBnHL. The waves I and III tended to disappear with CE-chirp® stimulation.

  10. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.

  11. AUDITORY REACTION TIME IN BASKETBALL PLAYERS AND HEALTHY CONTROLS

    Directory of Open Access Journals (Sweden)

    Ghuntla Tejas P.

    2013-08-01

A reaction is a purposeful voluntary response to a stimulus, such as a visual or auditory stimulus. Auditory reaction time is the time required to respond to an auditory stimulus. Quickness of response is very important in games like basketball. This study was conducted to compare the auditory reaction times of basketball players and healthy controls. Auditory reaction time was measured with a reaction time instrument in healthy controls and basketball players; both simple reaction time and choice reaction time were measured. During testing, auditory stimuli were presented three times, and the minimum reaction time was taken as the final reaction time for that sensory modality for that subject. The results were statistically analyzed and recorded as mean ± standard deviation, and Student's unpaired t-test was applied to check the level of significance. The study shows that basketball players have shorter reaction times than healthy controls. As reaction time indicates how fast a person responds to a sensory stimulus, it is a good indicator of performance in reactive sports like basketball. Sportsmen should be trained to improve their reaction times and thereby their performance.
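
    The group comparison described above comes down to Student's unpaired t-test on two samples of reaction times. A minimal sketch follows; the reaction-time values are hypothetical illustrations, not the study's data:

```python
from statistics import mean, variance
from math import sqrt

# Hypothetical auditory simple-reaction times in ms (illustrative values):
# basketball players vs. non-athlete controls.
players  = [152, 160, 148, 155, 149, 158, 151, 146, 154, 150]
controls = [175, 182, 169, 178, 171, 185, 174, 168, 180, 172]

def students_t_unpaired(a, b):
    """Student's two-sample t statistic using a pooled variance estimate."""
    na, nb = len(a), len(b)
    # Pooled sample variance across both groups.
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

t, df = students_t_unpaired(players, controls)
print(f"t = {t:.2f}, df = {df}")  # negative t: players respond faster
```

The resulting t is compared against the critical value for the given degrees of freedom (about ±2.1 for df = 18 at the 5% level, two-tailed) to judge significance, which is what the abstract's "level of significance" refers to.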

  12. Contribution of psychoacoustics and neuroaudiology in revealing correlation of mental disorders with central auditory processing disorders

    OpenAIRE

    Iliadou, V; Iakovides, S

    2003-01-01

    Background Psychoacoustics is a fascinating developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers as a result of external auditory stimuli. Central Auditory Processing Disorders may co-exist with mental disorders and complicate diagnosis and outcome. Desi...

  13. Auditory priming of frequency and temporal information: Effects of lateralized presentation

    OpenAIRE

    List, Alexandra; Justus, Timothy

    2007-01-01

    Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli wer...

  14. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte; Siebner, Hartwig R

    2013-01-01

and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. ... In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was...

  15. Neuronal activity in primate auditory cortex during the performance of audiovisual tasks.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2015-03-01

This study aimed at a deeper understanding of which cognitive and motivational aspects of tasks affect auditory cortical activity. To this end we trained two macaque monkeys to perform two different tasks on the same audiovisual stimulus and to do this with two different sizes of water rewards. The monkeys had to touch a bar after a tone had been turned on together with an LED, and to hold the bar until either the tone (auditory task) or the LED (visual task) was turned off. In 399 multiunits recorded from core fields of auditory cortex we confirmed that during task engagement neurons responded to auditory and non-auditory stimuli that were task-relevant, such as light and water. We also confirmed that firing rates slowly increased or decreased for several seconds during various phases of the tasks. Responses to non-auditory stimuli and slow firing changes were observed during both the auditory and the visual task, with some differences between them. There was also a weak task-dependent modulation of the responses to auditory stimuli. In contrast to these cognitive aspects, motivational aspects of the tasks were not reflected in the firing, except during delivery of the water reward. In conclusion, the present study supports our previous proposal that there are two response types in the auditory cortex that represent the timing and the type of auditory and non-auditory elements of auditory tasks, as well as the association between elements. PMID:25728179

  16. In search of an auditory engram

    Science.gov (United States)

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that monkeys may be unable to place representations of auditory stimuli into a long-term store and thus question whether the monkey's cerebral memory mechanisms in audition are intrinsically different from those in other sensory modalities. Furthermore, it raises the possibility that language is unique to humans not only because it depends on speech but also because it requires long-term auditory memory. PMID:15967995

  17. McGurk illusion recalibrates subsequent auditory perception.

    Science.gov (United States)

    Lüttke, Claudia S; Ekman, Matthias; van Gerven, Marcel A J; de Lange, Floris P

    2016-01-01

Visual information can alter auditory perception. This is clearly illustrated by the well-known McGurk illusion, where an auditory /aba/ and a visual /aga/ are merged into the percept 'ada'. It is less clear, however, whether such a change in perception may recalibrate subsequent perception. Here we asked whether the altered auditory perception due to the McGurk illusion affects subsequent auditory perception, i.e. whether this process of fusion may cause a recalibration of the auditory boundaries between phonemes. Participants categorized auditory and audiovisual speech stimuli as /aba/, /ada/ or /aga/ while activity patterns in their auditory cortices were recorded using fMRI. Interestingly, following a McGurk illusion, an auditory /aba/ was more often misperceived as 'ada'. Furthermore, we observed a neural counterpart of this recalibration in the early auditory cortex. When the auditory input /aba/ was perceived as 'ada', activity patterns bore stronger resemblance to activity patterns elicited by /ada/ sounds than when they were correctly perceived as /aba/. Our results suggest that upon experiencing the McGurk illusion, the brain shifts the neural representation of an /aba/ sound towards /ada/, culminating in a recalibration in perception of subsequent auditory input. PMID:27611960

  18. Infants' Preferential Attention to Sung and Spoken Stimuli

    Science.gov (United States)

    Costa-Giomi, Eugenia; Ilari, Beatriz

    2014-01-01

    Caregivers and early childhood teachers all over the world use singing and speech to elicit and maintain infants' attention. Research comparing infants' preferential attention to music and speech is inconclusive regarding their responses to these two types of auditory stimuli, with one study showing a music bias and another one…

  19. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  20. Emotion Recognition in Animated Compared to Human Stimuli in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura

    2015-01-01

    There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…

  1. Hierarchical photocatalysts.

    Science.gov (United States)

    Li, Xin; Yu, Jiaguo; Jaroniec, Mietek

    2016-05-01

    As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has the potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. In particular, different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities for designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells. PMID:26963902

  2. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    Science.gov (United States)

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specifications, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems. PMID:27421976

  3. Hierarchical systems

    NARCIS (Netherlands)

    Hamers, A.S.

    2016-01-01

    The thesis addresses the long-term dynamical evolution of hierarchical multiple systems. First, we consider the evolution of orbits of stars orbiting a supermassive black hole (SBH). We study the long-term evolution and compute tidal disruption rates of stars by the SBH. Such disruption events reveal…

  4. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question of whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue, fifteen healthy participants took part in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidently presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no task other than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus, whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in the perception of unimodal visual stimulation, which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. More importantly, we are able…

  5. Processing of harmonics in the lateral belt of macaque auditory cortex.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer; Rauschecker, Josef P

    2014-01-01

    Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations ("coos"). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB. PMID:25100935

  6. Electrophysiological correlates of auditory change detection and change deafness in complex auditory scenes.

    Science.gov (United States)

    Puschmann, Sebastian; Sandmann, Pascale; Ahrens, Janina; Thorne, Jeremy; Weerda, Riklef; Klump, Georg; Debener, Stefan; Thiel, Christiane M

    2013-07-15

    Change deafness describes the failure to perceive even intense changes within complex auditory input, if the listener does not attend to the changing sound. Remarkably, previous psychophysical data provide evidence that this effect occurs independently of successful stimulus encoding, indicating that undetected changes are processed to some extent in auditory cortex. Here we investigated cortical representations of detected and undetected auditory changes using electroencephalographic (EEG) recordings and a change deafness paradigm. We applied a one-shot change detection task, in which participants listened successively to three complex auditory scenes, each of them consisting of six simultaneously presented auditory streams. Listeners had to decide whether all scenes were identical or whether the pitch of one stream was changed between the last two presentations. Our data show significantly increased middle-latency Nb responses for both detected and undetected changes as compared to no-change trials. In contrast, only successfully detected changes were associated with a later mismatch response in auditory cortex, followed by increased N2, P3a and P3b responses, originating from hierarchically higher non-sensory brain regions. These results strengthen the view that undetected changes are successfully encoded at sensory level in auditory cortex, but fail to trigger later change-related cortical responses that lead to conscious perception of change. PMID:23466938
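
    The one-shot paradigm above can be illustrated with a small synthesis sketch: several simultaneous pure-tone streams are mixed into a scene, and a change trial shifts the pitch of one stream between presentations. The frequencies, sampling rate, and duration below are illustrative assumptions; the study's streams were more complex.

    ```python
    import numpy as np

    FS = 44100   # sampling rate (Hz); assumed
    DUR = 1.0    # scene duration (s); assumed

    def make_scene(freqs, fs=FS, dur=DUR):
        """Mix simultaneous pure-tone streams into one auditory scene."""
        t = np.arange(int(fs * dur)) / fs
        scene = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        return scene / len(freqs)  # keep the mix within [-1, 1]

    # Six hypothetical stream frequencies (Hz); the actual values are not reported.
    base = [300, 450, 600, 800, 1100, 1500]

    # Change trial: the pitch of one stream is shifted between presentations.
    changed = list(base)
    changed[2] = 700

    scene_a = make_scene(base)     # presentation before the change
    scene_b = make_scene(changed)  # presentation after the change
    ```

    A no-change trial would simply reuse `scene_a` for both presentations.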

  7. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910

  8. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    Science.gov (United States)

    Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of an auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). The spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results This study showed that in the training group, the mSAAT score and spatial WRS in noise (p value ≤ 0.001) improved significantly after the auditory lateralization training. Conclusions We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. The generalization of these results requires further research.
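
    The training stimuli were lateralized purely by interaural time differences (ITDs) under headphones. A minimal sketch of generating such a stereo signal (the sampling rate, burst duration, and the 500 µs ITD are assumed values, not parameters reported by the study):

    ```python
    import numpy as np

    FS = 44100  # sampling rate (Hz); assumed

    def itd_stimulus(itd_s, dur=0.3, fs=FS, seed=0):
        """Stereo noise burst whose left channel leads by `itd_s` seconds.

        A positive ITD delays the right channel, which lateralizes the
        sound toward the left ear under headphones.
        """
        rng = np.random.default_rng(seed)
        n_delay = int(round(abs(itd_s) * fs))
        burst = rng.standard_normal(int(fs * dur))
        delayed = np.concatenate([np.zeros(n_delay), burst])[: burst.size]
        if itd_s >= 0:
            left, right = burst, delayed
        else:
            left, right = delayed, burst
        return np.column_stack([left, right])

    # 500 µs ITD, a value within the physiological range (assumed).
    stim = itd_stimulus(500e-6)
    ```

    Sweeping the sign and magnitude of the ITD moves the perceived source across the head, which is the detection-and-pointing task described above.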

  9. Neural Processing of Emotional Musical and Nonmusical Stimuli in Depression

    Science.gov (United States)

    Atchley, Ruth Ann; Chrysikou, Evangelia; Martin, Laura E.; Clair, Alicia A.; Ingram, Rick E.; Simmons, W. Kyle; Savage, Cary R.

    2016-01-01

    Background Anterior cingulate cortex (ACC) and striatum are part of the emotional neural circuitry implicated in major depressive disorder (MDD). Music is often used for emotion regulation, and pleasurable music listening activates the dopaminergic system in the brain, including the ACC. The present study uses functional MRI (fMRI) and an emotional nonmusical and musical stimuli paradigm to examine how neural processing of emotionally provocative auditory stimuli is altered within the ACC and striatum in depression. Method Nineteen MDD and 20 never-depressed (ND) control participants listened to standardized positive and negative emotional musical and nonmusical stimuli during fMRI scanning and gave subjective ratings of valence and arousal following scanning. Results ND participants exhibited greater activation to positive versus negative stimuli in ventral ACC. When compared with ND participants, MDD participants showed a different pattern of activation in ACC. In the rostral part of the ACC, ND participants showed greater activation for positive information, while MDD participants showed greater activation to negative information. In dorsal ACC, the pattern of activation distinguished between the types of stimuli, with ND participants showing greater activation to music compared to nonmusical stimuli, while MDD participants showed greater activation to nonmusical stimuli, with the greatest response to negative nonmusical stimuli. No group differences were found in striatum. Conclusions These results suggest that people with depression may process emotional auditory stimuli differently based on both the type of stimulation and the emotional content of that stimulation. This raises the possibility that music may be useful in retraining ACC function, potentially leading to more effective and targeted treatments. PMID:27284693

  10. Music perception: information flow within the human auditory cortices.

    Science.gov (United States)

    Angulo-Perkins, Arafat; Concha, Luis

    2014-01-01

    Information processing of all acoustic stimuli involves temporal lobe regions referred to as auditory cortices, which receive direct afferents from the auditory thalamus. However, the perception of music (as well as speech or spoken language) is a complex process that also involves secondary and association cortices that conform a large functional network. Using different analytical techniques and stimulation paradigms, several studies have shown that certain areas are particularly sensitive to specific acoustic characteristics inherent to music (e.g., rhythm). This chapter reviews the functional anatomy of the auditory cortices, and highlights specific experiments that suggest the existence of distinct cortical networks for the perception of music and speech. PMID:25358716

  11. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which is also shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses resulted in statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimuli tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB resulted in a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.
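
    The dependent measure throughout is a peak-to-trough amplitude taken from an evoked waveform within a latency window. A sketch of that computation on a simulated, purely illustrative response (not real AC or ABR data; the sampling rate and window are assumptions):

    ```python
    import numpy as np

    FS = 1000  # EEG sampling rate in Hz (assumed)

    def peak_to_trough(waveform, fs=FS, window=(0.0, 0.05)):
        """Peak-to-trough amplitude within a latency window (seconds)."""
        i0, i1 = int(window[0] * fs), int(window[1] * fs)
        seg = waveform[i0:i1]
        return seg.max() - seg.min()

    # Simulated evoked response: a positive peak near 12 ms followed by
    # a negative trough near 25 ms (illustrative Gaussian bumps).
    t = np.arange(0, 0.1, 1 / FS)
    resp = (2.0 * np.exp(-((t - 0.012) / 0.004) ** 2)
            - 1.5 * np.exp(-((t - 0.025) / 0.006) ** 2))

    amp = peak_to_trough(resp)
    ```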

  12. Formation of associations in auditory cortex by slow changes of tonic firing.

    Science.gov (United States)

    Brosch, Michael; Selezneva, Elena; Scheich, Henning

    2011-01-01

    We review event-related slow firing changes in the auditory cortex and related brain structures. Two types of changes can be distinguished, namely increases and decreases of firing, lasting in the order of seconds. Triggering events can be auditory stimuli, reinforcers, and behavioral responses. Slow firing changes terminate with reinforcers and possibly with auditory stimuli and behavioral responses. A necessary condition for the emergence of slow firing changes seems to be that subjects have learnt that consecutive sensory or behavioral events are contingent on reinforcement. They disappear when the contingencies are no longer present. Slow firing changes in auditory cortex bear similarities with slow changes of neuronal activity that have been observed in subcortical parts of the auditory system and in other non-sensory brain structures. We propose that slow firing changes in auditory cortex provide a neuronal mechanism for anticipating, memorizing, and associating events that are related to hearing and of behavioral relevance. This may complement the representation of the timing and types of auditory and auditory-related events which may be provided by phasic responses in auditory cortex. The presence of slow firing changes indicates that many more auditory-related aspects of a behavioral procedure are reflected in the neuronal activity of auditory cortex than previously assumed. PMID:20488230

  13. Functional sex differences in human primary auditory cortex

    International Nuclear Information System (INIS)

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  14. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  15. Comparison of Auditory Evoked Potentials in Heterosexual, Homosexual, and Bisexual Males and Females

    OpenAIRE

    McFadden, Dennis; Champlin, Craig A.

    2000-01-01

    The auditory evoked potentials (AEPs) elicited by click stimuli were measured in heterosexual, homosexual, and bisexual males and females having normal hearing sensitivity. Estimates of latency and/or amplitude were extracted for nine peaks having latencies of about 2–240 ms, which are presumed to correspond to populations of neurons located from the auditory nerve through auditory cortex. For five of the 19 measures obtained, the mean latency or amplitude for the 57 homosexual and bisexual f...

  16. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification

    OpenAIRE

    Lidestam, Björn; Moradi, Shahram; Pettersson, Rasmus; Ricklefs, Theodor

    2014-01-01

    The effects of audiovisual versus auditory training for speech-in-noise identification were examined in 60 young participants. The training conditions were audiovisual training, auditory-only training, and no training (n = 20 each). In the training groups, gated consonants and words were presented at 0 dB signal-to-noise ratio; stimuli were either audiovisual or auditory-only. The no-training group watched a movie clip without performing a speech identification task. Speech-in-noise identific...

  17. Frequency-specific disruptions of neuronal oscillations reveal aberrant auditory processing in schizophrenia.

    Science.gov (United States)

    Hayrynen, Lauren K; Hamm, Jordan P; Sponheim, Scott R; Clementz, Brett A

    2016-06-01

    Individuals with schizophrenia exhibit abnormalities in evoked brain responses in oddball paradigms. These could result from (a) insufficient salience-related cortical signaling (P300), (b) insufficient suppression of irrelevant aspects of the auditory environment, or (c) excessive neural noise. We tested whether disruption of ongoing auditory steady-state responses at predetermined frequencies informed which of these issues contribute to auditory stimulus relevance processing abnormalities in schizophrenia. Magnetoencephalography data were collected for 15 schizophrenia and 15 healthy subjects during an auditory oddball paradigm (25% targets; 1-s interstimulus interval). Auditory stimuli (pure tones: 1 kHz standards, 2 kHz targets) were administered during four continuous background (auditory steady-state) stimulation conditions: (1) no stimulation, (2) 24 Hz, (3) 40 Hz, and (4) 88 Hz. The modulation of the auditory steady-state response (aSSR) and the evoked responses to the transient stimuli were quantified and compared across groups. In comparison to healthy participants, the schizophrenia group showed greater disruption of the ongoing aSSR by targets regardless of steady-state frequency, and reduced amplitude of both M100 and M300 event-related field components. During the no-stimulation condition, schizophrenia patients showed accentuation of left hemisphere 40 Hz response to both standard and target stimuli, indicating an effort to enhance local stimulus processing. Together, these findings suggest abnormalities in auditory stimulus relevance processing in schizophrenia patients stem from insufficient amplification of salient stimuli. PMID:26933842
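
    The background steady-state stimulation in such designs is typically a tone whose amplitude is modulated at the steady-state rate, so the aSSR shows up as spectral sidebands around the carrier. A sketch assuming sinusoidal modulation of the 1 kHz standard at the study's 40 Hz condition (the modulation depth and sampling rate are illustrative assumptions):

    ```python
    import numpy as np

    FS = 8000  # sampling rate (Hz); assumed for illustration

    def am_tone(carrier_hz, mod_hz, dur=1.0, fs=FS, depth=1.0):
        """Sinusoidally amplitude-modulated tone, a standard aSSR driver."""
        t = np.arange(int(fs * dur)) / fs
        envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    # 1 kHz carrier (the standard tone frequency) modulated at 40 Hz.
    x = am_tone(1000, 40)

    # Amplitude spectrum: modulation appears as sidebands at 1000 +/- 40 Hz.
    spec = np.abs(np.fft.rfft(x)) / x.size
    freqs = np.fft.rfftfreq(x.size, 1 / FS)
    ```

    With a 1 s signal the bins fall on integer frequencies, so the carrier sits at index 1000 and the sidebands at 960 and 1040.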

  18. Covert Auditory Spatial Orienting: An Evaluation of the Spatial Relevance Hypothesis

    Science.gov (United States)

    Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.

    2009-01-01

    The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…

  19. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  20. The role of modality : Auditory and visual distractors in Stroop interference

    NARCIS (Netherlands)

    Elliott, Emily M.; Morey, Candice C.; Morey, Richard D.; Eaves, Sharon D.; Shelton, Jill Talley; Lutfi-Proctor, Danielle A.

    2014-01-01

    As a commonly used measure of selective attention, it is important to understand the factors contributing to interference in the Stroop task. The current research examined distracting stimuli in the auditory and visual modalities to determine whether the use of auditory distractors would create additional…

  1. Attention samples stimuli rhythmically.

    Science.gov (United States)

    Landau, Ayelet Nina; Fries, Pascal

    2012-06-01

    Overt exploration or sampling behaviors, such as whisking, sniffing, and saccadic eye movements, are often characterized by a rhythm. In addition, the electrophysiologically recorded theta or alpha phase predicts global detection performance. These two observations raise the intriguing possibility that covert selective attention samples from multiple stimuli rhythmically. To investigate this possibility, we measured change detection performance on two simultaneously presented stimuli, after resetting attention to one of them. After a reset flash at one stimulus location, detection performance fluctuated rhythmically. When the flash was presented in the right visual field, a 4 Hz rhythm was directly visible in the time courses of behavioral performance at both stimulus locations, and the two rhythms were in antiphase. A left visual field flash exerted only partial reset on performance and induced rhythmic fluctuation at higher frequencies (6-10 Hz). These findings show that selective attention samples multiple stimuli rhythmically, and they position spatial attention within the family of exploration behaviors. PMID:22633805
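
    The rhythmic fluctuation reported here is recoverable from behavior by spectral analysis of the detection time course. A sketch on simulated data (the 25 ms sampling grid and the modulation depth are assumptions; the 4 Hz rate matches the right-visual-field result):

    ```python
    import numpy as np

    def dominant_rhythm(hit_rate, dt):
        """Frequency (Hz) of the strongest oscillation in a detrended time course."""
        x = hit_rate - hit_rate.mean()  # remove DC before the FFT
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(x.size, dt)
        return freqs[spec.argmax()]

    # Simulated detection time course: 1 s sampled every 25 ms,
    # fluctuating at 4 Hz around a 75% baseline.
    dt = 0.025
    t = np.arange(0, 1.0, dt)
    hit_rate = 0.75 + 0.10 * np.sin(2 * np.pi * 4 * t)

    f = dominant_rhythm(hit_rate, dt)
    ```

    Antiphase sampling of the two locations would appear as the same frequency with opposite phase, which a complex FFT coefficient (rather than its magnitude) would reveal.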

  2. Efficacy of auditory training in elderly subjects

    Directory of Open Access Journals (Sweden)

    Aline Albuquerque Morais

    2015-05-01

    Full Text Available Auditory training (AT  has been used for auditory rehabilitation in elderly individuals and is an effective tool for optimizing speech processing in this population. However, it is necessary to distinguish training-related improvements from placebo and test-retest effects. Thus, we investigated the efficacy of short-term auditory training (acoustically controlled auditory training - ACAT in elderly subjects through behavioral measures and P300. Sixteen elderly individuals with APD received an initial evaluation (evaluation 1 - E1 consisting of behavioral and electrophysiological tests (P300 evoked by tone burst and speech sounds to evaluate their auditory processing. The individuals were divided into two groups. The Active Control Group [ACG (n=8] underwent placebo training. The Passive Control Group [PCG (n=8] did not receive any intervention. After 12 weeks, the subjects were  revaluated (evaluation 2 - E2. Then, all of the subjects underwent ACAT. Following another 12 weeks (8 training sessions, they underwent the final evaluation (evaluation 3 – E3. There was no significant difference between E1 and E2 in the behavioral test [F(9.6=0,.6 p=0.92, λ de Wilks=0.65] or P300 [F(8.7=2.11, p=0.17, λ de Wilks=0.29] (discarding the presence of placebo effects and test-retest. A significant improvement was observed between the pre- and post-ACAT conditions (E2 and E3 for all auditory skills according to the behavioral methods [F(4.27=0.18, p=0.94, λ de Wilks=0.97]. However, the same result was not observed for P300 in any condition. There was no significant difference between P300 stimuli. The ACAT improved the behavioral performance of the elderly for all auditory skills and was an effective method for hearing rehabilitation.

  3. Enhanced representation of spectral contrasts in the primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Nicolas eCatz

    2013-06-01

    Full Text Available The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e. regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Notably, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
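
    Stimuli of this kind can be approximated by a multi-tone mixture whose components inside a chosen band are attenuated, creating spectral edges at the band limits. A sketch (the tone grid, notch position, and attenuation are illustrative assumptions, not the study's parameters):

    ```python
    import numpy as np

    FS = 16000  # sampling rate (Hz); assumed

    def pip_complex(freqs, notch=(2000, 4000), dur=0.5, fs=FS, seed=0):
        """Multi-tone mixture whose time-averaged spectrum has a suppressed region.

        Tones falling inside `notch` (Hz) are attenuated, producing spectral
        edges at the notch boundaries.
        """
        rng = np.random.default_rng(seed)
        t = np.arange(int(fs * dur)) / fs
        out = np.zeros_like(t)
        for f in freqs:
            gain = 0.05 if notch[0] <= f <= notch[1] else 1.0
            phase = rng.uniform(0, 2 * np.pi)  # random phase per tone
            out += gain * np.sin(2 * np.pi * f * t + phase)
        return out / len(freqs)

    tones = np.arange(500, 8000, 250)  # hypothetical tone grid (Hz)
    x = pip_complex(tones)
    ```

    Sharpening the notch edges or deepening the attenuation corresponds to the contrast-profile manipulations the study reports as maximally enhancing the edge representation.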

  4. Neuromagnetic evidence for early auditory restoration of fundamental pitch.

    Directory of Open Access Journals (Sweden)

    Philip J Monahan

    Full Text Available BACKGROUND: Understanding the time course of how listeners reconstruct a missing fundamental component in an auditory stimulus remains elusive. We report MEG evidence that the missing fundamental component of a complex auditory stimulus is recovered in auditory cortex within 100 ms post stimulus onset. METHODOLOGY: Two outside tones of four-tone complex stimuli were held constant (1200 Hz and 2400 Hz), while two inside tones were systematically modulated (between 1300 Hz and 2300 Hz), such that the restored fundamental (also known as "virtual pitch") changed from 100 Hz to 600 Hz. Constructing the auditory stimuli in this manner controls for a number of spectral properties known to modulate the neuromagnetic signal. The tone complex stimuli diverged only in the value of the missing fundamental component. PRINCIPAL FINDINGS: We compared the M100 latencies of these tone complexes to the M100 latencies elicited by their respective pure-tone (spectral pitch) counterparts. The M100 latencies for the tone complexes matched those of their pure sinusoid counterparts, while also replicating the M100 temporal latency response curve found in previous studies. CONCLUSIONS: Our findings suggest that listeners reconstruct the inferred pitch by roughly 100 ms after stimulus onset and are consistent with previous electrophysiological research suggesting that inferential pitch is perceived in early auditory cortex.
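    The restored fundamental in four-tone complexes like these can be illustrated with simple arithmetic: for harmonic components at integer frequencies, the missing fundamental is the greatest common divisor of the component frequencies. A minimal sketch (illustrative only; pitch perception is of course not literally a gcd computation):

    ```python
    from functools import reduce
    from math import gcd

    def virtual_pitch(freqs_hz):
        """Largest F0 (Hz) of which every component is an integer multiple."""
        return reduce(gcd, freqs_hz)

    # Outer tones fixed at 1200 and 2400 Hz, inner tones varied (cf. the stimuli above):
    print(virtual_pitch([1200, 1300, 2300, 2400]))  # 100
    print(virtual_pitch([1200, 1800, 2400]))        # 600
    ```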

  5. Electrophysiological Responses to Auditory Novelty in Temperamentally Different 9-Month-Old Infants

    Science.gov (United States)

    Marshall, Peter J.; Reeb, Bethany C.; Fox, Nathan A.

    2009-01-01

    Behavioral reactivity to novel stimuli in the first half-year of life has been identified as a key aspect of early temperament and a significant precursor of approach and withdrawal tendencies to novelty in later infancy and early childhood. The current study examines the neural signatures of reactivity to novel auditory stimuli in 9-month-old…

  6. Increased Auditory Startle Reflex in Children with Functional Abdominal Pain

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; Benninga, Marc A.; Koelman, Johannes H. T. M.; Tijssen, Marina A. J.

    2010-01-01

    Objective To test the hypothesis that children with abdominal pain-related functional gastrointestinal disorders have a general hypersensitivity for sensory stimuli. Study design Auditory startle reflexes were assessed in 20 children classified according to Rome III classifications of abdominal pain

  7. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  8. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.

  9. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. Using chronically implanted high-density microelectrocorticographic arrays, we simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials elicited by conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway. PMID:24672012
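    The decoding approach can be sketched in miniature. The following is not the authors' pipeline; it is a hedged illustration of a regularized multivariate classifier (here, ridge-regularized least squares onto one-hot labels) applied to synthetic "evoked potential" feature vectors, where the feature dimension stands in for electrodes × time bins and all sizes and noise levels are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_stim, n_train, n_test, n_feat = 4, 30, 10, 96  # 4 vocalizations, 96 ~ electrodes x time bins

    # Each vocalization gets a distinct mean evoked pattern plus trial-by-trial noise.
    templates = rng.normal(0, 1, (n_stim, n_feat))

    def make_trials(n):
        X = np.vstack([t + 0.8 * rng.normal(0, 1, (n, n_feat)) for t in templates])
        y = np.repeat(np.arange(n_stim), n)
        return X, y

    Xtr, ytr = make_trials(n_train)
    Xte, yte = make_trials(n_test)

    # Ridge (L2) regularization keeps the high-dimensional fit stable.
    lam = 10.0
    Y = np.eye(n_stim)[ytr]                              # one-hot labels
    W = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_feat), Xtr.T @ Y)
    pred = (Xte @ W).argmax(axis=1)
    acc = (pred == yte).mean()
    print(round(acc, 2))
    ```

    With separable templates the held-out accuracy sits well above the 0.25 chance level; in the study this kind of performance was compared across cortical sectors.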

  10. Extra-classical tuning predicts stimulus-dependent receptive fields in auditory neurons

    OpenAIRE

    Schneider, David M.; Woolley, Sarah M. N.

    2011-01-01

    The receptive fields of many sensory neurons are sensitive to statistical differences among classes of complex stimuli. For example, excitatory spectral bandwidths of midbrain auditory neurons and the spatial extent of cortical visual neurons differ during the processing of natural stimuli compared to the processing of artificial stimuli. Experimentally characterizing neuronal non-linearities that contribute to stimulus-dependent receptive fields is important for understanding how neurons res...

  11. The perception of coherent and non-coherent auditory objects: a signature in gamma frequency band.

    Science.gov (United States)

    Knief, A; Schulte, M; Bertran, O; Pantev, C

    2000-07-01

    Whether gamma band activity in magnetoencephalographic and electroencephalographic recordings is pertinent to the performance of a gestalt recognition process remains an open question. We investigated the functional relevance of gamma band activity for the perception of auditory objects. An auditory experiment was performed as an analog to the Kanizsa experiment in the visual modality, comprising four different coherent and non-coherent stimuli. For the first time, functional differences in evoked gamma band activity due to the perception of these stimuli were demonstrated by various methods (localization of sources, wavelet analysis and independent component analysis, ICA). Responses to coherent stimuli were found to have more features in common than responses to non-coherent stimuli (e.g. more closely located sources and a smaller number of ICA components). The results point to the existence of a pitch processor in the auditory pathway. PMID:10867289

  12. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Full Text Available Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  13. Prefrontal activity predicts monkeys' decisions during an auditory category task

    Directory of Open Access Journals (Sweden)

    Jung Hoon Lee

    2009-06-01

    Full Text Available The neural correlates that relate auditory categorization to aspects of goal-directed behavior, such as decision-making, are not well understood. Since the prefrontal cortex plays an important role in executive function and the categorization of auditory objects, we hypothesized that neural activity in the prefrontal cortex (PFC) should predict an animal's behavioral reports (decisions) during a category task. To test this hypothesis, we analyzed PFC activity recorded while monkeys categorized human spoken words (Russ et al., 2008b). We found that activity in the ventrolateral PFC, on average, correlated better with the monkeys' choices than with the auditory stimuli. This finding demonstrates a direct link between PFC activity and behavioral choices during a non-spatial auditory task.

  14. The Analysis of Sensory Stimuli of Terror in A Rose for Emily

    Institute of Scientific and Technical Information of China (English)

    尚慧敏

    2015-01-01

    William Faulkner drew impressive pictures of terror in A Rose for Emily through visual, auditory, tactile, and olfactory descriptions. Through biological analysis, one can determine which kinds of stimuli in this work produce terror, how the sensory organs respond to these stimuli, and why readers fear them. The analysis shows that Faulkner's description of terror is grounded in the human system for receiving sensory information and in the mechanism by which terror is produced.

  15. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    OpenAIRE

    David B. Stone; Coffman, Brian A; Juan Bustillo; Cheryl Aine

    2014-01-01

    Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and heal...

  16. Effect of stimuli, transducers and gender on acoustic change complex

    Directory of Open Access Journals (Sweden)

    Hemanth N. Shetty

    2012-08-01

    Full Text Available The objective of this study was to investigate the effect of stimuli, transducers and gender on the latency and amplitude of the acoustic change complex (ACC). The ACC is a multiple overlapping P1-N1-P2 complex reflecting acoustic changes across the entire stimulus. Fifteen males and 15 females in the age range of 18 to 25 years (mean=21.67 years), all with normal hearing, participated in the study. The ACC was recorded using a vertical montage. The naturally produced stimuli /sa/ and /si/ were presented through an insert earphone or loudspeaker to record the ACC. The ACCs obtained from the different stimuli and transducers in male and female participants were analyzed using mixed analysis of variance. Dependent and independent t-tests were performed when indicated. There was a significant difference in the latency of 2N1 at the transition, with the latency for /sa/ being earlier, but not at the onset portions of the ACC. There was no significant difference in the amplitude of the ACC between the stimuli. Among the transducers, there was no significant difference in the latency or amplitude of the ACC for either the /sa/ or /si/ stimulus. Female participants showed significantly earlier 2N1 latency and larger N1 and 2P2 amplitudes than male participants. The ACC provides important insight into detecting subtle spectral changes in each stimulus. Among the transducers, no difference in the ACC was noted, as the spectra of the stimuli delivered were within the frequency response of the transducers. The earlier 2N1 latency and larger N1 and 2P2 amplitudes in female participants could be due to smaller head circumference. The findings of this study will be useful in determining the capacity of the auditory pathway to detect subtle spectral changes in a stimulus at the level of the auditory cortex.

  17. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  18. Computational characterization of visually-induced auditory spatial adaptation

    Directory of Open Access Journals (Sweden)

    David R Wozny

    2011-11-01

    Full Text Available Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, where the underlying causal structure of the stimuli is inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterization of perceptual inference within a static environment, and therefore, little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory-visual spatial conflict, known as the Ventriloquist Aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), a change in the strength of the auditory bias (i.e., the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory-visual discrepancy is attributed by the nervous system to sensory representation error and, as a result, the sensory map of space is recalibrated to correct the error.
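    The class of models described here can be made concrete. The sketch below is a minimal, assumption-laden implementation of a Bayesian causal inference observer in the style of this literature (all parameter values are invented for illustration): it returns the posterior probability of a common cause and the model-averaged auditory location estimate, which is pulled toward the visual cue when the two cues are plausibly attributable to one source:

    ```python
    import numpy as np

    def bci_auditory_estimate(xA, xV, sigA=4.0, sigV=1.0, sigP=15.0, p_common=0.5):
        """Posterior P(common cause) and model-averaged auditory estimate (deg)."""
        a, v, p = sigA**2, sigV**2, sigP**2
        # Likelihood of the two sensory samples under one vs. two causes
        # (closed forms for Gaussian likelihoods and a zero-mean Gaussian prior).
        denom = a * v + a * p + v * p
        like_c1 = np.exp(-0.5 * ((xA - xV)**2 * p + xA**2 * v + xV**2 * a) / denom) \
                  / (2 * np.pi * np.sqrt(denom))
        like_c2 = (np.exp(-0.5 * xA**2 / (a + p)) / np.sqrt(2 * np.pi * (a + p))
                   * np.exp(-0.5 * xV**2 / (v + p)) / np.sqrt(2 * np.pi * (v + p)))
        post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
        # Optimal auditory estimates under each causal structure.
        s_c1 = (xA / a + xV / v) / (1 / a + 1 / v + 1 / p)
        s_c2 = (xA / a) / (1 / a + 1 / p)
        return post_c1, post_c1 * s_c1 + (1 - post_c1) * s_c2

    post_near, est_near = bci_auditory_estimate(10.0, 12.0)   # cues nearly coincide
    post_far, _ = bci_auditory_estimate(10.0, 40.0)           # large discrepancy
    print(post_near > post_far, 10.0 < est_near < 12.0)       # True True
    ```

    Fitting the free parameters (likelihood means and spreads, prior mean and spread, p_common) per observer before and after exposure is what lets the study attribute the aftereffect to a shift in the auditory likelihood specifically.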

  19. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization, whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  20. Neurophysiological Mechanisms of Auditory Information Processing in Adolescence: A Study on Sex Differences.

    Science.gov (United States)

    Bakos, Sarolta; Töllner, Thomas; Trinkl, Monika; Landes, Iris; Bartling, Jürgen; Grossheinrich, Nicola; Schulte-Körne, Gerd; Greimel, Ellen

    2016-04-01

    To date, little is known about sex differences in the neurophysiological correlates underlying auditory information processing. In the present study, auditory evoked potentials were elicited in typically developing male (n = 15) and female (n = 14) adolescents (13-18 years) during an auditory oddball task. Girls displayed lower N100 and P300 amplitudes to targets than boys. Larger N100 amplitudes in adolescent boys might indicate higher neural sensitivity to changes in incoming auditory information. The P300 findings point toward sex differences in auditory working memory and suggest that adolescent boys may allocate more attentional resources than adolescent girls when processing relevant auditory stimuli. PMID:27379950

  1. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen eStekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both the spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  2. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265

  3. Neuronal activity in primate prefrontal cortex related to goal-directed behavior during auditory working memory tasks.

    Science.gov (United States)

    Huang, Ying; Brosch, Michael

    2016-06-01

    Prefrontal cortex (PFC) has been documented to play critical roles in goal-directed behaviors, like representing goal-relevant events and working memory (WM). However, neurophysiological evidence for such roles of PFC has been obtained mainly with visual tasks but rarely with auditory tasks. In the present study, we tested roles of PFC in auditory goal-directed behaviors by recording local field potentials in the auditory region of left ventrolateral PFC while a monkey performed auditory WM tasks. The tasks consisted of multiple events and required the monkey to change its mental states to achieve the reward. The events were auditory and visual stimuli, as well as specific actions. Mental states were engaging in the tasks and holding task-relevant information in auditory WM. We found that, although based on recordings from one hemisphere in one monkey only, PFC represented multiple events that were important for achieving reward, including auditory and visual stimuli like turning on and off an LED, as well as bar touch. The responses to auditory events depended on the tasks and on the context of the tasks. This provides support for the idea that neuronal representations in PFC are flexible and can be related to the behavioral meaning of stimuli. We also found that engaging in the tasks and holding information in auditory WM were associated with persistent changes of slow potentials, both of which are essential for auditory goal-directed behaviors. Our study, on a single hemisphere in a single monkey, reveals roles of PFC in auditory goal-directed behaviors similar to those in visual goal-directed behaviors, suggesting that functions of PFC in goal-directed behaviors are probably common across the auditory and visual modality. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26874071

  4. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only

  5. Processing of harmonics in the lateral belt of macaque auditory cortex

    Directory of Open Access Journals (Sweden)

    YukikoKikuchi

    2014-07-01

    Full Text Available Many speech sounds and animal vocalizations contain components, referred to as complex tones, that consist of a fundamental frequency (F0) and higher harmonics. In this study we examined single-unit activity recorded in the core (A1) and lateral belt (LB) areas of auditory cortex in two rhesus monkeys as they listened to pure tones and pitch-shifted conspecific vocalizations ('coos'). The latter consisted of complex-tone segments in which F0 was matched to a corresponding pure-tone stimulus. In both animals, neuronal latencies to pure-tone stimuli at the best frequency (BF) were ~10 to 15 ms longer in LB than in A1. This might be expected, since LB is considered to be at a hierarchically higher level than A1. On the other hand, the latency of LB responses to coos was ~10 to 20 ms shorter than to the corresponding pure-tone BF, suggesting facilitation in LB by the harmonics. This latency reduction by coos was not observed in A1, resulting in similar coo latencies in A1 and LB. Multi-peaked neurons were present in both A1 and LB; however, harmonically-related peaks were observed in LB for both early and late response components, whereas in A1 they were observed only for late components. Our results suggest that harmonic features, such as relationships between specific frequency intervals of communication calls, are processed at relatively early stages of the auditory cortical pathway, but preferentially in LB.

  6. Neutral versus Emotional Human Stimuli Processing in Children with Pervasive Developmental Disorders not Otherwise Specified

    Science.gov (United States)

    Vannetzel, Leonard; Chaby, Laurence; Cautru, Fabienne; Cohen, David; Plaza, Monique

    2011-01-01

    Pervasive developmental disorder not otherwise specified (PDD-NOS) represents up to two-thirds of autism spectrum disorders; however, it is usually described in terms of the symptoms not shared by autism. The study explores processing of neutral and emotional human stimuli (by auditory, visual and multimodal channels) in children with PDD-NOS (n =…

  7. Moving Objects in the Barn Owl's Auditory World.

    Science.gov (United States)

    Langemann, Ulrike; Krumm, Bianca; Liebner, Katharina; Beutelmann, Rainer; Klump, Georg M

    2016-01-01

    Barn owls are keen hunters of moving prey. They have evolved an auditory system with impressive anatomical and physiological specializations for localizing their prey. Here we present behavioural data on the owl's sensitivity for discriminating acoustic motion direction in azimuth that, for the first time, allow a direct comparison of neuronal and perceptual sensitivity for acoustic motion in the same model species. We trained two birds to report a change in motion direction within a series of repeating wideband noise stimuli. For any trial the starting point, motion direction, velocity (53-2400°/s), duration (30-225 ms) and angular range (12-72°) of the noise sweeps were randomized. Each test stimulus had a motion direction being opposite to that of the reference stimuli. Stimuli were presented in the frontal or the lateral auditory space. The angular extent of the motion had a large effect on the owl's discrimination sensitivity allowing a better discrimination for a larger angular range of the motion. In contrast, stimulus velocity or stimulus duration had a smaller, although significant effect. Overall there was no difference in the owls' behavioural performance between "inward" noise sweeps (moving from lateral to frontal) compared to "outward" noise sweeps (moving from frontal to lateral). The owls did, however, respond more often to stimuli with changing motion direction in the frontal compared to the lateral space. The results of the behavioural experiments are discussed in relation to the neuronal representation of motion cues in the barn owl auditory midbrain. PMID:27080662

  8. Cholinergic modulation of auditory steady-state response in the auditory cortex of the freely moving rat.

    Science.gov (United States)

    Zhang, J; Ma, L; Li, W; Yang, P; Qin, L

    2016-06-01

    As disturbance in the auditory steady-state response (ASSR) has been consistently found in many neuropsychiatric disorders, such as autism spectrum disorder and schizophrenia, there is considerable interest in the development of translational rat models to elucidate the underlying neural and neurochemical mechanisms involved in the ASSR. This is the first study to investigate the effects of the non-selective muscarinic antagonist scopolamine and the cholinesterase inhibitor donepezil (also in combination with scopolamine) on the ASSR. We recorded local field potentials through chronic microelectrodes implanted in the auditory cortex of freely moving rats. ASSRs were recorded in response to auditory stimuli delivered over a range of frequencies (10-80 Hz) and averaged over 60 trials. We found that a single dose of scopolamine produced a temporal attenuation of the response to auditory stimuli; the greatest attenuation occurred at 40 Hz. Time-frequency analysis revealed deficits in both power and phase-locking at 40 Hz. Donepezil augmented 40-Hz steady-state power and phase-locking. Scopolamine combined with donepezil had an enhanced effect on the phase-locking, but not the power, of the ASSR. These changes induced by cholinergic drugs suggest an involvement of muscarinic neurotransmission in auditory processing and provide a rodent model for investigating the neurochemical mechanisms of the neurophysiological deficits seen in patients. PMID:26964684
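
    The phase-locking measure used in ASSR studies like this one is commonly computed as inter-trial phase coherence (ITPC): the length of the mean unit phase vector at the stimulation frequency across trials. The sketch below (standard-library Python on synthetic signals; an illustration of the measure, not the study's analysis pipeline) contrasts phase-locked trials with phase-jittered ones:

```python
import cmath
import math
import random

random.seed(0)

def phase_at(signal, freq, fs):
    """Phase of the discrete Fourier component of `signal` at `freq` Hz."""
    return cmath.phase(sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
                           for k, x in enumerate(signal)))

def itpc(trials, freq, fs):
    """Inter-trial phase coherence: length of the mean unit phase vector (0-1)."""
    vectors = [cmath.exp(1j * phase_at(trial, freq, fs)) for trial in trials]
    return abs(sum(vectors) / len(vectors))

fs = 1000                              # sampling rate, Hz
t = [k / fs for k in range(250)]       # 250-ms analysis window per trial

# Phase-locked trials: the same 40-Hz phase on every trial, plus noise.
locked = [[math.sin(2 * math.pi * 40 * x) + random.gauss(0, 0.5) for x in t]
          for _ in range(60)]
# Jittered trials: a random 40-Hz phase on each trial.
jittered = [[math.sin(2 * math.pi * 40 * x + random.uniform(0, 2 * math.pi))
             + random.gauss(0, 0.5) for x in t] for _ in range(60)]

itpc_locked = itpc(locked, 40, fs)
itpc_jittered = itpc(jittered, 40, fs)
print(f"locked ITPC {itpc_locked:.2f}, jittered ITPC {itpc_jittered:.2f}")
```

    A value near 1 means trials share the same 40-Hz phase; a value near 0 means phases are random across trials, which is the kind of phase-locking deficit reported after scopolamine.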

  9. The Essential Complexity of Auditory Receptive Fields.

    Science.gov (United States)

    Thorson, Ivar L; Liénard, Jean; David, Stephen V

    2015-12-01

    Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture lead to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
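
    The factorization constraint described above can be illustrated with a toy rank-1 STRF: instead of fitting one weight per spectro-temporal bin, the filter is the outer product of a short spectral vector and a short temporal vector. The dimensions and random weights below are made up for illustration; this is not the parameterization fitted in the study:

```python
import random

random.seed(0)

F, T = 30, 25            # spectral channels x time lags
full_params = F * T      # full FIR STRF: one free weight per (freq, lag) bin
rank1_params = F + T     # factorized: one spectral + one temporal filter

# A rank-1 STRF is the outer product of a spectral and a temporal filter.
spectral = [random.gauss(0, 1) for _ in range(F)]
temporal = [random.gauss(0, 1) for _ in range(T)]
strf = [[s * w for w in temporal] for s in spectral]

def predict(strf, patch):
    """Linear response: STRF-weighted sum over a spectrogram history patch."""
    return sum(strf[f][t] * patch[f][t]
               for f in range(len(strf)) for t in range(len(strf[0])))

print(full_params, rank1_params)  # 750 free weights vs. 55
```

    The parameter saving (55 vs. 750 here) is the same order as the roughly tenfold reduction the authors report for their best parameterized models.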

  10. Sparse representation of sounds in the unanesthetized auditory cortex.

    Directory of Open Access Journals (Sweden)

    Tomás Hromádka

    2008-01-01

    How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
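
    The sparseness figures quoted here (high rates in under 5% of neurons, a lognormal rate distribution) can be illustrated with a quick simulation. The lognormal parameters below are arbitrary choices for illustration, not values fitted from the study:

```python
import random

random.seed(1)
N = 100_000

# Lognormal firing-rate distribution (illustrative parameters: median
# exp(0.5) ~ 1.6 spikes/s, heavy right tail).
rates = [random.lognormvariate(0.5, 1.5) for _ in range(N)]
mean_rate = sum(rates) / N

# Fraction of "highly active" neurons at any instant.
frac_high = sum(r > 20 for r in rates) / N
print(f"mean rate {mean_rate:.1f} sp/s; {100 * frac_high:.1f}% above 20 sp/s")
```

    Because the lognormal is strongly right-skewed, the mean sits well above the median: most simulated neurons fire at a few spikes per second or less, while a small tail of highly active neurons carries most of the spikes.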

  11. The processing of visual and auditory information for reaching movements.

    Science.gov (United States)

    Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc

    2016-09-01

    Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets. PMID:26253323

  12. Spatial audition in a static virtual environment: the role of auditory-visual interaction

    Directory of Open Access Journals (Sweden)

    Isabelle Viaud-Delmon

    2009-04-01

    The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interaction obeys complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.

  13. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  14. A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images : Auditory Classification Images

    OpenAIRE

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remains undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique t...

  15. Neural masking by sub-threshold electric stimuli: animal and computer model results.

    Science.gov (United States)

    Miller, Charles A; Woo, Jihwan; Abbas, Paul J; Hu, Ning; Robinson, Barbara K

    2011-04-01

    Electric stimuli can prosthetically excite auditory nerve fibers to partially restore sensory function to individuals impaired by profound or severe hearing loss. While basic response properties of electrically stimulated auditory nerve fibers (ANF) are known, responses to complex, time-changing stimuli used clinically are inadequately understood. We report that forward-masker pulse trains can enhance and reduce ANF responsiveness to subsequent stimuli and the novel observation that sub-threshold (nonspike-evoking) electric trains can reduce responsiveness to subsequent pulse-train stimuli. The effect is observed in the responses of cat ANFs and shown by a computational biophysical ANF model that simulates rate adaptation through integration of external potassium cation (K) channels. Both low-threshold (i.e., Klt) and high-threshold (Kht) channels were simulated at each node of Ranvier. Model versions without Klt channels did not produce the sub-threshold effect. These results suggest that some such accumulation mechanism, along with Klt channels, may underlie sub-threshold masking observed in cat ANF responses. As multichannel auditory prostheses typically present sub-threshold stimuli to various ANF subsets, there is clear relevance of these findings to clinical situations. PMID:21080206

  16. Auditory processing deficits among language-learning disordered children and adults

    Science.gov (United States)

    Wayland, Ratree; Lombardino, Linda

    2003-10-01

    It has been estimated that approximately 5%-9% of school-aged children in the United States are diagnosed with some kind of learning disorders. Moreover, previous research has established that many of these children exhibited perceptual deficits in response to auditory stimuli, suggesting that an auditory perceptual deficit may underlie their learning disabilities. The goal of this research is to examine the ability to auditorily process speech and nonspeech stimuli among language-learning disabled (LLD) children and adults. The two questions that will be addressed in this study are: (a) Are there subtypes of LLD children/adults based on their auditory processing deficit, and (b) Is there any relationship between types of auditory processing deficits and types of language deficits as measured by a battery of psychoeducational tests.

  17. Different patterns of auditory cortex activation revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)

  18. An auditory-periphery model of the effects of acoustic trauma on auditory nerve responses

    Science.gov (United States)

    Bruce, Ian C.; Sachs, Murray B.; Young, Eric D.

    2003-01-01

    Acoustic trauma degrades the auditory nerve's tonotopic representation of acoustic stimuli. Recent physiological studies have quantified the degradation in responses to the vowel eh and have investigated amplification schemes designed to restore a more correct tonotopic representation than is achieved with conventional hearing aids. However, it is difficult from the data to quantify how much different aspects of the cochlear pathology contribute to the impaired responses. Furthermore, extensive experimental testing of potential hearing aids is infeasible. Here, both of these concerns are addressed by developing models of the normal and impaired auditory peripheries that are tested against a wide range of physiological data. The effects of both outer and inner hair cell status on model predictions of the vowel data were investigated. The modeling results indicate that impairment of both outer and inner hair cells contribute to degradation in the tonotopic representation of the formant frequencies in the auditory nerve. Additionally, the model is able to predict the effects of frequency-shaping amplification on auditory nerve responses, indicating the model's potential suitability for more rapid development and testing of hearing aid schemes.

  19. Learning effects of piano playing on tactile recognition of sequential stimuli.

    Science.gov (United States)

    Hatta, T; Ejiri, A

    1989-01-01

    To examine the effect of learning experiences of piano playing on a tactile sequential recognition task, two experiments were conducted. In the first experiment, pianists and control subjects were given sequential tactile stimuli and were asked to report the stimulated fingers and their order. The pianists showed a left-hand superiority and performed better than the control group. In the second experiment, the skilled pianists and the control subjects were given both sequential tactile stimuli and auditory stimuli (unrelated melodies) simultaneously. The sequential stimulus recognition of the skilled pianists was interfered with by the presentation of the unrelated melody, and this tendency was more prominent for their left hand, while the performance of the control subjects was not affected by the presentation of the melody. These results suggest that pianists employed a special strategy, such as transforming tactile stimuli into something like a melody, to improve their performance. Based upon these results, effects of learning experiences on hemisphere function are discussed. PMID:2615935

  20. Magnitude judgments of loudness change for discrete, dynamic, and hybrid stimuli.

    Science.gov (United States)

    Pastore, Richard E; Flint, Jesse

    2011-04-01

    Recent investigations of loudness change within stimuli have identified differences as a function of direction of change and power range (e.g., Canévet, Acustica, 62, 2136-2142, 1986; Neuhoff, Nature, 395, 123-124, 1998), with claims of differences between dynamic and static stimuli. Experiment 1 provides the needed direct empirical evaluation of loudness change across static, dynamic, and hybrid stimuli. Consistent with recent findings for dynamic stimuli, quantitative and qualitative differences in the pattern of loudness change were found as a function of the direction of power change. With identical patterns of loudness change, only quantitative differences were found across stimulus types. In Experiment 2, points of subjective loudness equality (PSEs) provided additional information about loudness judgments for the static and dynamic stimuli. Because the quantitative differences across stimulus types exceed the magnitude that could be expected based upon temporal integration by the auditory system, other factors need to be, and are, considered. PMID:21264709

  1. Overriding auditory attentional capture

    OpenAIRE

    Dalton, Polly; Lavie, Nilli

    2007-01-01

    Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when ...

  2. [Central auditory prosthesis].

    Science.gov (United States)

    Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M

    2009-06-01

    Deaf patients with severe sensory hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of a bilateral acoustic neuroma. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open set speech perception. In the search for alternative procedures our research group in collaboration with Cochlear Ltd. (Australia) developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has the potential as a new target for an auditory prosthesis as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results as well as the background concepts of ABI and AMI. PMID:19517084

  3. Sensory Responses during Sleep in Primate Primary and Secondary Auditory Cortex

    OpenAIRE

    Issa, Elias B.; Wang, Xiaoqin

    2008-01-01

    Most sensory stimuli do not reach conscious perception during sleep. It has been thought that the thalamus prevents the relay of sensory information to cortex during sleep, but the consequences for cortical responses to sensory signals in this physiological state remain unclear. We recorded from two auditory cortical areas downstream of the thalamus in naturally sleeping marmoset monkeys. Single neurons in primary auditory cortex either increased or decreased their responses during sleep comp...

  4. Experience-based auditory predictions modulate brain activity to silence as do real sounds

    OpenAIRE

    Chouiter, Leila; Tzovara, Athina; Dieguez, Sebastian; Annoni, Jean-Marie; Magezi, David; De Lucia, Marzia; Spierer, Lucas

    2016-01-01

    Interactions between stimuli's acoustic features and experience-based internal models of the environment enable listeners to compensate for the disruptions in auditory streams that are regularly encountered in noisy environments. However, whether auditory gaps are filled in predictively or restored a posteriori remains unclear. The current lack of positive statistical evidence that internal models can actually shape brain activity as would real sounds precludes accepting predictive accou...

  5. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    OpenAIRE

    Pegah Afra; Michael Funke; Fumisuke Matsuo

    2009-01-01

    Pegah Afra, Michael Funke, Fumisuke Matsuo. Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as wel...

  6. Auditory priming effects on the production of second language speech sounds

    OpenAIRE

    Leong, Lindsay Ann

    2013-01-01

    Research shows that speech perception and production are connected, however, the extent to which auditory speech stimuli can affect second language production has been less thoroughly explored. The current study presents Mandarin learners of English with an English vowel as an auditory prime (/i/, /ɪ/, /u/, /ʊ/) followed by an English target word containing either a tensity congruent (e.g. prime: /i/ - target: “peach”) or incongruent (e.g. prime: /i/ - target: “pitch”) vowel. Pronunciation of...

  7. BALDEY: A database of auditory lexical decisions.

    Science.gov (United States)

    Ernestus, Mirjam; Cutler, Anne

    2015-01-01

    In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses. PMID:25397865

  8. Auditory Discrimination Learning: Role of Working Memory.

    Science.gov (United States)

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience. PMID:26799068

  9. The early component of middle latency auditory-evoked potentials in the process of deviance detection.

    Science.gov (United States)

    Li, Linfeng; Gong, Qin

    2016-07-01

    The aim of the present study was to investigate both the encoding mechanism and the process of deviance detection when deviant stimuli were presented in various patterns in an environment featuring repetitive sounds. In adults with normal hearing, middle latency responses were recorded within an oddball paradigm containing complex tones or speech sounds, wherein deviant stimuli featured different change patterns. For both complex tones and speech sounds, the Na and Pa components of middle latency responses showed an increase in the mean amplitude and a reduction in latency when comparing rare deviant stimuli with repetitive standard stimuli in a stimulation block. However, deviant stimuli with a rising frequency induced signals with smaller amplitudes than other deviant stimuli. The present findings indicate that deviant stimuli with different change patterns induce differing responses in the primary auditory cortex. In addition, the Pa components of speech sounds typically feature a longer latency and similar mean amplitude compared with complex tones, which suggests that the auditory system requires more complex processing for the analysis of speech sounds before processing in the auditory cortex. PMID:27203294

  10. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing

    Science.gov (United States)

    Wilf, Meytal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812

  11. Variability and information content in auditory cortex spike trains during an interval-discrimination task.

    Science.gov (United States)

    Abolafia, Juan M; Martinez-Garcia, M; Deco, G; Sanchez-Vives, M V

    2013-11-01

    Processing of temporal information is key in auditory processing. In this study, we recorded single-unit activity from the auditory cortex of rats while they performed an interval-discrimination task. The animals had to decide whether two auditory stimuli were separated by either 150 or 300 ms and nose-poke to the left or to the right accordingly. The spike firing of single neurons in the auditory cortex was then compared in engaged vs. idle brain states. We found that spike-firing variability measured with the Fano factor was markedly reduced, not only during stimulation, but also in between stimuli in engaged trials. We next explored whether this decrease in variability was associated with increased information encoding. Our information theory analysis revealed increased information content in auditory responses during engagement compared with idle states, in particular in the responses to task-relevant stimuli. Altogether, we demonstrate that task engagement significantly modulates the coding properties of auditory cortical neurons during an interval-discrimination task. PMID:23945780
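
    The Fano factor used here is simply the variance-to-mean ratio of spike counts across repeated trials; values below 1 indicate firing more regular than a Poisson process. A minimal sketch with hypothetical spike counts (illustrative numbers, not data from the study):

```python
import statistics

def fano_factor(counts):
    """Variance-to-mean ratio of spike counts across repeated trials."""
    mean = statistics.mean(counts)
    return statistics.variance(counts) / mean if mean > 0 else float("nan")

# Hypothetical per-trial spike counts in the same response window.
idle_counts = [2, 7, 1, 9, 4, 0, 8, 3, 6, 10]      # variable firing
engaged_counts = [5, 6, 5, 4, 6, 5, 5, 4, 6, 5]    # reliable firing

print(f"idle Fano {fano_factor(idle_counts):.2f}, "
      f"engaged Fano {fano_factor(engaged_counts):.2f}")
```

    A drop in the Fano factor during task engagement, as reported above, means the same stimulus evokes more reproducible spike counts from trial to trial.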

  13. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. 
This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  14. Stimulator with arbitrary waveform for auditory evoked potentials

    Energy Technology Data Exchange (ETDEWEB)

    Martins, H R; Romao, M; Placido, D; Provenzano, F; Tierra-Criollo, C J [Universidade Federal de Minas Gerais (UFMG), Departamento de Engenharia Eletrica (DEE), Nucleo de Estudos e Pesquisa em Engenharia Biomedica NEPEB, Av. Ant. Carlos, 6627, sala 2206, Pampulha, Belo Horizonte, MG, 31.270-901 (Brazil)

    2007-11-15

    Technological improvements benefit many medical areas. Audiometric exams involving auditory evoked potentials can support better diagnoses of auditory disorders. This paper proposes the development of a stimulator based on a digital signal processor. The stimulator is the first step of an auditory evoked potential system based on the ADSP-BF533 EZ KIT LITE (Analog Devices Company - USA). The stimulator can generate arbitrary waveforms such as sine waves, amplitude-modulated tones, pulses, bursts, and pips. The waveforms are generated through a graphical interface programmed in C++, in which the user defines the parameters of the waveform. Furthermore, the user can set exam parameters such as the number of stimuli, time with stimulation (Time ON), and time without stimulus (Time OFF). Future work will implement the remaining parts of the system, including electroencephalogram acquisition and signal processing to estimate and analyze the evoked potential.

  15. Altered intrinsic connectivity of the auditory cortex in congenital amusia.

    Science.gov (United States)

    Leveque, Yohana; Fauvel, Baptiste; Groussard, Mathilde; Caclin, Anne; Albouy, Philippe; Platel, Hervé; Tillmann, Barbara

    2016-07-01

    Congenital amusia, a neurodevelopmental disorder of music perception and production, has been associated with abnormal anatomical and functional connectivity in a right frontotemporal pathway. To investigate whether spontaneous connectivity in brain networks involving the auditory cortex is altered in the amusic brain, we ran a seed-based connectivity analysis, contrasting at-rest functional MRI data of amusic and matched control participants. Our results reveal reduced frontotemporal connectivity in amusia during resting state, as well as an overconnectivity between the auditory cortex and the default mode network (DMN). The findings suggest that the auditory cortex is intrinsically more engaged toward internal processes and less available to external stimuli in amusics compared with controls. Beyond amusia, our findings provide new evidence for the link between cognitive deficits in pathology and abnormalities in the connectivity between sensory areas and the DMN at rest. PMID:27009161

  16. Hierarchical modeling of active materials

    International Nuclear Information System (INIS)

    Intelligent (or smart) materials are increasingly becoming key materials for use in actuators and sensors. If an intelligent material is used as a sensor, it can be embedded in a variety of structures, functioning as a health monitoring system that extends their service life with high reliability. If an intelligent material is used as the active material in an actuator, it plays a key role in producing the dynamic movement of the actuator under a set of stimuli. This talk covers two different active materials for actuators: (1) piezoelectric laminates with an FGM microstructure, and (2) ferromagnetic shape memory alloys (FSMA). The advantage of the FGM piezo laminate is an enhanced fatigue life while maintaining large bending displacement; that of the FSMA is fast actuation combined with large force and stroke capability. Hierarchical modeling of these active materials is a key design step in optimizing their microstructures to enhance performance. I will briefly discuss hierarchical modeling of both materials. For the FGM piezo laminate, we use both a micromechanical model and laminate theory, while for the FSMA, modeling that interfaces the nanostructure, microstructure, and macro-behavior is discussed. (author)

  17. Visual-induced expectations modulate auditory cortical responses

    OpenAIRE

    van Wassenhove, Virginie; Grzeczkowski, Lukasz

    2015-01-01

    Active sensing has important consequences on multisensory processing (Schroeder et al., 2010). Here, we asked whether in the absence of saccades, the position of the eyes and the timing of transient color changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the “where” and the “when” of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or cen...

  18. Auditory Brainstem Circuits That Mediate the Middle Ear Muscle Reflex

    OpenAIRE

    Mukerji, Sudeep; Windsor, Alanna Marie; Lee, Daniel J.

    2010-01-01

    The middle ear muscle (MEM) reflex is one of two major descending systems to the auditory periphery. There are two middle ear muscles (MEMs): the stapedius and the tensor tympani. In man, the stapedius contracts in response to intense low frequency acoustic stimuli, exerting forces perpendicular to the stapes superstructure, increasing middle ear impedance and attenuating the intensity of sound energy reaching the inner ear (cochlea). The tensor tympani is believed to contract in response to ...

  19. Vibrotactile activation of the auditory cortices in deaf versus hearing adults.

    Science.gov (United States)

    Auer, Edward T; Bernstein, Lynne E; Sungkarat, Witaya; Singh, Manbir

    2007-05-01

    Neuroplastic changes in auditory cortex as a result of lifelong perceptual experience were investigated. Adults with early-onset deafness and long-term hearing aid experience were hypothesized to have undergone auditory cortex plasticity due to somatosensory stimulation. Vibrations were presented on the hand of deaf and normal-hearing participants during functional MRI. Vibration stimuli were derived from speech or were a fixed frequency. Higher, more widespread activity was observed within auditory cortical regions of the deaf participants for both stimulus types. Life-long somatosensory stimulation due to hearing aid use could explain the greater activity observed with deaf participants. PMID:17426591

  20. Auditory Spatial Coding Flexibly Recruits Anterior, but Not Posterior, Visuotopic Parietal Cortex.

    Science.gov (United States)

    Michalka, Samantha W; Rosen, Maya L; Kong, Lingqiang; Shinn-Cunningham, Barbara G; Somers, David C

    2016-03-01

    Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996

  1. Speech motor learning changes the neural response to both auditory and somatosensory signals

    Science.gov (United States)

    Ito, Takayuki; Coppola, Joshua H.; Ostry, David J.

    2016-01-01

    In the present paper, we present evidence for the idea that speech motor learning is accompanied by changes to the neural coding of both auditory and somatosensory stimuli. Participants in our experiments undergo adaptation to altered auditory feedback, an experimental model of speech motor learning which, like visuo-motor adaptation in limb movement, requires that participants change their speech movements and associated somatosensory inputs to correct for systematic real-time changes to auditory feedback. We measure the sensory effects of adaptation by examining changes to auditory and somatosensory event-related responses. We find that adaptation results in progressive changes to speech acoustical outputs that serve to correct for the perturbation. We also observe changes in both auditory and somatosensory event-related responses that are correlated with the magnitude of adaptation. These results indicate that sensory change occurs in conjunction with the processes involved in speech motor adaptation. PMID:27181603

  2. Pitch-induced responses in the right auditory cortex correlate with musical ability in normal listeners.

    Science.gov (United States)

    Puschmann, Sebastian; Özyurt, Jale; Uppenkamp, Stefan; Thiel, Christiane M

    2013-10-23

    Previous work compellingly shows the existence of functional and structural differences in human auditory cortex related to superior musical abilities observed in professional musicians. In this study, we investigated the relationship between musical abilities and auditory cortex activity in normal listeners who had not received a professional musical education. We used functional MRI to measure auditory cortex responses related to auditory stimulation per se and the processing of pitch and pitch changes, which represents a prerequisite for the perception of musical sequences. Pitch-evoked responses in the right lateral portion of Heschl's gyrus were correlated positively with the listeners' musical abilities, which were assessed using a musical aptitude test. In contrast, no significant relationship was found for noise stimuli, lacking any musical information, and for responses induced by pitch changes. Our results suggest that superior musical abilities in normal listeners are reflected by enhanced neural encoding of pitch information in the auditory system. PMID:23995293

  3. A Psychophysical Imaging Method Evidencing Auditory Cue Extraction during Speech Perception: A Group Analysis of Auditory Classification Images

    OpenAIRE

    Varnet, Léo; Knoblauch, Kenneth; Serniclaes, Willy; Meunier, Fanny; Hoen, Michel

    2015-01-01

    Although there is a large consensus regarding the involvement of specific acoustic cues in speech perception, the precise mechanisms underlying the transformation from continuous acoustical properties into discrete perceptual units remains undetermined. This gap in knowledge is partially due to the lack of a turnkey solution for isolating critical speech cues from natural stimuli. In this paper, we describe a psychoacoustic imaging method known as the Auditory Classification Image technique t...

  4. Multimodal Hierarchical Dirichlet Process-based Active Perception

    OpenAIRE

    Taniguchi, Tadahiro; Takano, Toshiaki; Yoshino, Ryo

    2015-01-01

    In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine t...

  5. The role of auditory transient and deviance processing in distraction of task performance: a combined behavioral and event-related brain potential study

    OpenAIRE

    Stefan Berti

    2013-01-01

    Distraction of goal-oriented performance by a sudden change in the auditory environment is an everyday life experience. Different types of changes can be distracting, including a sudden onset of a transient sound and a slight deviation of otherwise regular auditory background stimulation. With regard to deviance detection, it is assumed that slight changes in a continuous sequence of auditory stimuli are detected by a predictive coding mechanisms and it has been demonstrated that this mechani...

  6. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
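
In Ando's model, spatial sensations are derived from the interaural cross-correlation function. A minimal sketch of the interaural cross-correlation coefficient (IACC) — the peak of the normalized cross-correlation of the two ear signals over lags of roughly ±1 ms — is shown below; the tone and sampling rate are illustrative assumptions, not values from the book:

```python
import math

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: peak magnitude of the
    normalized cross-correlation of the ear signals over +/- max_lag_ms."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = math.sqrt(sum(x * x for x in left) * sum(y * y for y in right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        acc = sum(left[n] * right[n + lag]
                  for n in range(len(left))
                  if 0 <= n + lag < len(right))
        best = max(best, abs(acc) / norm)
    return best

# Illustrative ear signals: an identical 500 Hz tone at both ears
fs = 8000
tone = [math.sin(2 * math.pi * 500 * n / fs) for n in range(256)]
print(round(iacc(tone, tone, fs), 2))  # 1.0 -> fully correlated, low diffuseness
```

A high IACC corresponds to a compact, well-localized auditory image; decorrelated ear signals (low IACC) correspond to diffuseness and envelopment, the qualities the concert-hall design work above seeks to control.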

  7. Integration of Auditory and Visual Communication Information in the Primate Ventrolateral Prefrontal Cortex

    OpenAIRE

    Sugihara, T.; Diltz, M. D.; Averbeck, B. B.; Romanski, L. M.

    2006-01-01

    The integration of auditory and visual stimuli is crucial for recognizing objects, communicating effectively, and navigating through our complex world. Although the frontal lobes are involved in memory, communication, and language, there has been no evidence that the integration of communication information occurs at the single-cell level in the frontal lobes. Here, we show that neurons in the macaque ventrolateral prefrontal cortex (VLPFC) integrate audiovisual communication stimuli. The mul...

  8. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...... of sound as an active component in shaping urban environments. As urban conditions spread globally, new scales, shapes, and forms of communities appear and call for new distinctions and models in the study and representation of sonic environments. Particularly so, since urban environments

  9. Headphone localization of speech stimuli

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1991-01-01

    Recently, three-dimensional acoustic display systems have been developed that synthesize virtual sound sources over headphones based on filtering by Head-Related Transfer Functions (HRTFs), the direction-dependent spectral changes caused primarily by the outer ears. Here, 11 inexperienced subjects judged the apparent spatial location of headphone-presented speech stimuli filtered with non-individualized HRTFs. About half of the subjects 'pulled' their judgements toward either the median or the lateral-vertical planes, and estimates were almost always elevated. Individual differences were pronounced for the distance judgements; 15 to 46 percent of stimuli were heard inside the head, with the shortest estimates near the median plane. The results suggest that most listeners can obtain useful azimuth information from speech stimuli filtered by non-individualized HRTFs. Measurements of localization error and reversal rates are comparable with a previous study that used broadband noise stimuli.
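
The HRTF-based synthesis described above amounts to filtering the mono source with one head-related impulse response (HRIR) per ear. A toy sketch with hypothetical one- and three-tap HRIRs (real HRIRs run to hundreds of taps; the filters here only model an interaural delay and level difference):

```python
def convolve(signal, hrir):
    """Direct FIR convolution of a mono signal with a head-related impulse
    response; applying one such filter per ear yields the binaural signal."""
    out = [0.0] * (len(signal) + len(hrir) - 1)
    for n, s in enumerate(signal):
        for k, h in enumerate(hrir):
            out[n + k] += s * h
    return out

# Hypothetical toy HRIRs: the far ear receives a delayed, attenuated copy
mono = [1.0, 0.0, -0.5, 0.25]
hrir_near = [1.0]                # identity: sound arrives first, full level
hrir_far  = [0.0, 0.0, 0.6]     # 2-sample interaural delay, 0.6 gain

left  = convolve(mono, hrir_near)
right = convolve(mono, hrir_far)
print([round(x, 9) for x in right] == [0.0, 0.0, 0.6, 0.0, -0.3, 0.15])  # True
```

Non-individualized HRTFs, as in the study above, use someone else's measured filters for every listener, which preserves azimuth cues reasonably well but degrades elevation and front-back judgments.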

  10. Listener orientation and spatial judgments of elevated auditory percepts

    Science.gov (United States)

    Parks, Anthony J.

    How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of the dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second listening test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise, for continuous and brief stimuli under fixed and free-head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
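
The dichotomous detection model described above is a binary logistic regression. A minimal stand-in fitted by gradient ascent on hypothetical rotation/detection data (the single predictor, the data, and the learning settings are illustrative assumptions, not the study's actual factorial model):

```python
import math

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit P(detect) = 1 / (1 + exp(-(b0 + b1 * x))) by batch gradient
    ascent on the log-likelihood; a toy binary logistic regression."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)          # gradient w.r.t. intercept
            g1 += (y - p) * x      # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

# Hypothetical data: head-rotation extent (normalized) vs. detection (1) or not (0)
rotation = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
detected = [0,   0,   0,   1,   0,   1,   1,   1]
b0, b1 = fit_logistic(rotation, detected)
p_large = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.9)))
p_small = 1.0 / (1.0 + math.exp(-(b0 + b1 * 0.1)))
print(p_large > 0.5 > p_small)  # True: larger rotations predict detection
```

The study's factorial version would add further predictors (and their interactions) to the linear term inside the sigmoid; the multinomial model for above/below/behind generalizes the same idea to three response categories.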

  11. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All underwent the following tests: speech-in-noise test, Dichotic Digit test, and Pitch Pattern Sequencing test. RESULTS: Lateralization effects were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that in all tests children from the APD and SLI groups had significantly poorer performance than the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study has shown, in children with SLI, inefficient processing of essential sound components and a lateralization effect. These findings may indicate that the neural processes (required for auditory processing) are different between auditory processing and speech disorders.

  12. Which People with Specific Language Impairment have Auditory Processing Deficits?

    Science.gov (United States)

    McArthur, G M; Bishop, D V M

    2004-02-01

    An influential theory attributes developmental disorders of language and literacy to low-level auditory perceptual difficulties. However, evidence to date has been inconsistent and contradictory. We investigated whether this mixed picture could be explained in terms of heterogeneity in the language-impaired population. In Experiment 1, the behavioural responses of 16 people with specific language impairment (SLI) and 16 control listeners (aged 10 to 19 years) to auditory backward recognition masking (ABRM) stimuli and unmasked tones indicated that a subgroup of people with SLI are less able to discriminate between the frequencies of sounds regardless of their rate of presentation. Further, these people tended to be the younger participants, and were characterised by relatively poor nonword reading. In Experiment 2, the auditory event-related potentials (ERPs) of the same groups to unmasked tones were measured. Listeners with SLI tended to have age-inappropriate waveforms in the N1-P2-N2 region, regardless of their auditory discrimination scores in Experiment 1. Together, these results suggest that SLI may be characterised by immature development of auditory cortex, such that adult-level frequency discrimination performance is attained several years later than normal. PMID:21038192

  13. The auditory N1 suppression rebounds as prediction persists over time.

    Science.gov (United States)

    Hsu, Yi-Fang; Hämäläinen, Jarmo A; Waszak, Florian

    2016-04-01

    The predictive coding model of perception proposes that neuronal responses reflect prediction errors. Repeated as well as predicted stimuli trigger suppressed neuronal responses because they are associated with reduced prediction errors. However, many predictable events in our environment are not isolated but sequential, yet there is little empirical evidence documenting how suppressed neuronal responses reflecting reduced prediction errors change over the course of a predictable sequence of events. Here we conceived an auditory electroencephalography (EEG) experiment where prediction persists over a series of four tones to allow for the delineation of the dynamics of the suppressed neuronal responses. One possibility is that neuronal responses decrease for the initial predictable stimuli and stay at the same level across the rest of the sequence, suggesting that they reflect the predictability of the stimuli in terms of mere probability. Alternatively, neuronal responses might decrease for the initial predictable stimuli and gradually recover across the rest of the sequence, suggesting that factors other than mere probability must be considered to account for the way prediction is implemented in the brain. We found that initial presentation of the predictable stimuli was associated with suppression of the auditory N1. Further presentation of the predictable stimuli was associated with a rebound of the component's amplitude. Moreover, this pattern was independent of attention. The findings suggest that auditory N1 suppression reflecting reduced prediction errors is a transient phenomenon that can be modulated by multiple factors. PMID:26921479

  14. Processamento auditivo em idosos: estudo da interação por meio de testes com estímulos verbais e não-verbais Auditory processing in elderly people: interaction study by means of verbal and nonverbal stimuli

    Directory of Open Access Journals (Sweden)

    Maria Madalena Canina Pinheiro

    2004-04-01

    Full Text Available INTRODUCTION: Presbycusis, the hearing loss that arises as part of the aging process, is accompanied by a decline in auditory functioning beyond the loss of sensitivity itself. AIM: To characterize the interaction of verbal and nonverbal sounds in elderly individuals with and without hearing loss by means of the Sound Localization test in five directions, the Binaural Fusion test, and the Pediatric Sentence Identification test with monotic listening (PSI-MCI), taking into account each procedure and the degree of hearing loss. STUDY DESIGN: Clinical study with cross-sectional cohort. MATERIAL AND METHOD: 110 elderly individuals, aged 60 to 85 years, with normal hearing or symmetrical sensorineural hearing loss of up to moderately severe degree were included in this study. The auditory behavior common to all selected tests was termed interaction. The analysis was carried out by individual procedure and by degree of hearing loss. RESULTS: More individuals showed inability on the Binaural Fusion test. The procedures showing a statistically significant dependence on the degree of hearing loss were the Sound Localization test and PSI-MCI (-10). CONCLUSION: Elderly individuals have difficulty with binaural interaction when auditory information is incomplete. The degree of hearing loss interfered mainly with the auditory behavior of localization.

  15. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    nanoengineered matrix, fiber bundle model of UD composites with hierarchically clustered fibers and 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them, the...

  16. Hierarchical Models of Attitude.

    Science.gov (United States)

    Reddy, Srinivas K.; LaBarbera, Priscilla A.

    1985-01-01

    The application and use of hierarchical models is illustrated, using the example of the structure of attitudes toward a new product and a print advertisement. Subjects were college students who responded to seven-point bipolar scales. Hierarchical models were better than nonhierarchical models in conceptualizing attitude but not intention. (GDC)

  17. Hierarchical quantum communication

    International Nuclear Information System (INIS)

    A general approach to study hierarchical quantum information splitting (HQIS) is proposed and used to systematically investigate the possibility of realizing HQIS using different classes of 4-qubit entangled states that are not connected by stochastic local operations and classical communication (SLOCC). Explicit examples of HQIS using the 4-qubit cluster state and the 4-qubit |Ω〉 state are provided. Further, the proposed HQIS scheme is generalized to introduce two new aspects of hierarchical quantum communication. To be precise, schemes of probabilistic hierarchical quantum information splitting and hierarchical quantum secret sharing are obtained by modifying the proposed HQIS scheme. A number of practical situations where hierarchical quantum communication would be of use are also presented.

  18. A songbird forebrain area potentially involved in auditory discrimination and memory formation

    Indian Academy of Sciences (India)

    Raphael Pinaud; Thomas A Terleph

    2008-03-01

    Songbirds rely on auditory processing of natural communication signals for a number of social behaviors, including mate selection, individual recognition and the rare behavior of vocal learning – the ability to learn vocalizations through imitation of an adult model, rather than by instinct. Like mammals, songbirds possess a set of interconnected ascending and descending auditory brain pathways that process acoustic information and that are presumably involved in the perceptual processing of vocal communication signals. Most auditory areas studied to date are located in the caudomedial forebrain of the songbird and include the thalamo-recipient field L (subfields L1, L2 and L3), the caudomedial and caudolateral mesopallium (CMM and CLM, respectively) and the caudomedial nidopallium (NCM). This review focuses on NCM, an auditory area previously proposed to be analogous to parts of the primary auditory cortex in mammals. Stimulation of songbirds with auditory stimuli drives vigorous electrophysiological responses and the expression of several activity-regulated genes in NCM. Interestingly, NCM neurons are tuned to species-specific songs and undergo some forms of experience-dependent plasticity in vivo. These activity-dependent changes may underlie long-term modifications in the functional performance of NCM and constitute a potential neural substrate for auditory discrimination. We end this review by discussing evidence that suggests that NCM may be a site of auditory memory formation and/or storage.

  19. Grasping the sound: Auditory pitch influences size processing in motor planning.

    Science.gov (United States)

    Rinaldi, Luca; Lega, Carlotta; Cattaneo, Zaira; Girelli, Luisa; Bernardi, Nicolò Francesco

    2016-01-01

    Growing evidence shows that individuals consistently match auditory pitch with visual size. For instance, high-pitched sounds are perceptually associated with smaller visual stimuli, whereas low-pitched sounds are associated with larger ones. The present study explores whether this crossmodal correspondence, reported so far for perceptual processing, also modulates motor planning. To address this issue, we carried out a series of kinematic experiments to verify whether actions implying size processing are affected by auditory pitch. Experiment 1 showed that grasping movements toward small/large objects were initiated faster in response to high/low pitches, respectively, thus extending previous findings in the literature to more complex motor behavior. Importantly, auditory pitch influenced the relative scaling of the hand preshaping, with high pitches associated with smaller grip aperture compared with low pitches. Notably, no effect of auditory pitch was found for pointing movements (no grasp implied, Experiment 2), or when auditory pitch was irrelevant to the programming of the grip aperture, that is, when grasping an object of uniform size (Experiment 3). Finally, auditory pitch also influenced symbolic manual gestures expressing "small" and "large" concepts (Experiment 4). In sum, our results are novel in revealing the impact of auditory pitch on motor planning when size processing is required, and shed light on the role of auditory information in driving actions. (PsycINFO Database Record) PMID:26280267

  20. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)

  1. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)
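The ~8 s white matter delay reported above can be illustrated with a toy lag analysis: cross-correlate each voxel time series with the task boxcar at integer-TR lags and take the best-matching lag. All signals and parameter values below are synthetic assumptions for illustration, not the study's data.

```python
import numpy as np

def best_lag(signal, boxcar, max_lag):
    """Lag (in volumes) at which the time series best matches the task boxcar.

    Under the sparse-sampling design described above (TR = 8 s), a white
    matter response delayed by ~8 s shows up as a best lag of 1 volume.
    """
    corrs = [np.corrcoef(np.roll(boxcar, lag), signal)[0, 1]
             for lag in range(max_lag + 1)]
    return int(np.argmax(corrs))

# Block paradigm: four sound-on/off alternations, one volume per 8-s TR.
boxcar = np.tile([1, 1, 1, 1, 0, 0, 0, 0], 4).astype(float)
rng = np.random.default_rng(2)
gm = boxcar + rng.normal(0, 0.2, boxcar.size)               # gray matter: no delay
wm = np.roll(boxcar, 1) + rng.normal(0, 0.2, boxcar.size)   # white matter: +1 TR (8 s)
print(best_lag(gm, boxcar, 3), best_lag(wm, boxcar, 3))
```

With realistic fMRI data this would be done per voxel against an HRF-convolved regressor; the boxcar keeps the sketch minimal.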

  2. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  3. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703
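Entrainment of taps to an isochronous rhythm can be quantified with circular statistics: map each tap to a phase within the beat cycle and compute the mean resultant length. The tap data and tempo below are invented for illustration, not the study's measurements.

```python
import numpy as np

def entrainment_strength(tap_times, beat_period):
    """Mean resultant length R of tap phases relative to an isochronous beat.

    Each tap time is mapped to a phase in [0, 2*pi) within the beat cycle;
    R near 1 means taps cluster at a fixed beat phase (entrainment),
    R near 0 means taps are spread uniformly over the cycle.
    """
    phases = 2 * np.pi * (np.asarray(tap_times) % beat_period) / beat_period
    return np.abs(np.exp(1j * phases).mean())

rng = np.random.default_rng(1)
period = 0.6  # 600 ms inter-onset interval, near a typical spontaneous motor tempo
entrained = np.arange(30) * period + rng.normal(0.02, 0.02, 30)  # taps hug the beat
unrelated = np.sort(rng.uniform(0, 30 * period, 30))             # taps ignore the beat
print(entrainment_strength(entrained, period) > entrainment_strength(unrelated, period))
```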

  4. Core auditory processing deficits in primary progressive aphasia

    Science.gov (United States)

    Grube, Manon; Bruffaerts, Rose; Schaeverbeke, Jolien; Neyens, Veerle; De Weer, An-Sofie; Seghers, Alexandra; Bergmans, Bruno; Dries, Eva; Griffiths, Timothy D.

    2016-01-01

    The extent to which non-linguistic auditory processing deficits may contribute to the phenomenology of primary progressive aphasia is not established. Using non-linguistic stimuli devoid of meaning we assessed three key domains of auditory processing (pitch, timing and timbre) in a consecutive series of 18 patients with primary progressive aphasia (eight with semantic variant, six with non-fluent/agrammatic variant, and four with logopenic variant), as well as 28 age-matched healthy controls. We further examined whether performance on the psychoacoustic tasks in the three domains related to the patients’ speech and language and neuropsychological profile. At the group level, patients were significantly impaired in the three domains. Patients had the most marked deficits within the rhythm domain for the processing of short sequences of up to seven tones. Patients with the non-fluent variant showed the most pronounced deficits at the group and the individual level. A subset of patients with the semantic variant were also impaired, though less severely. The patients with the logopenic variant did not show any significant impairments. Significant deficits in the non-fluent and the semantic variant remained after partialling out effects of executive dysfunction. Performance on a subset of the psychoacoustic tests correlated with conventional verbal repetition tests. In sum, a core central auditory impairment exists in primary progressive aphasia for non-linguistic stimuli. While the non-fluent variant is clinically characterized by a motor speech deficit (output problem), perceptual processing of tone sequences is clearly deficient. This may indicate the co-occurrence in the non-fluent variant of a deficit in working memory for auditory objects. Parsimoniously we propose that auditory timing pathways are altered, which are used in common for processing acoustic sequence structure in both speech output and acoustic input. PMID:27060523

  5. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Full Text Available Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  6. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of
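The reported pattern of over-reproducing the short (1000 ms) and under-reproducing the long (4000 ms) interval is the classic central-tendency effect, which can be sketched as regression of reproductions toward the session mean. The prior mean and weight below are illustrative assumptions, not values fitted to the study's data.

```python
def reproduce(actual_ms, prior_mean=2500.0, weight=0.6):
    """Central-tendency sketch: reproduced durations regress toward the
    mean of the tested intervals, so short intervals come out too long
    and long intervals too short (weight < 1 controls the pull)."""
    return prior_mean + weight * (actual_ms - prior_mean)

print(reproduce(1000.0))  # → 1600.0: short interval over-reproduced
print(reproduce(4000.0))  # → 3400.0: long interval under-reproduced
```

A modality- or odor-specific bias could be modeled by letting the weight or prior vary per condition.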

  7. Core auditory processing deficits in primary progressive aphasia.

    Science.gov (United States)

    Grube, Manon; Bruffaerts, Rose; Schaeverbeke, Jolien; Neyens, Veerle; De Weer, An-Sofie; Seghers, Alexandra; Bergmans, Bruno; Dries, Eva; Griffiths, Timothy D; Vandenberghe, Rik

    2016-06-01

    The extent to which non-linguistic auditory processing deficits may contribute to the phenomenology of primary progressive aphasia is not established. Using non-linguistic stimuli devoid of meaning we assessed three key domains of auditory processing (pitch, timing and timbre) in a consecutive series of 18 patients with primary progressive aphasia (eight with semantic variant, six with non-fluent/agrammatic variant, and four with logopenic variant), as well as 28 age-matched healthy controls. We further examined whether performance on the psychoacoustic tasks in the three domains related to the patients' speech and language and neuropsychological profile. At the group level, patients were significantly impaired in the three domains. Patients had the most marked deficits within the rhythm domain for the processing of short sequences of up to seven tones. Patients with the non-fluent variant showed the most pronounced deficits at the group and the individual level. A subset of patients with the semantic variant were also impaired, though less severely. The patients with the logopenic variant did not show any significant impairments. Significant deficits in the non-fluent and the semantic variant remained after partialling out effects of executive dysfunction. Performance on a subset of the psychoacoustic tests correlated with conventional verbal repetition tests. In sum, a core central auditory impairment exists in primary progressive aphasia for non-linguistic stimuli. While the non-fluent variant is clinically characterized by a motor speech deficit (output problem), perceptual processing of tone sequences is clearly deficient. This may indicate the co-occurrence in the non-fluent variant of a deficit in working memory for auditory objects. Parsimoniously we propose that auditory timing pathways are altered, which are used in common for processing acoustic sequence structure in both speech output and acoustic input. PMID:27060523

  9. Processing of acoustic motion in the auditory cortex of the rufous horseshoe bat, Rhinolophus rouxi

    OpenAIRE

    Firzlaff, Uwe

    2001-01-01

    This study investigated the representation of acoustic motion in different fields of auditory cortex of the rufous horseshoe bat, Rhinolophus rouxi. Motion in horizontal direction (azimuth) was simulated using successive stimuli with dynamically changing interaural intensity differences presented via earphones. The mechanisms underlying a specific sensitivity of neurons to the direction of motion were investigated using microiontophoretic application of γ-aminobutyric acid (GAB...

  10. A Persian version of the sustained auditory attention capacity test and its results in normal children

    Directory of Open Access Journals (Sweden)

    Sanaz Soltanparast

    2013-03-01

    Full Text Available Background and Aim: Sustained attention refers to the ability to maintain attention to target stimuli over a sustained period of time. This study was conducted to develop a Persian version of the sustained auditory attention capacity test and to study its results in normal children. Methods: To develop the Persian version of the sustained auditory attention capacity test, like the original version, speech stimuli were used. The speech stimuli consisted of one hundred monosyllabic words, comprising a 20-times random repetition of the words of a 21-word list of monosyllabic words, grouped together in random order. The test was carried out at a comfortable hearing level using binaural and diotic presentation modes on 46 normal children of 7 to 11 years of age of both genders. Results: There was a significant difference between age and the average impulsiveness error score (p=0.004) and the total score of the sustained auditory attention capacity test (p=0.005). No significant difference was revealed between age and the average inattention error score or the attention reduction span index. Gender did not have a significant impact on the various indices of the test. Conclusion: The results of this test in a group of normal-hearing children confirmed its ability to measure sustained auditory attention capacity through speech stimuli.
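The inattention and impulsiveness error indices used by such sustained-attention tests can be sketched as a simple scorer over stimulus/response pairs: misses on target words count as inattention errors, responses to non-target words as impulsiveness errors. The word list, target word, and index names below are illustrative assumptions, not the published test's materials.

```python
def score_sustained_attention(stimuli, responses, target="dog"):
    """Score one run of a go/no-go word-listening task.

    inattention errors:   target words the child failed to respond to
    impulsiveness errors: responses made to non-target words
    """
    inattention = sum(1 for s, r in zip(stimuli, responses)
                      if s == target and not r)
    impulsiveness = sum(1 for s, r in zip(stimuli, responses)
                        if s != target and r)
    return {"inattention": inattention, "impulsiveness": impulsiveness}

stimuli = ["cat", "dog", "sun", "dog", "hat"]
responses = [False, True, True, False, False]  # pressed on "sun", missed 2nd "dog"
print(score_sustained_attention(stimuli, responses))
# → {'inattention': 1, 'impulsiveness': 1}
```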

  11. Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome

    Science.gov (United States)

    Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.

    2010-01-01

    Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…

  12. Electrophysiological Auditory Responses and Language Development in Infants with Periventricular Leukomalacia

    Science.gov (United States)

    Avecilla-Ramirez, G. N.; Ruiz-Correa, S.; Marroquin, J. L.; Harmony, T.; Alba, A.; Mendoza-Montoya, O.

    2011-01-01

    This study presents evidence suggesting that electrophysiological responses to language-related auditory stimuli recorded at 46 weeks postconceptional age (PCA) are associated with language development, particularly in infants with periventricular leukomalacia (PVL). In order to investigate this hypothesis, electrophysiological responses to a set…

  13. The Nature of Auditory Discrimination Problems in Children with Specific Language Impairment: An MMN Study

    Science.gov (United States)

    Davids, Nina; Segers, Eliane; van den Brink, Danielle; Mitterer, Holger; van Balkom, Hans; Hagoort, Peter; Verhoeven, Ludo

    2011-01-01

    Many children with specific language impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied using a linguistic and nonlinguistic contrast that were matched for acoustic complexity in…

  14. Spontaneous high-gamma band activity reflects functional organization of auditory cortex in the awake macaque.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2012-06-01

    In the absence of sensory stimuli, spontaneous activity in the brain has been shown to exhibit organization at multiple spatiotemporal scales. In the macaque auditory cortex, responses to acoustic stimuli are tonotopically organized within multiple, adjacent frequency maps aligned in a caudorostral direction on the supratemporal plane (STP) of the lateral sulcus. Here, we used chronic microelectrocorticography to investigate the correspondence between sensory maps and spontaneous neural fluctuations in the auditory cortex. We first mapped tonotopic organization across 96 electrodes spanning approximately two centimeters along the primary and higher auditory cortex. In separate sessions, we then observed that spontaneous activity at the same sites exhibited spatial covariation that reflected the tonotopic map of the STP. This observation demonstrates a close relationship between functional organization and spontaneous neural activity in the sensory cortex of the awake monkey. PMID:22681693
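The correspondence between spontaneous covariation and the tonotopic map can be illustrated by correlating pairwise activity correlations with tonotopic proximity. Everything below (site count, best-frequency axis, kernel width, time points) is a synthetic assumption standing in for the recorded data.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sites, n_time = 24, 2000
best_freq = np.linspace(0, 1, n_sites)  # tonotopic axis, arbitrary units

# Synthetic spontaneous activity: sites with similar best frequencies
# share fluctuations (Gaussian covariance over tonotopic distance).
kernel = np.exp(-np.subtract.outer(best_freq, best_freq) ** 2 / (2 * 0.1 ** 2))
spontaneous = np.linalg.cholesky(kernel + 1e-6 * np.eye(n_sites)) @ \
    rng.normal(size=(n_sites, n_time))

# Does spontaneous covariation mirror the map? Correlate off-diagonal
# activity correlations with tonotopic proximity (negative distance).
corr = np.corrcoef(spontaneous)
iu = np.triu_indices(n_sites, k=1)
proximity = -np.abs(np.subtract.outer(best_freq, best_freq))
match = np.corrcoef(corr[iu], proximity[iu])[0, 1]
print(round(match, 2))
```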

  15. Can place-specific cochlear dispersion be represented by auditory steady-state responses?

    DEFF Research Database (Denmark)

    Paredes Gallardo, Andreu; Epp, Bastian; Dau, Torsten

    2016-01-01

    The present study investigated to what extent properties of local cochlear dispersion can be objectively assessed through auditory steady-state responses (ASSR). The hypothesis was that stimuli compensating for the phase response at a particular cochlear location generate a maximally modulated basilar membrane (BM) response at that BM position, due to the large "within-channel" synchrony of activity. This would lead, in turn, to a larger ASSR amplitude than other stimuli of corresponding intensity and bandwidth. Two stimulus types were chosen: 1] Harmonic tone complexes consisting of equal-amplitude tones with a starting phase following an algorithm developed by Schroeder [IEEE Trans. Inf. Theory 16, 85-89 (1970)] that have earlier been considered in behavioral studies to estimate human auditory filter phase responses; and 2] simulations of auditory-filter impulse responses (IR). In both cases...
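Stimulus type 1] can be sketched directly from the cited Schroeder (1970) phase rule, which spreads component phases quadratically so the complex has a flat temporal envelope (low crest factor) compared with a zero-phase complex. The fundamental, harmonic count, and duration below are illustrative, not the study's parameters.

```python
import numpy as np

def schroeder_complex(f0=100.0, n_harmonics=20, fs=16000, dur=0.5, sign=-1):
    """Equal-amplitude harmonic complex with Schroeder (1970) phases
    phi_n = sign * pi * n * (n - 1) / N; the quadratic phase curvature
    flattens the envelope, minimizing the crest factor."""
    t = np.arange(int(fs * dur)) / fs
    N = n_harmonics
    x = np.zeros_like(t)
    for n in range(1, N + 1):
        phase = sign * np.pi * n * (n - 1) / N
        x += np.cos(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))

def crest_factor(x):
    return np.max(np.abs(x)) / np.sqrt(np.mean(x ** 2))

schroeder = schroeder_complex()
# Same harmonics with zero phase: all components peak together at t = 0.
zero_phase = sum(np.cos(2 * np.pi * n * 100.0 * np.arange(8000) / 16000.0)
                 for n in range(1, 21))
print(crest_factor(schroeder), crest_factor(zero_phase))
```

Negative vs. positive `sign` gives the Schroeder-negative vs. Schroeder-positive variants used in the behavioral masking literature.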

  16. Auditory Learning. Dimensions in Early Learning Series.

    Science.gov (United States)

    Zigmond, Naomi K.; Cicci, Regina

    The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…

  17. The simultaneous perception of auditory–tactile stimuli in voluntary movement

    OpenAIRE

    Hao, Qiao; Ogata, Taiki; Ogawa, Ken-ichiro; Kwon, Jinhwan; Miyake, Yoshihiro

    2015-01-01

    The simultaneous perception of multimodal information in the environment during voluntary movement is very important for effective reactions to the environment. Previous studies have found that voluntary movement affects the simultaneous perception of auditory and tactile stimuli. However, the results of these experiments are not completely consistent, and the differences may be attributable to methodological differences in the previous studies. In this study, we investigated the effect of vo...

  18. Thoughts of Death Modulate Psychophysical and Cortical Responses to Threatening Stimuli

    OpenAIRE

    Valentini, Elia; Koch, Katharina; Aglioti, Salvatore Maria

    2014-01-01

    Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we e...

  19. Auditory and speech processing and reading development in Chinese school children: behavioural and ERP evidence.

    Science.gov (United States)

    Meng, Xiangzhi; Sai, Xiaoguang; Wang, Cixin; Wang, Jue; Sha, Shuying; Zhou, Xiaolin

    2005-11-01

    By measuring behavioural performance and event-related potentials (ERPs) this study investigated the extent to which Chinese school children's reading development is influenced by their skills in auditory, speech, and temporal processing. In Experiment 1, 102 normal school children's performance in pure tone temporal order judgment, tone frequency discrimination, temporal interval discrimination and composite tone pattern discrimination was measured. Results showed that children's auditory processing skills correlated significantly with their reading fluency, phonological awareness, word naming latency, and the number of Chinese characters learned. Regression analyses found that tone temporal order judgment, temporal interval discrimination and composite tone pattern discrimination could account for 32% of variance in phonological awareness. Controlling for the effect of phonological awareness, auditory processing measures still contributed significantly to variance in reading fluency and character naming. In Experiment 2, mismatch negativities (MMN) in event-related brain potentials were recorded from dyslexic children and the matched normal children, while these children listened passively to Chinese syllables and auditory stimuli composed of pure tones. The two groups of children did not differ in MMN to stimuli deviated in pure tone frequency and Chinese lexical tones. But dyslexic children showed smaller MMN to stimuli deviated in initial consonants or vowels of Chinese syllables and to stimuli deviated in temporal information of composite tone patterns. These results suggested that Chinese dyslexic children have deficits in auditory temporal processing as well as in linguistic processing and that auditory and temporal processing is possibly as important to reading development of children in a logographic writing system as in an alphabetic system. PMID:16355749
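The MMN reported above is conventionally computed as a deviant-minus-standard difference wave, quantified as the mean amplitude in a post-stimulus window. A minimal sketch on synthetic epochs (all waveform shapes, counts, and window bounds are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n_samp = 500, 300                 # 500 Hz sampling, 600 ms epochs
t = np.arange(n_samp) / fs - 0.1      # 100 ms pre-stimulus baseline

def erp(mmn_size):
    """Synthetic auditory ERP: an N1 near 100 ms plus an MMN-like
    negativity near 200 ms whose size differs for standards vs. deviants."""
    n1 = -2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
    mmn = -mmn_size * np.exp(-((t - 0.20) ** 2) / (2 * 0.04 ** 2))
    return n1 + mmn

standards = np.array([erp(0.5) + rng.normal(0, 1.0, n_samp) for _ in range(400)])
deviants = np.array([erp(3.0) + rng.normal(0, 1.0, n_samp) for _ in range(80)])

# MMN = deviant average minus standard average, mean amplitude 150-250 ms.
difference = deviants.mean(axis=0) - standards.mean(axis=0)
window = (t >= 0.15) & (t <= 0.25)
mmn_amplitude = difference[window].mean()
print(round(mmn_amplitude, 2))
```

A "smaller MMN" in dyslexic children, as in Experiment 2, corresponds to a less negative `mmn_amplitude` in this scheme.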

  20. Sub-second temporal processing : effects of modality and spatial change on brief visual and auditory time judgments

    OpenAIRE

    Retsa, Chryssoula

    2013-01-01

    The present thesis set out to investigate how sensory modality and spatial presentation influence visual and auditory duration judgments in the millisecond range. The effects of modality and spatial location were explored by considering right and left side presentations of mixed or blocked visual and auditory stimuli. Several studies have shown that perceived duration of a stimulus can be affected by various extra-temporal factors such as modality and spatial position. Audit...

  1. The auditory characteristics of children with inner auditory canal stenosis.

    Science.gov (United States)

    Ai, Yu; Xu, Lei; Li, Li; Li, Jianfeng; Luo, Jianfen; Wang, Mingming; Fan, Zhaomin; Wang, Haibo

    2016-07-01

    Conclusions: This study shows that the prevalence of auditory neuropathy spectrum disorder (ANSD) in children with inner auditory canal (IAC) stenosis is much higher than in those without IAC stenosis, regardless of whether they have other inner ear anomalies. In addition, the auditory characteristics of ANSD with IAC stenosis are significantly different from those of ANSD without any middle and inner ear malformations. Objectives: To describe the auditory characteristics of children with IAC stenosis and to examine whether a narrow inner auditory canal is associated with ANSD. Method: A total of 21 children with inner auditory canal stenosis participated in this study. A series of auditory tests was administered. In addition, a comparative study was conducted on the auditory characteristics of ANSD, based on whether the children presented with isolated IAC stenosis. Results: Wave V in the auditory brainstem response (ABR) was not observed in any of the patients, while a cochlear microphonic (CM) response was detected in 81.1% of ears with stenotic IAC. Sixteen of 19 (84.2%) ears with isolated IAC stenosis had a CM response present on ABR waveforms. There was no significant difference in ANSD characteristics between the children with and without isolated IAC stenosis. PMID:26981851

  2. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    Science.gov (United States)

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  3. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them, 3D FE model of hybrid composites with...... nanoengineered matrix, fiber bundle model of UD composites with hierarchically clustered fibers and 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them, the...... investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels and using the nanoreinforcement effects. The main future directions...

  4. Programming with Hierarchical Maps

    DEFF Research Database (Denmark)

    Ørbæk, Peter

    This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe their most prominent features, argue for their usefulness, and briefly describe some of the software prototypes implemented using the technology....

  5. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping...... with changing and increasing demands. Two-layer networks consist of one backbone network, which interconnects cluster networks. The clusters consist of nodes and links, which connect the nodes. One node in each cluster is a hub node, and the backbone interconnects the hub nodes of each cluster and thus...... the clusters. The design of hierarchical networks involves clustering of nodes, hub selection, and network design, i.e. selection of links and routing of flows. Hierarchical networks have been in use for decades, but integrated design of these networks has only been considered for very special types of networks...

  6. Emergence of tuning to natural stimulus statistics along the central auditory pathway.

    Directory of Open Access Journals (Sweden)

    Jose A Garcia-Lazaro

    Full Text Available We have previously shown that neurons in primary auditory cortex (A1) of anaesthetized (ketamine/medetomidine) ferrets respond more strongly and reliably to dynamic stimuli whose statistics follow "natural" 1/f dynamics than to stimuli exhibiting pitch and amplitude modulations that are faster (1/f^0.5) or slower (1/f^2) than 1/f. To investigate where along the central auditory pathway this 1/f-modulation tuning arises, we have now characterized responses of neurons in the central nucleus of the inferior colliculus (ICC) and the ventral division of the medial geniculate nucleus of the thalamus (MGV) to 1/f^γ distributed stimuli with γ varying between 0.5 and 2.8. We found that, while the great majority of neurons recorded from the ICC showed a strong preference for the most rapidly varying (1/f^0.5) stimuli, responses from MGV neurons did not exhibit marked or systematic preferences for any particular γ exponent. Only in A1 did a majority of neurons respond with higher firing rates to stimuli in which γ takes values near 1. These results indicate that 1/f tuning emerges at forebrain levels of the ascending auditory pathway.
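
    The 1/f^γ stimulus family described in this abstract can be sketched by spectrally shaping white noise so that power falls off as f^(-γ). This is an illustrative reconstruction, not the authors' stimulus-generation code; the sampling rate, duration, and normalization are arbitrary assumptions.

```python
import numpy as np

def one_over_f_noise(n, gamma, fs=1000.0, seed=0):
    """Generate a signal whose power spectrum falls off as 1/f^gamma.

    White Gaussian noise is shaped in the frequency domain: amplitude
    scales as f^(-gamma/2), so power scales as f^(-gamma).
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    freqs[0] = freqs[1]               # avoid division by zero at DC
    spectrum *= freqs ** (-gamma / 2.0)
    signal = np.fft.irfft(spectrum, n=n)
    return signal / np.std(signal)    # normalize to unit variance

# gamma = 0.5 varies rapidly, gamma = 1 is the "natural" case,
# gamma = 2 varies slowly -- the three regimes compared in the study.
fast, natural, slow = (one_over_f_noise(4096, g) for g in (0.5, 1.0, 2.0))
```

    Higher γ concentrates power at low modulation frequencies, which is what makes the γ = 2 stimuli slowly varying.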

  7. Inducing attention not to blink: auditory entrainment improves conscious visual processing.

    Science.gov (United States)

    Ronconi, Luca; Pincham, Hannah L; Szűcs, Dénes; Facoetti, Andrea

    2016-09-01

    Our ability to allocate attention at different moments in time can sometimes fail to select stimuli occurring in close succession, preventing visual information from reaching awareness. This so-called attentional blink (AB) occurs when the second of two targets (T2) is presented closely after the first (T1) in a rapid serial visual presentation (RSVP). We hypothesized that entrainment to a rhythmic stream of stimuli, presented before the visual targets appear, would reduce the AB. Experiment 1 tested the effect of auditory entrainment by presenting sounds with a regular or irregular interstimulus interval prior to an RSVP in which T1 and T2 were separated by three possible lags (1, 3 and 8). Experiment 2 examined visual entrainment by presenting visual stimuli in place of auditory stimuli. Results revealed that irrespective of sensory modality, arrhythmic stimuli preceding the RSVP triggered an alerting effect that improved T2 identification at lag 1, but impaired recovery from the AB at lag 8. Importantly, only auditory rhythmic entrainment was effective in reducing the AB at lag 3. Our findings demonstrate that manipulating the pre-stimulus condition can reduce the deficits in temporal attention that characterize the human cognitive architecture, suggesting innovative training approaches for acquired and neurodevelopmental disorders. PMID:26215434

  8. Thoughts of death modulate psychophysical and cortical responses to threatening stimuli.

    Science.gov (United States)

    Valentini, Elia; Koch, Katharina; Aglioti, Salvatore Maria

    2014-01-01

    Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we explored whether personality and demographics modulate psychophysical and neural changes related to mortality salience (MS). Following MS induction, a specific increase in ratings of intensity and threat was found for both nociceptive and auditory stimuli. While MS did not have any specific effect on nociceptive and auditory evoked potentials, larger amplitude of theta oscillatory activity related to thermal nociceptive activity was found after thoughts of death were induced. MS thus exerted a top-down modulation on theta electroencephalographic oscillatory amplitude, specifically for brain activity triggered by painful thermal stimuli. This effect was higher in participants reporting higher threat perception, suggesting that inducing a death-related mind-set may have an influence on body-defence related somatosensory representations. PMID:25386905

  9. Event-related desynchronization of frontal-midline theta rhythm during preconscious auditory oddball processing.

    Science.gov (United States)

    Kawamata, Masaru; Kirino, Eiji; Inoue, Reiichi; Arai, Heii

    2007-10-01

    The goal of this study was to explore the generation mechanism of the frontal-midline theta rhythm (Fm theta), employing event-related desynchronization/synchronization (ERD/ERS) analysis in relation to task-irrelevant external stimuli. A dual paradigm was employed: a videogame and the simultaneous presentation of passive auditory oddball stimuli. We analyzed the ERD/ERS data using both the fast Fourier transform (FFT) and the wavelet transform (WT). In the FFT data, during periods when Fm theta appeared, apparent ERD of the theta band was observed at Fz and Cz. ERD when Fm theta was present was much more prominent than when Fm theta was absent. In the WT data, as in the FFT data, ERD was seen again, but in this case the ERD was preceded by ERS during both the periods with and without Fm theta. Furthermore, the WT analysis indicated that ERD was followed by ERS during the periods without Fm theta. However, during Fm theta, no apparent ERS following ERD was seen. In our study, Fm theta was desynchronized by the auditory stimuli, which were independent of the videogame task used to evoke the Fm theta. The ERD of Fm theta may reflect a mechanism of "positive suppression" that processes external auditory stimuli automatically and prevents attentional resources from being unnecessarily allocated to those stimuli. Another possibility is that Fm theta induced by our dual paradigm may reflect information processing modeled by multi-item working memory requirements for playing the videogame and the simultaneous auditory processing using a memory trace. ERS in the WT data without Fm theta might indicate further processing of the auditory information, free from the "positive suppression" control reflected by Fm theta. PMID:17993201
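
    The ERD/ERS measure used in studies like this one is conventionally expressed as the percentage change of band power in a test window relative to a pre-stimulus baseline (negative values indicate ERD, positive values ERS). The sketch below is a generic version of that computation, not this study's exact pipeline; the power trace and window boundaries are made-up toy values.

```python
import numpy as np

def erd_ers_percent(power, baseline_idx, test_idx):
    """Percentage change of band power relative to a baseline window.

    Negative result = ERD (desynchronization),
    positive result = ERS (synchronization).
    """
    p_base = power[baseline_idx].mean()
    p_test = power[test_idx].mean()
    return 100.0 * (p_test - p_base) / p_base

# Toy theta-band power trace: baseline level 1.0, post-stimulus dip to 0.6,
# i.e. a 40% power decrease (ERD) after the oddball sound.
power = np.concatenate([np.full(100, 1.0), np.full(100, 0.6)])
erd = erd_ers_percent(power, slice(0, 100), slice(100, 200))
```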

  10. Hierarchical Communication Diagrams

    OpenAIRE

    Marcin Szpyrka; Piotr Matyasik; Jerzy Biernacki; Agnieszka Biernacka; Michał Wypych; Leszek Kotulski

    2016-01-01

    Formal modelling languages range from strictly textual ones like process algebra scripts to visual modelling languages based on hierarchical graphs like coloured Petri nets. Approaches equipped with visual modelling capabilities make the development process easier and help users to cope with more complex systems. Alvis is a modelling language that combines the possibilities of formal model verification with the flexibility and simplicity of practical programming languages. The paper deals with hierarchic...

  11. Hierarchical Dirichlet Scaling Process

    OpenAIRE

    Kim, Dongwoo; Oh, Alice

    2014-01-01

    We present the hierarchical Dirichlet scaling process (HDSP), a Bayesian nonparametric mixed-membership model. The HDSP generalizes the hierarchical Dirichlet process (HDP) to model the correlation structure between metadata in the corpus and mixture components. We construct the HDSP based on the normalized gamma representation of the Dirichlet process, and this construction allows incorporating a scaling function that controls the membership probabilities of the mixture components. ...
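
    The normalized-gamma idea mentioned in the abstract can be caricatured in a finite-dimensional sketch: draw independent gamma weights per atom, multiply each by a metadata-derived scaling value, and normalize. The function name and all numbers below are illustrative assumptions; the actual HDSP embeds this scaling inside a full nonparametric hierarchy, which this toy does not attempt.

```python
import numpy as np

def scaled_dp_weights(alpha, scales, rng):
    """Finite-K sketch of a scaled normalized-gamma construction:
    atom weights are Gamma(alpha/K, 1) draws, each multiplied by a
    scaling value, then normalized into a probability vector.
    """
    K = len(scales)
    g = rng.gamma(alpha / K, 1.0, size=K)
    w = g * scales
    return w / w.sum()

K = 50
flat = np.ones(K)
boosted = np.ones(K)
boosted[:5] = 10.0   # pretend metadata up-weights the first 5 components

# Same seed -> same gamma draws, so only the scaling differs.
w_plain = scaled_dp_weights(5.0, flat, np.random.default_rng(1))
w_boost = scaled_dp_weights(5.0, boosted, np.random.default_rng(1))
```

    With identical gamma draws, scaling strictly increases the normalized share of the boosted components, which is the mechanism the scaling function exploits.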

  12. Emotional stimuli and motor conversion disorder

    OpenAIRE

    Voon, V; Brezing, C.; Gallea, C; Ameli, R.; Roelofs, K.; LaFrance Jr, W.C.; Hallett, M

    2010-01-01

    Conversion disorder is characterized by neurological signs and symptoms related to an underlying psychological issue. Amygdala activity to affective stimuli is well characterized in healthy volunteers with greater amygdala activity to both negative and positive stimuli relative to neutral stimuli, and greater activity to negative relative to positive stimuli. We investigated the relationship between conversion disorder and affect by assessing amygdala activity to affective stimuli. We conduct...

  13. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica Gori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, perhaps reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflicting or non-conflicting) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group did bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
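
    The "Bayesian prediction" against which bimodal thresholds are compared in this literature is the standard reliability-weighted (maximum-likelihood) cue combination: each cue is weighted by its inverse variance, and the combined variance is never larger than the better unimodal variance. A minimal sketch with made-up numbers:

```python
import numpy as np

def mle_integration(mu_a, sigma_a, mu_v, sigma_v):
    """Maximum-likelihood combination of an auditory and a visual
    estimate of the same quantity, weighted by inverse variance.
    The combined sigma is the 'optimal' bimodal threshold prediction.
    """
    w_a = 1.0 / sigma_a ** 2
    w_v = 1.0 / sigma_v ** 2
    mu_av = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    sigma_av = np.sqrt(1.0 / (w_a + w_v))
    return mu_av, sigma_av

# Example: audition is the more reliable cue (small sigma_a), as in the
# temporal task, so the combined estimate is pulled toward the auditory cue.
mu, sigma = mle_integration(mu_a=0.0, sigma_a=1.0, mu_v=10.0, sigma_v=3.0)
```

    Observers whose bimodal thresholds sit above `sigma_av` (as the children here do in the spatial task) are, by this definition, sub-optimal integrators.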

  14. Reproducibility and discriminability of brain patterns of semantic categories enhanced by congruent audiovisual stimuli.

    Directory of Open Access Journals (Sweden)

    Yuanqing Li

    Full Text Available One of the central questions in cognitive neuroscience is the precise neural representation, or brain pattern, associated with a semantic category. In this study, we explored the influence of audiovisual stimuli on the brain patterns of concepts or semantic categories through a functional magnetic resonance imaging (fMRI) experiment. We used a pattern search method to extract brain patterns corresponding to two semantic categories: "old people" and "young people." These brain patterns were elicited by semantically congruent audiovisual, semantically incongruent audiovisual, unimodal visual, and unimodal auditory stimuli belonging to the two semantic categories. We calculated the reproducibility index, which measures the similarity of the patterns within the same category. We also decoded the semantic categories from these brain patterns. The decoding accuracy reflects the discriminability of the brain patterns between two categories. The results showed that both the reproducibility index of brain patterns and the decoding accuracy were significantly higher for semantically congruent audiovisual stimuli than for unimodal visual and unimodal auditory stimuli, while the semantically incongruent stimuli did not elicit brain patterns with a significantly higher reproducibility index or decoding accuracy. Thus, the semantically congruent audiovisual stimuli enhanced the within-class reproducibility and between-class discriminability of brain patterns, facilitating neural representations of semantic categories or concepts. Furthermore, we analyzed the brain activity in the superior temporal sulcus and middle temporal gyrus (STS/MTG). The strength of the fMRI signal and the reproducibility index were enhanced by the semantically congruent audiovisual stimuli. Our results support the use of the reproducibility index as a potential tool to supplement the fMRI signal amplitude for evaluating multimodal integration.
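
    One plausible reading of a within-class "reproducibility index" is the mean pairwise correlation between repeated activation patterns of the same category; this is an illustrative sketch of that idea, not necessarily the authors' exact formula, and the simulated voxel patterns are invented.

```python
import numpy as np

def reproducibility_index(patterns):
    """Mean pairwise Pearson correlation across repetitions of one
    category's activation pattern (rows = repetitions, cols = voxels).
    Higher values mean the pattern is more reproducible across trials.
    """
    r = np.corrcoef(patterns)
    iu = np.triu_indices_from(r, k=1)   # unique off-diagonal pairs
    return r[iu].mean()

rng = np.random.default_rng(0)
template = rng.standard_normal(200)               # a category's "true" pattern
congruent = template + 0.3 * rng.standard_normal((10, 200))  # low trial noise
unimodal  = template + 2.0 * rng.standard_normal((10, 200))  # high trial noise
```

    In this toy, the lower-noise "congruent" condition yields a higher index, mirroring the direction of the reported effect.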

  15. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    Science.gov (United States)

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning. PMID:26448497

  16. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.;

    2013-01-01

    human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...... vocalization, compared with during passive listening. One network of regions appears to encode an “error signal” regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across...... presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple...

  17. Using auditory-visual speech to probe the basis of noise-impaired consonant-vowel perception in dyslexia and auditory neuropathy

    Science.gov (United States)

    Ramirez, Joshua; Mann, Virginia

    2005-08-01

    Both dyslexics and auditory neuropathy (AN) subjects show inferior consonant-vowel (CV) perception in noise, relative to controls. To better understand these impairments, natural acoustic speech stimuli that were masked in speech-shaped noise at various intensities were presented to dyslexic, AN, and control subjects either in isolation or accompanied by visual articulatory cues. AN subjects were expected to benefit from the pairing of visual articulatory cues and auditory CV stimuli, provided that their speech perception impairment reflects a relatively peripheral auditory disorder. Assuming that dyslexia reflects a general impairment of speech processing rather than a disorder of audition, dyslexics were not expected to similarly benefit from an introduction of visual articulatory cues. The results revealed an increased effect of noise masking on the perception of isolated acoustic stimuli by both dyslexic and AN subjects. More importantly, dyslexics showed less effective use of visual articulatory cues in identifying masked speech stimuli and lower visual baseline performance relative to AN subjects and controls. Last, a significant positive correlation was found between reading ability and the ameliorating effect of visual articulatory cues on speech perception in noise. These results suggest that some reading impairments may stem from a central deficit of speech processing.

  18. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    of Brodmann, more intense in the contralateral (right) side. There is activation of both frontal executive areas without lateralization. Simultaneously, while area 39 of Brodmann was being activated, the temporal lobe was being inhibited. This apparently not previously reported functional observation suggests that inhibitory, and not only excitatory, relays play a role in the auditory pathways. The central activity in our patient (without external auditory stimuli), who was tested while having musical hallucinations, was a mirror image of that of our normal stimulated volunteers. It is suggested that the trigger role of the inner ear, if any, could conceivably be inhibitory or disinhibitory and not necessarily purely excitatory. Based on our observations, the trigger effect in our patient could occur via the left ear. Finally, our functional studies suggest that auditory memory for musical perceptions could be located in the right area 39 of Brodm

  19. An investigation of auditory contagious yawning.

    Science.gov (United States)

    Arnott, Stephen R; Singhal, Anthony; Goodale, Melvyn A

    2009-09-01

    Despite a widespread familiarity with the often compelling urge to yawn after perceiving someone else yawn, an understanding of the neural mechanism underlying contagious yawning remains incomplete. In the present auditory fMRI study, listeners used a 4-point scale to indicate how much they felt like yawning following the presentation of a yawn, breath, or scrambled yawn sound. Not only were yawn sounds given significantly higher ratings, a trait positively correlated with each individual's empathy measure, but relative to control stimuli, random effects analyses revealed enhanced hemodynamic activity in the right posterior inferior frontal gyrus (pIFG) in response to hearing yawns. Moreover, pIFG activity was greatest for yawn stimuli associated with high as opposed to low yawn ratings and for control sounds associated with equally high yawn ratings. These results support a relationship between contagious yawning and empathy and provide evidence for pIFG involvement in contagious yawning. A supplemental figure for this study may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental. PMID:19679768

  20. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception.

    Science.gov (United States)

    Su, Yi-Huang; Salazar-López, Elvira

    2016-01-01

    Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance. PMID:27313900

  2. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders;

    2016-01-01

    widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain’s motor network with no difference between groups. No...

  3. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  4. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  5. Auditory Cortical Plasticity Drives Training-Induced Cognitive Changes in Schizophrenia.

    Science.gov (United States)

    Dale, Corby L; Brown, Ethan G; Fisher, Melissa; Herman, Alexander B; Dowling, Anne F; Hinkley, Leighton B; Subramaniam, Karuna; Nagarajan, Srikantan S; Vinogradov, Sophia

    2016-01-01

    Schizophrenia is characterized by dysfunction in basic auditory processing, as well as higher-order operations of verbal learning and executive functions. We investigated whether targeted cognitive training of auditory processing improves neural responses to speech stimuli, and how these changes relate to higher-order cognitive functions. Patients with schizophrenia performed an auditory syllable identification task during magnetoencephalography before and after 50 hours of either targeted cognitive training or a computer games control. Healthy comparison subjects were assessed at baseline and after a 10 week no-contact interval. Prior to training, patients (N = 34) showed reduced M100 response in primary auditory cortex relative to healthy participants (N = 13). At reassessment, only the targeted cognitive training patient group (N = 18) exhibited increased M100 responses. Additionally, this group showed increased induced high gamma band activity within left dorsolateral prefrontal cortex immediately after stimulus presentation, and later in bilateral temporal cortices. Training-related changes in neural activity correlated with changes in executive function scores but not verbal learning and memory. These data suggest that computerized cognitive training that targets auditory and verbal learning operations enhances both sensory responses in auditory cortex as well as engagement of prefrontal regions, as indexed during an auditory processing task with low demands on working memory. This neural circuit enhancement is in turn associated with better executive function but not verbal memory. PMID:26152668

  6. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, M.

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability to parallel implementation, and scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism, and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
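
    The symmetrization mentioned in the abstract can be illustrated on a toy scene. The classical radiosity system is B = E + ρFB, i.e. (I − ρF)B = E; dividing row i by ρ_i gives (ρ⁻¹ − F)B = ρ⁻¹E, whose matrix is symmetric whenever F is (which holds by reciprocity when patch areas are equal). All numbers below are illustrative, and this toy ignores the hierarchical refinement that is the dissertation's actual subject.

```python
import numpy as np

# Toy 3-patch scene with equal areas, so F is symmetric by reciprocity.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])      # form factors
rho = np.diag([0.5, 0.7, 0.4])       # diffuse reflectances
E = np.array([1.0, 0.0, 0.0])        # only patch 0 emits

# Standard (non-symmetric) radiosity solve: (I - rho F) B = E
B = np.linalg.solve(np.eye(3) - rho @ F, E)

# Equivalent symmetric form: (rho^-1 - F) B = rho^-1 E
rho_inv = np.linalg.inv(rho)
K = rho_inv - F
B_sym = np.linalg.solve(K, rho_inv @ E)
```

    A symmetric system matrix opens the door to solvers (e.g. conjugate-gradient-style methods) that the unsymmetric form does not admit directly.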

  7. Tagging multimedia stimuli with ontologies

    OpenAIRE

    Horvat, Marko; Popovic, Sinisa; Bogunovic, Nikola; Cosic, Kresimir

    2009-01-01

    Successful management of emotional stimuli is a pivotal issue concerning Affective Computing (AC) and the related research. As a subfield of Artificial Intelligence, AC is concerned not only with the design of computer systems and the accompanying hardware that can recognize, interpret, and process human emotions, but also with the development of systems that can trigger human emotional response in an ordered and controlled manner. This requires the maximum attainable precision and efficiency...

  8. Pavlovian conditioning with proximal stimuli.

    Science.gov (United States)

    Lachnit, H; Bohn, A

    1986-01-01

    This experiment was conducted with the objective of demonstrating that the effective stimuli in Pavlovian conditioning are not environmental stimuli but internal physiological processes elicited by environmental input (proximal stimuli). In order to achieve the objective, afterimages in color vision were used: looking at a diffusely lit circle after seeing a red circle yields an image of a green circle. A differential conditioning paradigm with two sequential compounds was run. In one group (G+B-: n1 = 10), a red circle followed by a green circle was paired with shock, whereas a red circle followed by a blue circle remained unpaired. A second group (G-B+: n2 = 10) received red-blue paired trials and unpaired red-green trials. Immediately after that training, subjects were tested with a new, never trained sequential compound: a red circle followed by a diffusely lit circle. Furthermore, they were tested with the already trained compounds. Taking the environmental point of view, the never trained stimulus should elicit an orienting response lying in between the excitatory reaction to the paired stimulus and the inhibitory reaction to the unpaired stimulus. From the proximal point of view, the diffuse light should elicit an excitatory reaction in group G+B- and an inhibitory reaction in group G-B+. Electrodermal conditioned anticipatory and omission responses were measured. The results supported the proximal hypothesis. Hence, defining input in environmental terms may be the wrong approach. Instead, in conceptualizing the stimulus in conditioning, the following should be considered: the processing organism itself is creating the effective stimuli. PMID:3785989

  9. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  10. The Perception of Auditory Motion.

    Science.gov (United States)

    Carlile, Simon; Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  11. On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception.

    Directory of Open Access Journals (Sweden)

    Marion Cousineau

    Full Text Available Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem, as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). "Neural Pitch Salience" (NPS) measured from FFRs, essentially a time-domain equivalent of the classic pattern recognition models of pitch, has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as a place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully controlled synthetic sounds, when probing auditory perception.
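
    The NPS measure invoked above is described as a time-domain equivalent of pattern-recognition pitch models, i.e. a measure of temporal regularity in the FFR. A minimal illustrative sketch (not the authors' actual pipeline; the function and its parameters are assumptions for demonstration) computes salience as the largest normalized-autocorrelation peak within a plausible pitch-lag range:

```python
import numpy as np

def neural_pitch_salience(x, fs, fmin=80.0, fmax=400.0):
    """Toy time-domain pitch salience: height of the largest
    normalized-autocorrelation peak at lags corresponding to
    candidate pitch periods between fmin and fmax (Hz)."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[x.size - 1:]
    ac = ac / ac[0]  # lag-0 autocorrelation normalized to 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    return float(ac[lo:hi + 1].max())

fs = 8000
t = np.arange(0, 0.5, 1 / fs)
periodic = np.sin(2 * np.pi * 100 * t)                    # highly harmonic "response"
noise = np.random.default_rng(0).standard_normal(t.size)  # aperiodic control
print(neural_pitch_salience(periodic, fs))  # near 1
print(neural_pitch_salience(noise, fs))     # far below 1
```

    A periodic (harmonic) waveform yields salience near 1, whereas an aperiodic one does not, which is the intuition behind relating FFR harmonicity to consonance.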

  12. Auditory fMRI of Sound Intensity and Loudness for Unilateral Stimulation.

    Science.gov (United States)

    Behler, Oliver; Uppenkamp, Stefan

    2016-01-01

    We report a systematic exploration of the interrelation of sound intensity, ear of entry, individual loudness judgments, and brain activity across hemispheres, using auditory functional magnetic resonance imaging (fMRI). The stimuli employed were 4-kHz bandpass-filtered noise stimuli, presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. Normal hearing listeners completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound intensity and loudness estimates was analyzed by means of linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of sensory stimulation and perception. PMID:27080657
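
    The analysis described above fits, per region of interest, a linear relation between the BOLD response and presentation level. A much-simplified stand-in (ordinary least squares on synthetic data rather than the study's linear mixed effects models; the ROI names, effect sizes, and noise level are invented) illustrates the kind of slope comparison involved:

```python
import numpy as np

# Hypothetical synthetic data: BOLD amplitude per ROI as a linear
# function of presentation level (dB SPL), loosely mimicking the design.
levels = np.array([37.0, 49.0, 61.0, 73.0, 85.0, 97.0])
rng = np.random.default_rng(1)
rois = {
    "inferior_colliculus": 0.010,  # assumed shallow growth with level
    "auditory_cortex": 0.030,      # assumed steeper growth with level
}

slopes = {}
for roi, true_slope in rois.items():
    bold = 0.2 + true_slope * levels + rng.normal(0, 0.01, levels.size)
    b, a = np.polyfit(levels, bold, 1)  # least-squares line: BOLD ~ a + b * level
    slopes[roi] = b

print(slopes)  # cortical slope exceeds midbrain slope in this toy example
```

    In the toy data the cortical slope is steeper than the midbrain slope, the sort of region-wise difference the abstract interprets as a processing hierarchy.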

  13. Attention and response control in ADHD. Evaluation through integrated visual and auditory continuous performance test.

    Science.gov (United States)

    Moreno-García, Inmaculada; Delgado-Pardo, Gracia; Roldán-Blasco, Carmen

    2015-01-01

    This study assesses attention and response control through visual and auditory stimuli in a primary care pediatric sample. The sample consisted of 191 participants aged between 7 and 13 years old. It was divided into 2 groups: (a) 90 children with ADHD, according to diagnostic (DSM-IV-TR) (APA, 2002) and clinical (ADHD Rating Scale-IV) (DuPaul, Power, Anastopoulos, & Reid, 1998) criteria, and (b) 101 children without a history of ADHD. The aims were: (a) to determine and compare the performance of both groups in attention and response control, and (b) to identify attention and response control deficits in the ADHD group. Assessments were carried out using the Integrated Visual and Auditory Continuous Performance Test (IVA/CPT, Sandford & Turner, 2002). Results showed that the ADHD group had visual and auditory attention deficits, F(3, 170) = 14.38. Children with ADHD showed inattention, mental processing speed deficits, and loss of concentration with visual stimuli. Both groups yielded a better performance in attention with auditory stimuli. PMID:25734571

  14. Listen to Yourself: The Medial Prefrontal Cortex Modulates Auditory Alpha Power During Speech Preparation.

    Science.gov (United States)

    Müller, Nadia; Leske, Sabine; Hartmann, Thomas; Szebényi, Szabolcs; Weisz, Nathan

    2015-11-01

    How do we process stimuli that stem from the external world and stimuli that are self-generated? In the case of voice perception it has been shown that evoked activity elicited by self-generated sounds is suppressed compared with the same sounds played back externally. We here wanted to reveal whether neural excitability of the auditory cortex, putatively reflected in local alpha band power, is modulated already prior to speech onset, and which brain regions may mediate such a top-down preparatory response. In the left auditory cortex we show that the typical alpha suppression found when participants prepare to listen disappears when participants expect a self-spoken sound. This suggests an inhibitory adjustment of auditory cortical activity already before sound onset. As a second main finding we demonstrate that the medial prefrontal cortex, a region known for self-referential processes, mediates these condition-specific alpha power modulations. This provides crucial insights into how higher-order regions prepare the auditory cortex for the processing of self-generated sounds. Furthermore, the mechanism outlined could provide further explanations for self-referential phenomena, such as "tickling yourself". Finally, it has implications for the so-far unsolved question of how auditory alpha power is mediated by higher-order regions in a more general sense. PMID:24904068

  15. Attentional demands influence vocal compensations to pitch errors heard in auditory feedback.

    Science.gov (United States)

    Tumber, Anupreet K; Scheerer, Nichole E; Jones, Jeffery A

    2014-01-01

    Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by a ½ semitone in both single and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce low, intermediate, and high attentional loads. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that the effect of attentional load was not on the auditory processing of pitch-altered feedback; instead, it interfered with the integration of auditory and motor information, or with motor control itself. PMID:25303649
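
    The ½-semitone downward perturbation corresponds, assuming equal temperament, to multiplying the voice's fundamental frequency by 2^(s/12) with s = -0.5; a small arithmetic sketch:

```python
def shift_frequency(f_hz, semitones):
    """Scale a frequency by a (possibly fractional) number of
    equal-tempered semitones; negative values shift downward."""
    return f_hz * 2.0 ** (semitones / 12.0)

# Example: a 220 Hz voice fundamental shifted down by half a semitone,
# as in the feedback perturbation described above.
print(shift_frequency(220.0, -0.5))  # about 213.7 Hz
```

    The same formula gives the familiar octave doubling for s = 12.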

  16. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  17. Music perception and cognition following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Tramo, M J; Bharucha, J J; Musiek, F E

    1990-01-01

    We present experimental and anatomical data from a case study of impaired auditory perception following bilateral hemispheric strokes. To consider the cortical representation of sensory, perceptual, and cognitive functions mediating tonal information processing in music, pure tone sensation thresholds, spectral intonation judgments, and the associative priming of spectral intonation judgments by harmonic context were examined, and lesion localization was analyzed quantitatively using straight-line two-dimensional maps of the cortical surface reconstructed from magnetic resonance images. Despite normal pure tone sensation thresholds at 250-8000 Hz, the perception of tonal spectra was severely impaired, such that harmonic structures (major triads) were almost uniformly judged to sound dissonant; yet, the associative priming of spectral intonation judgments by harmonic context was preserved, indicating that cognitive representations of tonal hierarchies in music remained intact and accessible. Brainprints demonstrated complete bilateral lesions of the transverse gyri of Heschl and partial lesions of the right and left superior temporal gyri involving 98 and 20% of their surface areas, respectively. In the right hemisphere, there was partial sparing of the planum temporale, temporoparietal junction, and inferior parietal cortex. In the left hemisphere, all of the superior temporal region anterior to the transverse gyrus and parts of the planum temporale, temporoparietal junction, inferior parietal cortex, and insula were spared. These observations suggest that (1) sensory, perceptual, and cognitive functions mediating tonal information processing in music are neurologically dissociable; (2) complete bilateral lesions of primary auditory cortex combined with partial bilateral lesions of auditory association cortex chronically impair tonal consonance perception; (3) cognitive functions that hierarchically structure pitch information and generate harmonic expectancies

  18. Motion-induced disturbance of auditory-motor synchronization and its modulation by transcranial direct current stimulation.

    Science.gov (United States)

    Ono, Kentaro; Mikami, Yusuke; Fukuyama, Hidenao; Mima, Tatsuya

    2016-02-01

    The timing of personal movement with respect to external events has previously been investigated using a synchronized finger-tapping task with a sequence of auditory or visual stimuli. While visuomotor synchronization is more accurate with moving stimuli than with stationary stimuli, it remains unclear whether the same principle holds true in the auditory domain. Although the right inferior-superior parietal lobe (IPL/SPL), a center of auditory motion processing, is expected to be involved in auditory-motor synchronization with moving sounds, its functional relevance has not yet been investigated. The aim of the present study was thus to clarify whether horizontal auditory motion affects the accuracy of finger-tapping synchronized with sounds, as well as whether the application of transcranial direct current stimulation (tDCS) to the right IPL/SPL affects this. Nineteen healthy right-handed participants performed a task in which tapping was synchronized with both stationary sounds and sounds that created apparent horizontal motion. This task was performed before and during anodal, cathodal and sham tDCS application to the right IPL/SPL in separate sessions. The time difference between the onset of the sounds and tapping was larger with apparently moving sounds than with stationary sounds. Cathodal tDCS decreased this difference, anodal tDCS increased the variance of the difference and sham stimulation had no effect. These results supported the hypothesis that auditory motion disturbs efficient auditory-motor synchronization and that the right IPL/SPL plays an important role in tapping in synchrony with moving sounds via auditory motion processing. PMID:26613559
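
    Synchronization accuracy in tapping tasks like this one is conventionally summarized by the signed tap-tone asynchrony and its variability, the two quantities the tDCS conditions modulated above. A minimal sketch with invented onset times (the 800 ms inter-onset interval and the anticipatory taps are assumptions for illustration):

```python
import numpy as np

def asynchrony_stats(sound_onsets_ms, tap_times_ms):
    """Mean signed asynchrony (tap minus sound onset; negative means
    the tap anticipates the tone) and its standard deviation."""
    d = np.asarray(tap_times_ms, float) - np.asarray(sound_onsets_ms, float)
    return d.mean(), d.std(ddof=1)

# Hypothetical sequence with an 800 ms inter-onset interval and taps
# that lead the tones by about 40 ms, plus trial-to-trial jitter.
sounds = np.arange(0, 8000, 800)
taps = sounds - 40 + np.array([5, -3, 8, 0, -6, 2, 4, -2, 1, -9])
mean_async, sd_async = asynchrony_stats(sounds, taps)
print(mean_async, sd_async)
```

    The mean captures the systematic anticipation, while the standard deviation is the variability measure that anodal tDCS increased in the study.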

  19. Binocular Combination of Second-Order Stimuli

    OpenAIRE

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F.

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first- (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on...

  20. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren;

    , and that there is a ‘race’ for selection and representation in visual short term memory (VSTM). In the basic TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of the letters (e.g., the red letters; partial report...... to be measured for auditory attention; providing insights into impaired auditory attention in old adults and neuropsychological patients, and allowing direct comparisons with visual attention. In the visual task, the stimuli are simultaneous, stationary (unchanging over time), and separated in space...

  1. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters the spike timing and rates of pyramidal cell responses to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), with greater suppression at 20 ms pairing intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience

  2. Electrostimulation mapping of comprehension of auditory and visual words.

    Science.gov (United States)

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine graded, sub-centimetre, cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. PMID:26332785

  3. How does the extraction of local and global auditory regularities vary with context?

    Directory of Open Access Journals (Sweden)

    Sébastien Marti

    Full Text Available How does the human brain extract regularities from its environment? There is evidence that short range or 'local' regularities (within seconds) are automatically detected by the brain while long range or 'global' regularities (over tens of seconds or more) require conscious awareness. In the present experiment, we asked whether participants' attention was needed to acquire such auditory regularities, to detect their violation, or both. We designed a paradigm in which participants listened to predictable sounds. Subjects could be distracted by a visual task at two moments: when they were first exposed to a regularity or when they detected violations of this regularity. MEG recordings revealed that early brain responses (100-130 ms) to violations of short range regularities were unaffected by visual distraction and driven essentially by local transitional probabilities. Based on global workspace theory and prior results, we expected that visual distraction would eliminate the long range global effect, but unexpectedly, we found the contrary, i.e. late brain responses (300-600 ms) to violations of long range regularities on audio-visual trials but not on auditory-only trials. Further analyses showed that, in fact, visual distraction was incomplete and that auditory and visual stimuli interfered in both directions. Our results show that conscious, attentive subjects can learn the long range dependencies present in auditory stimuli even while performing a visual task on synchronous visual stimuli. Furthermore, they acquire a complex regularity and end up making different predictions for the very same stimulus depending on the context (i.e. absence or presence of visual stimuli). These results suggest that while short-range regularity detection is driven by local transitional probabilities between stimuli, the human brain detects and stores long-range regularities in a highly flexible, context dependent manner.
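
    The local transitional probabilities said to drive the early responses can be estimated directly from the stimulus sequence by counting first-order (bigram) transitions. A small illustrative sketch, using a toy local-global-style sequence of repeated "AAAAB" blocks (the tokens and block structure are invented for the example):

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """First-order transition probabilities P(next | current),
    estimated from bigram counts in a token sequence."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return {
        cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for cur, c in counts.items()
    }

# Frequent AAAAB blocks make the A->A transition far more probable
# than A->B, so a final B is locally deviant even though it is
# globally the rule.
seq = list("AAAAB" * 20)
p = transition_probabilities(seq)
print(p["A"])  # A->A is 0.75, A->B is 0.25 in this sequence
```

    On this view, an early mismatch response tracks the low-probability transition (A followed by B), independently of the longer-range block-level rule.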

  4. Auditory profile and high resolution CT scan in autism spectrum disorders children with auditory hypersensitivity.

    Science.gov (United States)

    Thabet, Elsaeid M; Zaghloul, Hesham S

    2013-08-01

    Autism is the third most common developmental disorder, following mental retardation and cerebral palsy. ASD children have been described more often as being preoccupied with or agitated by noise. The aim of this study was to evaluate the prevalence and clinical significance of semicircular canal dehiscence detected on CT images in ASD children with intolerance to loud sounds, in an attempt to find an anatomical correlate with hyperacusis. 14 ASD children with auditory hypersensitivity and 15 ASD children without auditory hypersensitivity, as an age- and gender-matched control group, were submitted to history taking, otological examination, tympanometry and acoustic reflex threshold measurement. ABR was done to validate normal peripheral hearing and integrity of the auditory brainstem pathway. High resolution CT scanning of the petrous and temporal bone was performed for all participating children. All participants had normal hearing sensitivity in ABR testing. Absolute ABR peak waves I and III showed no statistically significant difference between the two groups, while the absolute wave V peak and interpeak latencies I-V and III-V were shorter in duration in the study group when compared to the control group. CT scans revealed SSCD in 4 out of 14 of the study group (29%); the dehiscence was bilateral in one patient and unilateral in three patients. None of the control group showed SSCD. In conclusion, we have reported evidence that apparent hypersensitivity to auditory stimuli (short conduction time in ABR) despite normal physiological measures in ASD children with auditory hypersensitivity can provide a clinical clue to a possible SSCD. PMID:23580033

  5. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten;

    2011-01-01

    Hierarchical (or mesoporous) zeolites have attracted significant attention during the first decade of the 21st century, and so far this interest continues to increase. There have already been several reviews giving detailed accounts of the developments emphasizing different aspects of this resear...

  6. Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-07

    Materials Design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low density materials, such as aero or hydrogels, more recently the idea of bicontinuous structures has gone more into play. This review will cover some of the methods and applications for generating both porous, and hierarchically porous structures.

  7. Tight bifunctional hierarchical catalyst.

    Science.gov (United States)

    Højholt, Karen T; Vennestrøm, Peter N R; Tiruvalam, Ramchandra; Beato, Pablo

    2011-12-28

    A new concept to prepare tight bifunctional catalysts has been developed, by anchoring CoMo(6) clusters on hierarchical ZSM-5 zeolites for simultaneous use in HDS and hydrocracking catalysis. The prepared material displays significantly improved activity in HDS catalysis compared to the impregnated counterpart. PMID:22048337

  8. Hierarchically Acting Sterile Neutrinos

    OpenAIRE

    Chen, Chian-Shu(Physics Division, National Center for Theoretical Sciences, Hsinchu, 300, Taiwan); Takahashi, Ryo

    2011-01-01

    We propose that a hierarchical spectrum of sterile neutrinos (eV, keV, $10^{13-15}$ GeV) can explain the MiniBooNE and LSND oscillation anomalies, dark matter, and the baryon asymmetry of the universe (BAU), respectively. The scenario can also realize the smallness of the active neutrino masses via the seesaw mechanism.

  9. Rapid context-based identification of target sounds in an auditory scene

    Science.gov (United States)

    Gamble, Marissa L.; Woldorff, Marty G.

    2015-01-01

    To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. While it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff (2014) recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented, auditory scene. They reported an early, differential, bilateral activation (beginning ~60 ms) between feature-deviating Target stimuli and physically equivalent feature-deviating Nontargets, reflecting a rapid Target-detection process. This was followed shortly later (~130 ms) by the lateralized N2ac ERP activation, reflecting the focusing of auditory spatial attention toward the Target sound and paralleling attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, Target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included Target and Nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect results from a template for the Target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, individual-differences analysis showed that the size of this effect was larger for subjects with faster response times. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a Target sound, making use of additional auditory information in the environment when needing to rapidly detect a relevant sound. PMID:25848684

  10. Functional MR imaging of cerebral auditory cortex with linguistic and non-linguistic stimulation: preliminary study

    International Nuclear Information System (INIS)

    To obtain preliminary data for understanding the central auditory neural pathway by means of functional MR imaging (fMRI) of the cerebral auditory cortex during linguistic and non-linguistic auditory stimulation. In three right-handed volunteers we conducted fMRI of auditory cortex stimulation at 1.5 T using a conventional gradient-echo technique (TR/TE/flip angle: 80/60/40 deg). Using a pulsed tone of 1000 Hz and speech as non-linguistic and linguistic auditory stimuli, respectively, images, including those of the superior temporal gyrus of both hemispheres, were obtained in sagittal planes. Both stimuli were separately delivered binaurally or monaurally through a plastic earphone. Activation maps were generated with in-house software. In order to analyze patterns of auditory cortex activation according to the type of stimulus and which side of the ear was stimulated, the number and extent of activated pixels were compared between both temporal lobes. Binaural stimulation led to bilateral activation of the superior temporal gyrus, while monaural stimulation led to more activation in the contralateral temporal lobe than in the ipsilateral one. A trend toward slight activation of the left (dominant) temporal lobe in ipsilateral stimulation, particularly with a linguistic stimulus, was observed. During both binaural and monaural stimulation, a linguistic stimulus produced more widespread activation than did a non-linguistic one. The superior temporal gyri of both temporal lobes are associated with acoustic-phonetic analysis, and the left (dominant) superior temporal gyrus is likely to play a dominant role in this processing. For better understanding of physiological and pathological central auditory pathways, further investigation is needed.
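
    Hemispheric asymmetries of the kind compared here (activated pixel counts in the left vs. right temporal lobes) are commonly summarized with a laterality index, LI = (L - R) / (L + R). A minimal sketch (the pixel counts are invented):

```python
def laterality_index(left_count, right_count):
    """Standard fMRI laterality index: +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = symmetric activation."""
    total = left_count + right_count
    if total == 0:
        raise ValueError("no activated pixels in either hemisphere")
    return (left_count - right_count) / total

# Hypothetical pixel counts for left-ear (monaural) stimulation:
# more activation contralaterally, i.e. in the right temporal lobe.
print(laterality_index(120, 180))  # -0.2, a rightward (contralateral) bias
```

    A positive index for ipsilateral-left stimulation with speech would correspond to the left-dominant trend the abstract describes.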

  11. Enhanced audio-visual interactions in the auditory cortex of elderly cochlear-implant users.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Schulte, Svenja; Hauthal, Nadine; Kantzke, Christoph; Rach, Stefan; Büchner, Andreas; Dengler, Reinhard; Sandmann, Pascale

    2015-10-01

    Auditory deprivation and the restoration of hearing via a cochlear implant (CI) can induce functional plasticity in auditory cortical areas. How these plastic changes affect the ability to integrate combined auditory (A) and visual (V) information is not yet well understood. In the present study, we used electroencephalography (EEG) to examine whether age, temporary deafness and altered sensory experience with a CI can affect audio-visual (AV) interactions in post-lingually deafened CI users. Young and elderly CI users and age-matched normal-hearing (NH) listeners performed a speeded response task on basic auditory, visual and audio-visual stimuli. Regarding the behavioral results, a redundant signals effect, that is, faster response times to cross-modal (AV) than to either of the two modality-specific stimuli (A, V), was revealed for all groups of participants. Moreover, in all four groups, we found evidence for audio-visual integration. Regarding event-related responses (ERPs), we observed a more pronounced visual modulation of the cortical auditory response at N1 latency (approximately 100 ms after stimulus onset) in the elderly CI users when compared with young CI users and elderly NH listeners. Thus, elderly CI users showed enhanced audio-visual binding which may be a consequence of compensatory strategies developed due to temporary deafness and/or degraded sensory input after implantation. These results indicate that the combination of aging, sensory deprivation and CI facilitates the coupling between the auditory and the visual modality. We suggest that this enhancement in multisensory interactions could be used to optimize auditory rehabilitation, especially in elderly CI users, by the application of strong audio-visually based rehabilitation strategies after implant switch-on. PMID:26302946
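
    A redundant signals effect like the one reported above is typically probed further with Miller's race model inequality, F_AV(t) ≤ F_A(t) + F_V(t): if the empirical AV response-time distribution exceeds the summed unisensory bound at any time, a simple race between modalities cannot explain the speed-up. An illustrative check on invented response times (the distributions and sample sizes are assumptions):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of response times, evaluated at t."""
    rts = np.sort(np.asarray(rts, float))
    return np.searchsorted(rts, t, side="right") / rts.size

def race_model_violation(rt_av, rt_a, rt_v, t_grid):
    """Maximum amount by which F_AV(t) exceeds the Miller bound
    min(F_A(t) + F_V(t), 1); positive values suggest coactivation."""
    bound = np.minimum(ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid), 1.0)
    return float(np.max(ecdf(rt_av, t_grid) - bound))

# Hypothetical RTs (ms): redundant AV responses much faster than either
# unisensory condition, so the race-model bound is clearly exceeded.
rng = np.random.default_rng(2)
rt_a = rng.normal(420, 40, 200)
rt_v = rng.normal(450, 40, 200)
rt_av = rng.normal(330, 30, 200)
t = np.linspace(200, 600, 401)
print(race_model_violation(rt_av, rt_a, rt_v, t))  # positive -> violation
```

    In real data, a violation of the bound is one behavioral marker of the audio-visual integration the EEG results address.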

  12. Auditory-Verbal Comprehension Development of 2-5 Year Old Normal Persian Speaking Children in Tehran, Iran

    Directory of Open Access Journals (Sweden)

    Fariba Yadegari

    2011-06-01

    Full Text Available Background and Aim: Understanding and defining developmental norms of auditory comprehension is necessary for detecting auditory-verbal comprehension impairments in children. We investigated the lexical auditory development of Persian (Farsi) speaking children. Methods: In this cross-sectional study, researchers first observed the comprehension by four 2-5 year old normal children of adults' child-directed utterances at available nurseries, in order to collect a large pool of words comprehensible to children of the same age. The words were classified into nouns, verbs and adjectives. Auditory-verbal comprehension task items were organized into two sections, subordinate and superordinate auditory comprehension, and colored pictures were provided for each item. Thirty 2-5 year old normal children were randomly selected from nurseries all over Tehran, tested with this task, and the means of their correct responses were analyzed. Results: The findings revealed a high positive correlation between auditory-verbal comprehension and age (r=0.804, p=0.001). Comparing children in 3 age groups of 2-3, 3-4 and 4-5 years showed that subordinate and superordinate auditory comprehension of the youngest group was significantly lower (p<0.05), and the difference between subordinate and superordinate auditory comprehension was significant in all age groups (p<0.05). Conclusion: Auditory-verbal comprehension develops much faster at younger than at older ages, and there is no prominent difference between word classes (nouns, verbs and adjectives). The slower development of superordinate auditory comprehension implies a semantic hierarchical evolution of words.
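
    The age-comprehension association reported above (r = 0.804) is a Pearson correlation; a minimal sketch of that computation, on hypothetical age/score pairs, is shown below.

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: age in months vs. comprehension-task score.
ages   = [26, 30, 35, 41, 47, 52, 58]
scores = [11, 14, 13, 19, 22, 25, 27]
print(f"r = {pearson_r(ages, scores):.3f}")
```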

  13. Auditory perspective taking.

    Science.gov (United States)

    Martinson, Eric; Brock, Derek

    2013-06-01

    Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077

  14. Tactile feedback improves auditory spatial localization

    OpenAIRE

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds b...

  16. Auditory and non-auditory effects of noise on health

    OpenAIRE

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-aud...

  17. The power of auditory-motor synchronization in sports: enhancing running performance by coupling cadence with the right beats.

    Directory of Open Access Journals (Sweden)

    Robert Jan Bood

    Full Text Available Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in (1) a control condition without acoustic stimuli, (2) a metronome condition with a sequence of beeps matching participants' cadence (synchronization), and (3) a music condition with synchronous motivational music matched to participants' cadence (synchronization + motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli, which was most salient during the metronome condition, helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner's cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by…

  18. Behavioral Distraction by Auditory Novelty Is Not Only about Novelty: The Role of the Distracter's Informational Value

    Science.gov (United States)

    Parmentier, Fabrice B. R.; Elsley, Jane V.; Ljungberg, Jessica K.

    2010-01-01

    Unexpected events often distract us. In the laboratory, novel auditory stimuli have been shown to capture attention away from a focal visual task and yield specific electrophysiological responses as well as a behavioral cost to performance. Distraction is thought to follow ineluctably from the sound's low probability of occurrence or, put more…

  19. A real-time detector system for precise timing of audiovisual stimuli.

    Science.gov (United States)

    Henelius, Andreas; Jagadeesan, Sharman; Huotilainen, Minna

    2012-01-01

    The successful recording of neurophysiologic signals, such as event-related potentials (ERPs) or event-related magnetic fields (ERFs), relies on precise information of stimulus presentation times. We have developed an accurate and flexible audiovisual sensor solution operating in real-time for on-line use in both auditory and visual ERP and ERF paradigms. The sensor functions independently of the used audio or video stimulus presentation tools or signal acquisition system. The sensor solution consists of two independent sensors; one for sound and one for light. The microcontroller-based audio sensor incorporates a novel approach to the detection of natural sounds such as multipart audio stimuli, using an adjustable dead time. This aids in producing exact markers for complex auditory stimuli and reduces the number of false detections. The analog photosensor circuit detects changes in light intensity on the screen and produces a marker for changes exceeding a threshold. The microcontroller software for the audio sensor is free and open source, allowing other researchers to customise the sensor for use in specific auditory ERP/ERF paradigms. The hardware schematics and software for the audiovisual sensor are freely available from the webpage of the authors' lab. PMID:23365952
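
    The adjustable dead-time idea described for the audio sensor can be sketched in software: once a threshold crossing produces a marker, further crossings are suppressed until the dead time elapses, so a multipart stimulus yields a single marker. The threshold and timing values below are illustrative assumptions, not the authors' firmware parameters.

```python
def detect_onsets(samples, fs, threshold, dead_time):
    """Return onset times (s) where |amplitude| exceeds `threshold`,
    suppressing further detections within `dead_time` seconds."""
    onsets = []
    last = None
    for i, s in enumerate(samples):
        t = i / fs
        if abs(s) >= threshold and (last is None or t - last >= dead_time):
            onsets.append(t)
            last = t
    return onsets

# A two-burst stimulus 50 ms apart: with a 100 ms dead time it is
# marked once; with no dead time each burst is marked separately.
fs = 1000  # sample rate in Hz
sig = [0.0] * 200
sig[10] = sig[60] = 1.0                    # bursts at 10 ms and 60 ms
print(detect_onsets(sig, fs, 0.5, 0.100))  # -> [0.01]
print(detect_onsets(sig, fs, 0.5, 0.0))    # -> [0.01, 0.06]
```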

  20. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty, a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
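
    The uncertainty measure used in this work, the Shannon entropy of a model's next-event distribution, can be sketched as follows; the toy bigram counts are hypothetical stand-ins, not the study's variable-order Markov model.

```python
import math
from collections import Counter

def shannon_entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy bigram model: counts of continuation notes after a given context.
low_entropy_context  = Counter({"G4": 9, "A4": 1})           # highly predictable
high_entropy_context = Counter({"C4": 2, "D4": 3, "E4": 2, "G4": 3})

for name, counts in [("low", low_entropy_context), ("high", high_entropy_context)]:
    total = sum(counts.values())
    h = shannon_entropy(c / total for c in counts.values())
    print(f"{name}-entropy context: H = {h:.2f} bits")
```

    A predictable context (one dominant continuation) yields low entropy, while a context whose continuations are nearly equiprobable yields high entropy, matching the high/low-entropy melodic contexts contrasted in the study.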

  2. Regional brain responses in nulliparous women to emotional infant stimuli.

    Science.gov (United States)

    Montoya, Jessica L; Landi, Nicole; Kober, Hedy; Worhunsky, Patrick D; Rutherford, Helena J V; Mencl, W Einar; Mayes, Linda C; Potenza, Marc N

    2012-01-01

    Infant cries and facial expressions influence social interactions and elicit caretaking behaviors from adults. Recent neuroimaging studies suggest that neural responses to infant stimuli involve brain regions that process rewards. However, these studies have yet to investigate individual differences in tendencies to engage or withdraw from motivationally relevant stimuli. To investigate this, we used event-related fMRI to scan 17 nulliparous women. Participants were presented with novel infant cries of two distress levels (low and high) and unknown infant faces of varying affect (happy, sad, and neutral) in a randomized, counter-balanced order. Brain activation was subsequently correlated with scores on the Behavioral Inhibition System/Behavioral Activation System scale. Infant cries activated bilateral superior and middle temporal gyri (STG and MTG) and precentral and postcentral gyri. Activation was greater in bilateral temporal cortices for low- relative to high-distress cries. Happy relative to neutral faces activated the ventral striatum, caudate, ventromedial prefrontal, and orbitofrontal cortices. Sad versus neutral faces activated the precuneus, cuneus, and posterior cingulate cortex, and behavioral activation drive correlated with occipital cortical activations in this contrast. Behavioral inhibition correlated with activation in the right STG for high- and low-distress cries relative to pink noise. Behavioral drive correlated inversely with putamen, caudate, and thalamic activations for the comparison of high-distress cries to pink noise. Reward-responsiveness correlated with activation in the left precentral gyrus during the perception of low-distress cries relative to pink noise. Our findings indicate that infant cry stimuli elicit activations in areas implicated in auditory processing and social cognition. Happy infant faces may be encoded as rewarding, whereas sad faces activate regions associated with empathic processing. Differences in motivational…

  4. Efferent auditory system: its effect on auditory processing

    Directory of Open Access Journals (Sweden)

    Fernanda Acaui Ribeiro Burguetti

    2008-10-01

    Full Text Available Auditory processing depends on the integrity of the afferent and efferent auditory pathways. The efferent auditory system can be assessed by two non-invasive, objective methods: the acoustic reflex and otoacoustic emission suppression. AIM: To assess efferent auditory system activity, by means of otoacoustic emission (OAE) suppression and acoustic reflex sensitization, in auditory processing disorder. METHOD: Prospective study: 50 children with auditory processing disorders (study group) and 38 children without such disorders (control group) were evaluated using OAEs in the absence and presence of contralateral noise, and using acoustic reflex thresholds in the absence and presence of a contralateral facilitating stimulus. RESULTS: Mean OAE suppression was up to 1.50 dB in the control group and up to 1.26 dB in the study group. Mean reflex sensitization was up to 14.60 dB in the study group and up to 15.21 dB in the control group. There was no statistically significant difference between the control and study groups in either procedure. CONCLUSION: Relative to the control group, the study group showed reduced OAE suppression values and increased acoustic reflex sensitization values.

  5. Norepinephrine is necessary for experience-dependent plasticity in the developing mouse auditory cortex.

    Science.gov (United States)

    Shepard, Kathryn N; Liles, L Cameron; Weinshenker, David; Liu, Robert C

    2015-02-11

    Critical periods are developmental windows during which the stimuli an animal encounters can reshape response properties in the affected system to a profound degree. Despite this window's importance, the neural mechanisms that regulate it are not completely understood. Pioneering studies in visual cortex initially indicated that norepinephrine (NE) permits ocular dominance column plasticity during the critical period, but later research has suggested otherwise. More recent work implicating NE in experience-dependent plasticity in the adult auditory cortex led us to re-examine the role of NE in critical period plasticity. Here, we exposed dopamine β-hydroxylase knock-out (Dbh(-/-)) mice, which lack NE completely from birth, to a biased acoustic environment during the auditory cortical critical period. This manipulation led to a redistribution of best frequencies (BFs) across auditory cortex in our control mice, consistent with prior work. By contrast, Dbh(-/-) mice failed to exhibit the expected redistribution of BFs, even though NE-deficient and NE-competent mice showed comparable auditory cortical organization when reared in a quiet colony environment. These data suggest that while intrinsic tonotopic patterning of auditory cortical circuitry occurs independently from NE, NE is required for critical period plasticity in auditory cortex. PMID:25673838

  6. Auditory-olfactory integration: congruent or pleasant sounds amplify odor pleasantness.

    Science.gov (United States)

    Seo, Han-Seok; Hummel, Thomas

    2011-03-01

    Even though we often perceive odors while hearing auditory stimuli, surprisingly little is known about auditory-olfactory integration. This study aimed to investigate the influence of auditory cues on ratings of odor intensity and/or pleasantness, with a focus on 2 factors: "congruency" (Experiment 1) and the "halo/horns effect" of auditory pleasantness (Experiment 2). First, in Experiment 1, participants were presented with congruent, incongruent, or neutral sounds before and during the presentation of odor. Participants rated the odors as being more pleasant while listening to a congruent sound than while listening to an incongruent sound. In Experiment 2, participants received pleasant or unpleasant sounds before and during the presentation of either a pleasant or unpleasant odor. The hedonic valence of the sounds transferred to the odors, irrespective of the hedonic tone of the odor itself. The more the participants liked the preceding sound, the more pleasant the subsequent odor became. In contrast, the ratings of odor intensity appeared to be little or not at all influenced by the congruency or hedonic valence of the auditory cue. In conclusion, the present study for the first time provides an empirical demonstration that auditory cues can modulate odor pleasantness. PMID:21163913

  7. Eye Movements during Auditory Attention Predict Individual Differences in Dorsal Attention Network Activity

    Science.gov (United States)

    Braga, Rodrigo M.; Fu, Richard Z.; Seemungal, Barry M.; Wise, Richard J. S.; Leech, Robert

    2016-01-01

    The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and non-spatial listening task was used with on-line eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory task elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye-movements dissociated between the SPL and FEF. However these observed eye movements could not account for all the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least partly, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network’s remaining contribution to auditory attention is through covert crossmodal processes, or is directly involved in the manipulation of auditory information. PMID:27242465

  8. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Full Text Available Pegah Afra, Michael Funke, Fumisuke Matsuo, Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as with the intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  9. Auditory Processing Disorder in Children

    Science.gov (United States)


  10. Auditory Processing Disorder (For Parents)

    Science.gov (United States)


  11. Mechanisms and streams for processing of “what” and “where” in auditory cortex

    OpenAIRE

    Rauschecker, Josef P; Tian, Biao

    2000-01-01

    The functional specialization and hierarchical organization of multiple areas in rhesus monkey auditory cortex were examined with various types of complex sounds. Neurons in the lateral belt areas of the superior temporal gyrus were tuned to the best center frequency and bandwidth of band-passed noise bursts. They were also selective for the rate and direction of linear frequency-modulated sweeps. Many neurons showed a preference for a limited number of species-specific...

  12. The shape of ears to come: dynamic coding of auditory space.

    Science.gov (United States)

    King, A J.; Schnupp, J W.H.; Doubell, T P.

    2001-06-01

    In order to pinpoint the location of a sound source, we make use of a variety of spatial cues that arise from the direction-dependent manner in which sounds interact with the head, torso and external ears. Accurate sound localization relies on the neural discrimination of tiny differences in the values of these cues and requires that the brain circuits involved be calibrated to the cues experienced by each individual. There is growing evidence that the capacity for recalibrating auditory localization continues well into adult life. Many details of how the brain represents auditory space and of how those representations are shaped by learning and experience remain elusive. However, it is becoming increasingly clear that the task of processing auditory spatial information is distributed over different regions of the brain, some working hierarchically, others independently and in parallel, and each apparently using different strategies for encoding sound source location. PMID:11390297

  13. Deterministic hierarchical networks

    Science.gov (United States)

    Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.

    2016-06-01

    It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most of the studies for complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, it provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
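
    The parameters the paper determines exactly, degree distribution, clustering coefficient, radius and diameter, can be computed for any small graph given as an adjacency list. A generic sketch follows; the hub-and-triangles example graph is illustrative only, not the paper's hierarchical construction.

```python
from collections import Counter, deque

def eccentricity(adj, src):
    """Largest BFS distance from `src`; assumes a connected graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def local_clustering(adj, u):
    """Fraction of neighbour pairs of `u` that are themselves linked."""
    nbrs = list(adj[u])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2 * links / (k * (k - 1))

# A small illustrative graph: hub 0 plus two triangles.
adj = {0: {1, 2, 3, 4}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {0, 3}}
ecc = [eccentricity(adj, u) for u in adj]
print("diameter:", max(ecc), "radius:", min(ecc))
print("degree distribution:", Counter(len(adj[u]) for u in adj))
print("mean clustering:", sum(local_clustering(adj, u) for u in adj) / len(adj))
```

    The diameter and radius are the maximum and minimum vertex eccentricities, and a power-law fit to the degree distribution is what establishes the scale-free property mentioned above.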

  14. Nested Hierarchical Dirichlet Processes.

    Science.gov (United States)

    Paisley, John; Wang, Chong; Blei, David M; Jordan, Michael I

    2015-02-01

    We develop a nested hierarchical Dirichlet process (nHDP) for hierarchical topic modeling. The nHDP generalizes the nested Chinese restaurant process (nCRP) to allow each word to follow its own path to a topic node according to a per-document distribution over the paths on a shared tree. This alleviates the rigid, single-path formulation assumed by the nCRP, allowing documents to easily express complex thematic borrowings. We derive a stochastic variational inference algorithm for the model, which enables efficient inference for massive collections of text documents. We demonstrate our algorithm on 1.8 million documents from The New York Times and 2.7 million documents from Wikipedia. PMID:26353240
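
    The nested Chinese restaurant process that the nHDP generalizes can be sketched as a per-level CRP draw: at each node, a path either follows an existing branch with probability proportional to its usage count, or opens a new branch with probability proportional to a concentration parameter gamma. A minimal sketch under that description (parameter values are illustrative):

```python
import random

def ncrp_path(tree, depth, gamma, rng):
    """Sample one root-to-leaf path of length `depth` in a nested CRP.
    `tree` maps each path tuple to a list of per-child usage counts."""
    path = ()
    for _ in range(depth):
        counts = tree.setdefault(path, [])
        total = sum(counts) + gamma
        r = rng.random() * total
        for child, c in enumerate(counts):
            r -= c
            if r < 0:
                break                  # reuse an existing branch
        else:
            child = len(counts)        # open a new branch (prob ~ gamma)
            counts.append(0)
        counts[child] += 1
        path = path + (child,)
    return path

rng = random.Random(0)
tree = {}
paths = [ncrp_path(tree, depth=3, gamma=1.0, rng=rng) for _ in range(10)]
print(paths)  # popular branches are reused; occasionally a new one opens
```

    The nHDP relaxes exactly the rigidity visible here: in the nCRP a document commits to a single path, whereas the nHDP lets each word take its own path over a shared tree.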

  15. Sound and Music in Narrative Multimedia : A macroscopic discussion of audiovisual relations and auditory narrative functions in film, television and video games

    OpenAIRE

    2012-01-01

    This thesis examines how we perceive an audiovisual narrative (here defined as film, television and video games) and seeks to establish a descriptive framework for auditory stimuli and their narrative functions in this regard. I initially adopt the viewpoint of cognitive psychology and account for basic information-processing operations. I then discuss audiovisual perception in terms of the effects of sensory integration between the visual and auditory modalities on the construction of meaning...

  16. Hierarchical Work-Stealing

    OpenAIRE

    Quintin, Jean-Noel; Wagner, Frédéric

    2009-01-01

    In this paper, we study the problem of dynamic load-balancing on heterogeneous hierarchical platforms. In particular, we consider applications involving heavy communications on a distributed platform. The work-stealing algorithm introduced by Blumofe and Leiserson is a commonly used technique to distribute load in a distributed environment, but it suffers from poor performance for some communication-intensive applications. We present several variants of this algorithm found...
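
    The work-stealing discipline discussed here can be sketched with per-worker deques: a worker takes tasks from the bottom of its own deque and, when it runs dry, steals from the top of a random victim's. The single-threaded simulation below shows only this queue discipline, not Blumofe and Leiserson's full multiprocessor scheduler.

```python
import random
from collections import deque

def run_work_stealing(task_lists, rng):
    """Simulate workers, one deque each; an idle worker steals
    a task from the front of a randomly chosen non-empty victim."""
    queues = [deque(tasks) for tasks in task_lists]
    done = [[] for _ in task_lists]
    while any(queues):
        for w, q in enumerate(queues):
            if q:                          # run own work from the bottom
                done[w].append(q.pop())
            else:                          # steal from the top of a victim
                victims = [v for v in queues if v]
                if victims:
                    done[w].append(rng.choice(victims).popleft())
    return done

rng = random.Random(1)
executed = run_work_stealing([["a1", "a2", "a3", "a4"], [], ["c1"]], rng)
print(executed)  # the initially idle worker stays busy by stealing
```

    Taking own work from one end and steals from the other keeps contention low, which is the property the hierarchical variants in the paper build on for communication-heavy workloads.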

  17. Hierarchical Reinforcement Learning

    OpenAIRE

    Borga, Magnus

    1993-01-01

    A hierarchical representation of the input-output transition function in a learning system is suggested. The choice between representing the knowledge in a learning system as a discrete set of input-output pairs or as a continuous input-output transition function is discussed. It is concluded that both representations can be efficient, but at different levels of abstraction. The difference between strategies and actions is defined. An algorithm for using adaptive critic methods in ...

  18. Long-term memory of hierarchical relationships in free-living greylag geese

    NARCIS (Netherlands)

    Weiss, Brigitte M.; Scheiber, Isabella B. R.

    2013-01-01

    Animals may memorise spatial and social information for many months and even years. Here, we investigated long-term memory of hierarchically ordered relationships, where the position of a reward depended on the relationship of a stimulus relative to other stimuli in the hierarchy. Seventeen greylag

  19. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    A great deal of the processing of incoming sounds to the auditory system occurs within the cochlea. The organ of Corti within the cochlea has differing mechanical properties along its length that broadly give rise to frequency selectivity. Its stiffness is at a maximum at the base and decreases...... time. Preliminary results are also given for an experiment using stimuli designed to compensate for OAE delays. These were designed to try and reproduce the success of similar stimuli now used routinely to improve ABR signal-to-noise ratio....

  20. Hierarchically Structured Electrospun Fibers

    Directory of Open Access Journals (Sweden)

    Nicole E. Zander

    2013-01-01

    Full Text Available Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

  1. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    Directory of Open Access Journals (Sweden)

    David B. Stone

    2014-11-01

    Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma-band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia.
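Time-frequency decomposition of the kind mentioned here is commonly implemented by convolving the signal with complex Morlet wavelets; below is a minimal numpy sketch of the generic method (not the authors' MEG pipeline; the sampling rate and frequencies are illustrative):

```python
import numpy as np

def morlet_power(signal, fs, freqs, n_cycles=7):
    """Power over time at each frequency via complex Morlet wavelet convolution."""
    power = np.empty((len(freqs), len(signal)))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)              # wavelet width in time
        t = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit energy
        analytic = np.convolve(signal, wavelet, mode="same")
        power[i] = np.abs(analytic) ** 2
    return power

# Illustrative example: a 40 Hz (gamma-band) oscillation plus noise should
# dominate the 40 Hz row of the time-frequency map.
fs = 500
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 40 * t) + 0.3 * np.random.default_rng(0).standard_normal(len(t))
freqs = np.array([10.0, 25.0, 40.0, 60.0])
P = morlet_power(sig, fs, freqs)
```

Averaging `P` over a time window (away from the convolution edges) and comparing rows gives the band-limited power that studies like this one compare across groups and conditions.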

  2. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal Amitay

    2012-11-01

    Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window, which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.
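The signal-detection framing can be made concrete: if the internal representation of frequency carries Gaussian noise of standard deviation σ, then d′ = Δf / σ, so the discrimination threshold at a fixed criterion d′ scales linearly with internal noise. A deliberately minimal sketch (the numbers are illustrative; the paper's actual simulations used a phase-locking model):

```python
def dprime(delta_f, sigma_internal):
    """Signal detection theory: discriminability of two tones separated by
    delta_f (Hz) when the internal frequency code has std sigma_internal (Hz)."""
    return delta_f / sigma_internal

def threshold(sigma_internal, criterion=1.0):
    """Smallest delta_f reaching the criterion d' (the threshold definition)."""
    return criterion * sigma_internal

# Learning modeled purely as reduced internal noise: halving the noise
# (hypothetically 4 Hz -> 2 Hz) halves the frequency discrimination threshold.
before, after = threshold(4.0), threshold(2.0)
```

This is the sense in which "reduction of internal noise" predicts improved thresholds; the asymmetric transfer in the abstract is what this one-parameter account fails to capture on its own.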

  3. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  4. Effect of Infant Prematurity on Auditory Brainstem Response at Preschool Age

    Directory of Open Access Journals (Sweden)

    Sara Hasani

    2013-03-01

    Introduction: Preterm birth is a risk factor for a number of conditions that require comprehensive examination. Our study was designed to investigate the impact of preterm birth on the processing of auditory stimuli and brain structures at the brainstem level at a preschool age. Materials and Methods: An auditory brainstem response (ABR) test was performed with low rates of stimuli in 60 children aged 4 to 6 years. Thirty subjects had been born following a very preterm or late-preterm labor and 30 control subjects had been born following a full-term labor. Results: Significant differences in the ABR test result were observed in terms of the inter-peak intervals of the I–III and III–V waves, and the absolute latency of the III wave (P

  5. Evolutionary adaptations for the temporal processing of natural sounds by the anuran peripheral auditory system.

    Science.gov (United States)

    Schrode, Katrina M; Bee, Mark A

    2015-03-01

    Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467

  6. A longitudinal study of auditory evoked field and language development in young children.

    Science.gov (United States)

    Yoshimura, Yuko; Kikuchi, Mitsuru; Ueno, Sanae; Shitamichi, Kiyomi; Remijn, Gerard B; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Furutani, Naoki; Oi, Manabu; Munesue, Toshio; Tsubokawa, Tsunehisa; Higashida, Haruhiro; Minabe, Yoshio

    2014-11-01

    The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months old at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural brain response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children. PMID:25067819

  7. Auditory Perceptual Learning for Speech Perception Can Be Enhanced by Audiovisual Training

    Directory of Open Access Journals (Sweden)

    Lynne E. Bernstein

    2013-03-01

    Speech perception under audiovisual conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how audiovisual training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures in a protocol with a fixed number of trials. In Experiment 1, paired-associates (PA) audiovisual (AV) training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early audiovisual speech integration can potentially impede auditory perceptual learning; but visual top-down access to relevant auditory features can promote auditory perceptual learning.

  8. Voiced-speech representation by an analog silicon model of the auditory periphery.

    Science.gov (United States)

    Liu, W; Andreou, A G; Goldstein, M H

    1992-01-01

    An analog CMOS integration of a model for the auditory periphery is presented. The model consists of middle ear, basilar membrane, and hair cell/synapse modules which are derived from neurophysiological studies. The circuit realization of each module is discussed, and experimental data of each module's response to sinusoidal excitation are given. The nonlinear speech processing capabilities of the system are demonstrated using the voiced syllable |ba|. The multichannel output of the silicon model corresponds to the time-varying instantaneous firing rates of auditory nerve fibers that have different characteristic frequencies. These outputs are similar to the physiologically obtained responses. The actual implementation uses subthreshold CMOS technology and analog continuous-time circuits, resulting in a real-time, micropower device with potential applications as a preprocessor of auditory stimuli. PMID:18276451
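The multichannel structure described (frequency-selective basilar-membrane filtering followed by a hair-cell nonlinearity) can be caricatured in software; the two-pole resonator and half-wave rectifier below are generic stand-ins, not the paper's subthreshold CMOS circuits:

```python
import math

def resonator(x, f0, fs, r=0.98):
    """Two-pole resonator as a crude basilar-membrane channel tuned to f0 (Hz)."""
    theta = 2 * math.pi * f0 / fs
    b1, b2 = 2 * r * math.cos(theta), -r * r
    y, y1, y2 = [], 0.0, 0.0
    for s in x:
        v = s + b1 * y1 + b2 * y2   # resonant IIR recursion (stable for r < 1)
        y.append(v)
        y1, y2 = v, y1
    return y

def cochlea_bank(x, fs, cfs):
    """Filter bank + half-wave rectification as a stand-in hair-cell stage,
    giving one 'firing rate'-like channel per characteristic frequency."""
    return [[max(v, 0.0) for v in resonator(x, cf, fs)] for cf in cfs]

# Illustrative input: a 440 Hz tone excites the 440 Hz channel most strongly.
fs = 8000
x = [math.sin(2 * math.pi * 440 * n / fs) for n in range(2000)]
channels = cochlea_bank(x, fs, [220.0, 440.0, 880.0])
energies = [sum(v * v for v in ch) for ch in channels]
```

The tonotopic output, one rectified channel per characteristic frequency, mirrors the "time-varying instantaneous firing rates of auditory nerve fibers" that the silicon model produces in analog hardware.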

  9. Amplification in the auditory periphery: The effect of coupling tuning mechanisms

    Science.gov (United States)

    Montgomery, K. A.; Silber, M.; Solla, S. A.

    2007-05-01

    A mathematical model describing the coupling between two independent amplification mechanisms in auditory hair cells is proposed and analyzed. Hair cells are cells in the inner ear responsible for translating sound-induced mechanical stimuli into an electrical signal that can then be recorded by the auditory nerve. In nonmammals, two separate mechanisms have been postulated to contribute to the amplification and tuning properties of the hair cells. Models of each of these mechanisms have been shown to be poised near a Hopf bifurcation. Through a weakly nonlinear analysis that assumes weak periodic forcing, weak damping, and weak coupling, the physiologically based models of the two mechanisms are reduced to a system of two coupled amplitude equations describing the resonant response. The predictions that follow from an analysis of the reduced equations, as well as performance benefits due to the coupling of the two mechanisms, are discussed and compared with published experimental auditory nerve data.
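The reduced system described, two amplitude equations coupled near a Hopf bifurcation under weak periodic forcing, generically takes the normal form below (schematic only; the coefficients in the paper are derived from the physiological hair-cell models):

```latex
\begin{aligned}
\dot{A}_1 &= (\mu_1 + i\,\Omega_1)\,A_1 - (\alpha_1 + i\,\beta_1)\,|A_1|^2 A_1 + c_{12}\,A_2 + F_1,\\
\dot{A}_2 &= (\mu_2 + i\,\Omega_2)\,A_2 - (\alpha_2 + i\,\beta_2)\,|A_2|^2 A_2 + c_{21}\,A_1 + F_2,
\end{aligned}
```

where $A_j$ are the complex amplitudes of the two mechanisms, $\mu_j$ their distances from the Hopf bifurcation, $\Omega_j$ detunings from the forcing frequency, $c_{jk}$ the coupling strengths, and $F_j$ the weak periodic forcing.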

  10. Multimodal information Management: Evaluation of Auditory and Haptic Cues for NextGen Communication Displays

    Science.gov (United States)

    Begault, Durand R.; Bittner, Rachel M.; Anderson, Mark R.

    2012-01-01

    Auditory communication displays within the NextGen data link system may use multiple synthetic speech messages replacing traditional ATC and company communications. The design of an interface for selecting amongst multiple incoming messages can impact both performance (time to select, audit and release a message) and preference. Two design factors were evaluated: physical pressure-sensitive switches versus flat panel "virtual switches", and the presence or absence of auditory feedback from switch contact. Performance with stimuli using physical switches was 1.2 s faster than virtual switches (2.0 s vs. 3.2 s); auditory feedback provided a 0.54 s performance advantage (2.33 s vs. 2.87 s). There was no interaction between these variables. Preference data were highly correlated with performance.

  11. Vibration-induced auditory-cortex activation in a congenitally deaf adult.

    Science.gov (United States)

    Levänen, S; Jousmäki, V; Hari, R

    1998-07-16

    Considerable changes take place in the number of cerebral neurons, synapses and axons during development, mainly as a result of competition between different neural activities [1-4]. Studies using animals suggest that when input from one sensory modality is deprived early in development, the affected neural structures have the potential to mediate functions for the remaining modalities [5-8]. We now show that similar potential exists in the human auditory system: vibrotactile stimuli, applied on the palm and fingers of a congenitally deaf adult, activated his auditory cortices. The recorded magnetoencephalographic (MEG) signals also indicated that the auditory cortices were able to discriminate between the applied 180 Hz and 250 Hz vibration frequencies. Our findings suggest that human cortical areas, normally subserving hearing, may process vibrotactile information in the congenitally deaf. PMID:9705933

  12. The unity assumption facilitates cross-modal binding of musical, non-speech stimuli: The role of spectral and amplitude envelope cues.

    Science.gov (United States)

    Chuen, Lorraine; Schutz, Michael

    2016-07-01

    An observer's inference that multimodal signals originate from a common underlying source facilitates cross-modal binding. This 'unity assumption' causes asynchronous auditory and visual speech streams to seem simultaneous (Vatakis & Spence, Perception & Psychophysics, 69(5), 744-756, 2007). Subsequent tests of non-speech stimuli such as musical and impact events found no evidence for the unity assumption, suggesting the effect is speech-specific (Vatakis & Spence, Acta Psychologica, 127(1), 12-23, 2008). However, the role of amplitude envelope (the changes in energy of a sound over time) was not previously appreciated within this paradigm. Here, we explore whether previous findings suggesting speech-specificity of the unity assumption were confounded by similarities in the amplitude envelopes of the contrasted auditory stimuli. Experiment 1 used natural events with clearly differentiated envelopes: single notes played on either a cello (bowing motion) or marimba (striking motion). Participants performed an unspeeded temporal order judgment task: viewing audio-visually matched (e.g., marimba auditory with marimba video) and mismatched (e.g., cello auditory with marimba video) versions of stimuli at various stimulus onset asynchronies, they were required to indicate which modality was presented first. As predicted, participants were less sensitive to temporal order in matched conditions, demonstrating that the unity assumption can facilitate the perception of synchrony outside of speech stimuli. Results from Experiments 2 and 3 revealed that when spectral information was removed from the original auditory stimuli, amplitude envelope alone could not facilitate the influence of audiovisual unity. We propose that both amplitude envelope and spectral acoustic cues affect the percept of audiovisual unity, working in concert to help an observer determine when to integrate across modalities.
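Temporal order judgment data of the kind collected here are conventionally analyzed by fitting a cumulative Gaussian to the proportion of "audio first" responses across stimulus onset asynchronies, with sensitivity summarized by the just-noticeable difference (JND). A self-contained sketch of that generic analysis (the SOA values and grid ranges are illustrative, not the study's):

```python
import math

def cum_gauss(soa, pss, jnd):
    """Psychometric function for a TOJ task: probability of judging the
    auditory stream first at a given SOA (ms). jnd sets the slope; pss is
    the point of subjective simultaneity."""
    return 0.5 * (1 + math.erf((soa - pss) / (jnd * math.sqrt(2))))

def fit_jnd(soas, p_audio_first):
    """Crude grid-search fit of PSS and JND to observed proportions."""
    best = None
    for pss in range(-50, 51, 5):
        for jnd in range(10, 201, 5):
            err = sum((cum_gauss(s, pss, jnd) - p) ** 2
                      for s, p in zip(soas, p_audio_first))
            if best is None or err < best[0]:
                best = (err, pss, jnd)
    return best[1], best[2]

# Synthetic observer with true PSS = 0 ms and JND = 50 ms; a *larger* fitted
# JND means lower sensitivity to temporal order, as in the matched conditions.
soas = [-200, -100, -50, 0, 50, 100, 200]
data = [cum_gauss(s, 0, 50) for s in soas]
pss, jnd = fit_jnd(soas, data)
```

Comparing fitted JNDs between matched and mismatched conditions is how "less sensitive to temporal order in matched conditions" would be quantified in this framework.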

  13. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011 2 479-489 DOI: 10.1002/wcs.123 For further resources related to this article, please visit the WIREs website. PMID:26302301
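Cue weighting in categorization is often modeled as a weighted linear combination of cues passed through a logistic function; a minimal sketch of that generic model (the cue names and weights are hypothetical, not from the review):

```python
import math

def categorize_prob(cue1, cue2, w1, w2, bias=0.0):
    """Cue-weighting model: each (normalized) acoustic cue, e.g. a spectral
    and a temporal cue, contributes linearly; a logistic squashes the sum
    into P(category A)."""
    z = w1 * cue1 + w2 * cue2 + bias
    return 1 / (1 + math.exp(-z))

# A listener weighting cue1 heavily and cue2 weakly: when the cues conflict,
# the decision follows the dominant cue.
p_conflict = categorize_prob(cue1=1.0, cue2=-1.0, w1=3.0, w2=0.5)
```

Fitting the weights to listeners' responses (e.g., by logistic regression) is the standard way such studies estimate how strongly each acoustic cue drives the categorization decision.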

  14. Detection, discrimination, and sensation of visceral stimuli

    OpenAIRE

    Hölzl, Rupert; Erasmus, Lutz-Peter; Möltner, Andreas; Samay, Sebastian; Waldmann, Hans-Christian; Neidig, Claus W.

    1994-01-01

    Examines the interoception of gastrointestinal stimuli. A total of 48 subjects participated in the study that used an adaptive up-down tracking method of threshold determination of distensions to the colon wall. Subjects were presented with two temporal intervals, and the stimulus was applied to one of the intervals. Then they were required to give behavioral and subjective responses to perceived distension stimuli in the lower bowel segments. It is concluded that detection of stimuli is poss...

  15. Altered Neural Responses to Sounds in Primate Primary Auditory Cortex during Slow-Wave Sleep

    OpenAIRE

    Issa, Elias B.; Wang, Xiaoqin

    2011-01-01

    How sounds are processed by the brain during sleep is an important question for understanding how we perceive the sensory environment in this unique behavioral state. While human behavioral data have indicated selective impairments of sound processing during sleep, brain imaging and neurophysiology studies have reported that overall neural activity in auditory cortex during sleep is surprisingly similar to that during wakefulness. This responsiveness to external stimuli leaves open the questi...

  16. Increased Signal Complexity Improves the Breadth of Generalization in Auditory Perceptual Learning

    OpenAIRE

    Brown, David J.; Proulx, Michael J.

    2013-01-01

    Perceptual learning can be specific to a trained stimulus or optimally generalized to novel stimuli with the breadth of generalization being imperative for how we structure perceptual training programs. Adapting an established auditory interval discrimination paradigm to utilise complex signals, we trained human adults on a standard interval for either 2, 4, or 10 days. We then tested the standard, alternate frequency, interval, and stereo input conditions to evaluate the rapidity of specifi...

  17. Influence of cortical descending pathways on neuronal adaptation in the auditory midbrain

    OpenAIRE

    Robinson, B. L.

    2014-01-01

    Adaptation of the spike rate of sensory neurones is associated with alteration in neuronal representation of a wide range of stimuli, including sound level, visual contrast, and whisker vibrissa motion. In the inferior colliculus (IC) of the auditory midbrain, adaptation may allow neurones to adjust their limited representational range to match the current range of sound levels in the environment. Two outstanding questions concern the rapidity of this adaptation in IC, and the mechanisms unde...

  18. Near-infrared spectroscopic imaging of stimulus-related hemodynamic responses on the neonatal auditory cortices

    Science.gov (United States)

    Kotilahti, Kalle; Nissila, Ilkka; Makela, Riikka; Noponen, Tommi; Lipiainen, Lauri; Gavrielides, Nasia; Kajava, Timo; Huotilainen, Minna; Fellman, Vineta; Merilainen, Pekka; Katila, Toivo

    2005-04-01

    We have used near-infrared spectroscopy (NIRS) to study hemodynamic auditory evoked responses in 7 full-term neonates. Measurements were made simultaneously above both auditory cortices to study the distribution of speech and music processing between hemispheres using a 16-channel frequency-domain instrument. The stimulation consisted of 5-second samples of music and speech with a 25-second silent interval. In response to stimulation, a significant increase in the concentration of oxygenated hemoglobin ([HbO2]) was detected in 6 out of 7 subjects. The strongest responses in [HbO2] were seen near the measurement location above the ear on both hemispheres. The mean latency of the maximum responses was 9.42 ± 1.51 s. On the left hemisphere (LH), the maximum amplitude of the average [HbO2] response to the music stimuli was 0.76 ± 0.38 μM (mean ± SD) and to the speech stimuli 1.00 ± 0.45 μM. On the right hemisphere (RH), the maximum amplitude of the average [HbO2] response was 1.29 ± 0.85 μM to the music stimuli and 1.23 ± 0.93 μM to the speech stimuli. The results indicate that auditory information is processed on both auditory cortices, but the LH is more specialized for processing speech than music information. No significant differences in the locations and latencies of the maximum responses relative to the stimulus type were found.

  19. Neural coding and perception of pitch in the normal and impaired human auditory system

    OpenAIRE

    Santurette, Sébastien; Dau, Torsten; Buchholz, Jörg; Wouters, Jan; Andrew J Oxenham

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifica...

  20. Polarity sensitivity of the electrically stimulated auditory nerve at different cochlear sites

    OpenAIRE

    Undurraga Lucero, Jaime; van Wieringen, Astrid; Carlyon, Robert P.; Macherey, Olivier; Wouters, Jan

    2009-01-01

    Commercially available cochlear implants (CIs) stimulate the auditory nerve (AN) using symmetric biphasic current (BP) pulses. However, recent data have shown that the use of asymmetric pulse shapes could be beneficial in terms of reducing power consumption, increasing dynamic range and limiting channel interactions. In these charge-balanced stimuli, the effectiveness of one phase (one polarity) is reduced by making it longer and lower in amplitude than the other. For the design of novel CI s...

  1. Improved Electrically Evoked Auditory Steady-State Response Thresholds in Humans

    OpenAIRE

    Hofmann, Michael; Wouters, Jan

    2012-01-01

    Electrically evoked auditory steady-state responses (EASSRs) are EEG potentials in response to periodic electrical stimuli presented through a cochlear implant. For low-rate pulse trains in the 40-Hz range, electrophysiological thresholds derived from response amplitude growth functions correlate well with behavioral T levels at these rates. The aims of this study were: (1) to improve the correlation between electrophysiological thresholds and behavioral T levels at 900 pps by using amplitude...

  2. Spectral vs. temporal auditory processing in Specific Language Impairment: A developmental ERP study

    OpenAIRE

    Čeponienė, R.; Cummings, A.; Wulfeck, B.; Ballantyne, A; Townsend, J.

    2009-01-01

    Pre-linguistic sensory deficits, especially in “temporal” processing, have been implicated in developmental Language Impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral processing, and their interaction, in typical children and those with LI (7–17 years; n=25 per group). The stimuli were 3 CV syllables and 3 consonant-to-vow...

  3. The nature of auditory discrimination problems in children with specific language impairment: An MMN study

    OpenAIRE

    2011-01-01

    Many children with Specific Language Impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied by using a linguistic and nonlinguistic contrast that were matched for acoustic complexity in an active behavioral task and a passive ERP paradigm, known to elicit the mismatch negativity (MMN). In addition, attention skills and a variety ...

  4. Brainstem encoding of speech and musical stimuli in congenital amusia: Evidence from Cantonese speakers

    Directory of Open Access Journals (Sweden)

    Fang Liu

    2015-01-01

    Congenital amusia is a neurodevelopmental disorder of musical processing that also impacts subtle aspects of speech processing. It remains debated at what stage(s) of auditory processing deficits in amusia arise. In this study, we investigated whether amusia originates from impaired subcortical encoding of speech (in quiet and noise) and musical sounds in the brainstem. Fourteen Cantonese-speaking amusics and 14 matched controls passively listened to six Cantonese lexical tones in quiet, two Cantonese tones in noise (signal-to-noise ratios at 0 and 20 dB), and two cello tones in quiet while their frequency-following responses (FFRs) to these tones were recorded. All participants also completed a behavioral lexical tone identification task. The results indicated normal brainstem encoding of pitch in speech (in quiet and noise) and musical stimuli in amusics relative to controls, as measured by FFR pitch strength, pitch error, and stimulus-to-response correlation. There was also no group difference in neural conduction time or FFR amplitudes. Both groups demonstrated better FFRs to speech (in quiet and noise) than to musical stimuli. However, a significant group difference was observed for tone identification, with amusics showing significantly lower accuracy than controls. Analysis of the tone confusion matrices suggested that amusics were more likely than controls to confuse tones that shared similar acoustic features. Interestingly, this deficit in lexical tone identification was not coupled with brainstem abnormality for either speech or musical stimuli. Together, our results suggest that the amusic brainstem is not functioning abnormally, although higher-order linguistic pitch processing is impaired in amusia. This finding has significant implications for theories of central auditory processing, requiring further investigations into how different stages of auditory processing interact in the human brain.
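
The FFR fidelity measures named in this record (notably the stimulus-to-response correlation) are typically computed by correlating the stimulus waveform with the neural response shifted by a candidate neural lag. A minimal sketch, assuming a plain Pearson correlation maximized over lags; the authors' exact analysis pipeline is not specified here:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length waveforms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def stimulus_to_response_corr(stim, resp, max_lag):
    """Best Pearson r over candidate neural lags, treating the response
    as a delayed (and possibly attenuated) copy of the stimulus."""
    best = -1.0
    for lag in range(max_lag + 1):
        seg = resp[lag:lag + len(stim) - max_lag]
        best = max(best, pearson_r(stim[:len(seg)], seg))
    return best
```

A response that is a delayed, scaled copy of the stimulus correlates near 1.0 at the correct lag, which is why the metric is read as encoding fidelity.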

  5. Visual Task Demands and the Auditory Mismatch Negativity: An Empirical Study and a Meta-Analysis

    Science.gov (United States)

    Wiens, Stefan; Szychowska, Malina; Nilsson, Mats E.

    2016-01-01

    Because the auditory system is particularly useful in monitoring the environment, previous research has examined whether task-irrelevant auditory distractors are processed even if subjects focus their attention on visual stimuli. This research suggests that attentionally demanding visual tasks decrease the auditory mismatch negativity (MMN) to simultaneously presented auditory distractors. Because a recent behavioral study found that high visual perceptual load decreased detection sensitivity of simultaneous tones, we used a similar task (n = 28) to determine if high visual perceptual load would reduce the auditory MMN. Results suggested that perceptual load did not decrease the MMN. At face value, these nonsignificant findings may suggest that effects of perceptual load on the MMN are smaller than those of other demanding visual tasks. If so, effect sizes should differ systematically between the present and previous studies. We conducted a selective meta-analysis of published studies in which the MMN was derived from the EEG, the visual task demands were continuous and varied between high and low within the same task, and the task-irrelevant tones were presented in a typical oddball paradigm simultaneously with the visual stimuli. Because the meta-analysis suggested that the present (null) findings did not differ systematically from previous findings, the available evidence was combined. Results of this meta-analysis confirmed that demanding visual tasks reduce the MMN to auditory distractors. However, because the meta-analysis was based on small studies and because of the risk for publication biases, future studies should be preregistered with large samples (n > 150) to provide confirmatory evidence for the results of the present meta-analysis. These future studies should also use control conditions that reduce confounding effects of neural adaptation, and use load manipulations that are defined independently from their effects on the MMN. PMID:26741815
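
The combining step of such a meta-analysis is commonly an inverse-variance-weighted average of study effect sizes. Whether this study used a fixed- or random-effects model is not stated in the record, so the sketch below assumes the simpler fixed-effect case:

```python
def fixed_effect_meta(effects, standard_errors):
    """Inverse-variance-weighted pooled effect size and its standard
    error (fixed-effect model): precise studies get large weights."""
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se
```

For example, two hypothetical studies with effects 0.5 (SE 0.1) and 0.3 (SE 0.2) pool to 0.46, pulled toward the more precise study.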

  6. Impact of olfactory and auditory priming on the attraction to foods with high energy density.

    Science.gov (United States)

    Chambaron, S; Chisin, Q; Chabanet, C; Issanchou, S; Brand, G

    2015-12-01

    Recent research suggests that non-attentively perceived stimuli may significantly influence consumers' food choices. The main objective of the present study was to determine whether an olfactory prime (a sweet-fatty odour) and a semantic auditory prime (a nutritional prevention message), both presented incidentally, either alone or in combination can influence subsequent food choices. The experiment included 147 participants who were assigned to four different conditions: a control condition, a scented condition, an auditory condition or an auditory-scented condition. All participants remained in the waiting room for 15 min while they performed a 'lure' task. For the scented condition, the participants were unobtrusively exposed to a 'pain au chocolat' odour. Those in the auditory condition were exposed to an audiotape including radio podcasts and a nutritional message. A third group of participants was exposed to both olfactory and auditory stimuli simultaneously. In the control condition, no stimulation was given. Following this waiting period, all participants moved into a non-odorised test room where they were asked to choose, from dishes served buffet-style, the starter, main course and dessert that they would actually eat for lunch. The results showed that the participants primed with the odour of 'pain au chocolat' tended to choose more desserts with high energy density (i.e., a waffle) than the participants in the control condition (p = 0.06). Unexpectedly, the participants primed with the nutritional auditory message chose to consume more desserts with high energy density than the participants in the control condition (p = 0.03). In the last condition (odour and nutritional message), they chose to consume more desserts with high energy density than the participants in the control condition (p = 0.01), and the data reveal an additive effect of the two primes. PMID:26119807

  7. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  8. Atypical auditory refractory periods in children from lower socio-economic status backgrounds: ERP evidence for a role of selective attention.

    Science.gov (United States)

    Stevens, Courtney; Paulsen, David; Yasen, Alia; Neville, Helen

    2015-02-01

    Previous neuroimaging studies indicate that lower socio-economic status (SES) is associated with reduced effects of selective attention on auditory processing. Here, we investigated whether lower SES is also associated with differences in a stimulus-driven aspect of auditory processing: the neural refractory period, or reduced amplitude response at faster rates of stimulus presentation. Thirty-two children aged 3 to 8 years participated, and were divided into two SES groups based on maternal education. Event-related brain potentials were recorded to probe stimuli presented at interstimulus intervals (ISIs) of 200, 500, or 1000 ms. These probes were superimposed on story narratives when attended and ignored, permitting a simultaneous experimental manipulation of selective attention. Results indicated that group differences in refractory periods differed as a function of attention condition. Children from higher SES backgrounds showed full neural recovery by 500 ms for attended stimuli, but required at least 1000 ms for unattended stimuli. In contrast, children from lower SES backgrounds showed similar refractory effects to attended and unattended stimuli, with full neural recovery by 500 ms. Thus, in higher SES children only, one functional consequence of selective attention is attenuation of the response to unattended stimuli, particularly at rapid ISIs, altering basic properties of the auditory refractory period. Together, these data indicate that differences in selective attention impact basic aspects of auditory processing in children from lower SES backgrounds. PMID:25003553

  9. Auditory and non-auditory effects of noise on health.

    Science.gov (United States)

    Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen

    2014-04-12

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105

  10. Virtual adult ears reveal the roles of acoustical factors and experience in auditory space map development.

    Science.gov (United States)

    Campbell, Robert A A; King, Andrew J; Nodal, Fernando R; Schnupp, Jan W H; Carlile, Simon; Doubell, Timothy P

    2008-11-01

    Auditory neurons in the superior colliculus (SC) respond preferentially to sounds from restricted directions to form a map of auditory space. The development of this representation is shaped by sensory experience, but little is known about the relative contribution of peripheral and central factors to the emergence of adult responses. By recording from the SC of anesthetized ferrets at different age points, we show that the map matures gradually after birth; the spatial receptive fields (SRFs) become more sharply tuned and topographic order emerges by the end of the second postnatal month. Principal components analysis of the head-related transfer function revealed that the time course of map development is mirrored by the maturation of the spatial cues generated by the growing head and external ears. However, using virtual acoustic space stimuli, we show that these acoustical changes are not by themselves responsible for the emergence of SC map topography. Presenting stimuli to infant ferrets through virtual adult ears did not improve the order in the representation of sound azimuth in the SC. But by using linear discriminant analysis to compare different response properties across age, we found that the SRFs of infant neurons nevertheless became more adult-like when stimuli were delivered through virtual adult ears. Hence, although the emergence of auditory topography is likely to depend on refinements in neural circuitry, maturation of the structure of the SRFs (particularly their spatial extent) can be largely accounted for by changes in the acoustics associated with growth of the head and ears. PMID:18987192

  11. Aberrant interference of auditory negative words on attention in patients with schizophrenia.

    Directory of Open Access Journals (Sweden)

    Norichika Iwashiro

    Previous research suggests that deficits in attention-emotion interaction are implicated in schizophrenia symptoms. Although disruption in auditory processing is crucial in the pathophysiology of schizophrenia, deficits in interaction between emotional processing of auditorily presented language stimuli and auditory attention have not yet been clarified. To address this issue, the current study used a dichotic listening task to examine 22 patients with schizophrenia and 24 age-, sex-, parental socioeconomic background-, handedness-, dexterous ear-, and intelligence quotient-matched healthy controls. The participants completed a word recognition task on the attended side in which a word with emotionally valenced content (negative/positive/neutral) was presented to one ear and a different neutral word was presented to the other ear. Participants selectively attended to either ear. In the control subjects, presentation of negative but not positive word stimuli provoked a significantly prolonged reaction time compared with presentation of neutral word stimuli. This interference effect for negative words existed whether or not subjects directed attention to the negative words. This interference effect was significantly smaller in the patients with schizophrenia than in the healthy controls. Furthermore, the smaller interference effect was significantly correlated with severe positive symptoms and delusional behavior in the patients with schizophrenia. The present findings suggest that aberrant interaction between semantic processing of negative emotional content and auditory attention plays a role in production of positive symptoms in schizophrenia.

  12. Auditory event-related responses to diphthongs in different attention conditions.

    Science.gov (United States)

    Morris, David J; Steinmetzger, Kurt; Tøndering, John

    2016-07-28

    The modulation of auditory event-related potentials (ERP) by attention generally results in larger amplitudes when stimuli are attended. We measured the P1-N1-P2 acoustic change complex elicited with synthetic overt (second formant, F2Δ=1000Hz) and subtle (F2Δ=100Hz) diphthongs, while subjects (i) attended to the auditory stimuli, (ii) ignored the auditory stimuli and watched a film, and (iii) diverted their attention to a visual discrimination task. Responses elicited by diphthongs where F2 values rose and fell were found to be different and this precluded their combined analysis. Multivariate analysis of ERP components from the rising F2 changes showed main effects of attention on P2 amplitude and latency, and N1-P2 amplitude. P2 amplitude decreased by 40% between the attend and ignore conditions, and by 60% between the attend and divert conditions. The effect of diphthong magnitude was significant for components from a broader temporal window which included P1 latency and N1 amplitude. N1 latency did not vary between attention conditions, a finding that may be related to stimulation with a continuous vowel. These data show that a discernible P1-N1-P2 response can be observed to subtle vowel quality transitions, even when the attention of a subject is diverted to an unrelated visual task. PMID:27158036

  13. Hierarchical image enhancement

    Science.gov (United States)

    Qi, Wei; Han, Jing; Zhang, Yi; Bai, Lian-fa

    2016-05-01

    Image enhancement is an important technique in computer vision. In this paper, we propose a hierarchical image enhancement approach based on a structure layer and a texture layer. In the structure layer, we propose a GMM-based method that better exploits structure details with less noise. In the texture layer, we present a structure-filtering method that removes unwanted texture while keeping the completeness of the detected salient structure. Next, we introduce a structure constraint prior to integrate the two layers, leading to an improved enhancement result. Extensive experiments demonstrate that the proposed approach achieves higher quality results than previous approaches.
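
The structure/texture split described above can be illustrated with a crude 1-D stand-in, where a moving average plays the role of the structure layer and the residual is the texture layer. This is only a toy under those assumptions; the paper's GMM-based layer extraction is considerably more sophisticated:

```python
def split_structure_texture(signal, radius):
    """Crude layer split on a 1-D signal: structure = moving average,
    texture = residual, so structure + texture reconstructs the signal."""
    n = len(signal)
    structure = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        structure.append(sum(signal[lo:hi]) / (hi - lo))
    texture = [x - s for x, s in zip(signal, structure)]
    return structure, texture

def enhance(signal, radius=2, boost=1.5):
    """Recombine the layers, amplifying the texture layer to sharpen
    fine detail while leaving coarse structure untouched."""
    structure, texture = split_structure_texture(signal, radius)
    return [s + boost * t for s, t in zip(structure, texture)]
```

With `boost=1.0` the recombination reproduces the input, which is a quick sanity check that the decomposition is lossless.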

  14. Neural Correlates of an Auditory Afterimage in Primary Auditory Cortex

    OpenAIRE

    Noreña, A. J.; Eggermont, J. J.

    2003-01-01

    The Zwicker tone (ZT) is defined as an auditory negative afterimage, perceived after the presentation of an appropriate inducer. Typically, a notched noise (NN) with a notch width of 1/2 octave induces a ZT with a pitch falling in the frequency range of the notch. The aim of the present study was to find potential neural correlates of the ZT in the primary auditory cortex of ketamine-anesthetized cats. Responses of multiunits were recorded simultaneously with two 8-electrode arrays during 1 s...

  15. Facial reactions in response to dynamic emotional stimuli in different modalities in patients suffering from schizophrenia: a behavioral and EMG study

    Directory of Open Access Journals (Sweden)

    Mariateresa Sestito

    2013-07-01

    Emotional facial expression is an important low-level mechanism contributing to the experience of empathy, thereby lying at the core of social interaction. Schizophrenia is associated with pervasive social cognitive impairments, including emotional processing of facial expressions. In this study we test a novel paradigm in order to investigate the evaluation of the emotional content of perceived emotions presented through dynamic expressive stimuli, the facial mimicry evoked by the same stimuli, and their functional relation. Fifteen healthy controls and 15 patients diagnosed with schizophrenia were presented with stimuli portraying positive (laugh), negative (cry) and neutral (control) emotional stimuli in visual and auditory modalities in isolation, and congruently or incongruently associated. Participants were requested to recognize and quantitatively rate the emotional value of the perceived stimuli, while electromyographic activity of the Corrugator and Zygomaticus muscles was recorded. All participants correctly judged the perceived emotional stimuli and prioritized the visual over the auditory modality in identifying the emotion when they were incongruently associated (Audio-Video Incongruent condition). The neutral emotional stimuli did not evoke any muscle responses and were judged by all participants as emotionally neutral. The control group responded with rapid and congruent mimicry to emotional stimuli, and in the Incongruent condition muscle responses were driven by what participants saw rather than by what they heard. The patient group showed a similar pattern only with respect to negative stimuli, whereas it showed a lack of, or a non-specific, Zygomaticus response when positive stimuli were presented. Finally, we found that only patients with reduced facial mimicry (Internalizers) judged both positive and negative emotions as significantly more neutral than controls. The relevance of these findings for studying emotional deficits in schizophrenia is discussed.

  16. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard;

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose....... On synthetic and real data we demonstrate that our model can detect hierarchical structure leading to better link-prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network....

  17. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  18. Hierarchical partial order ranking

    International Nuclear Information System (INIS)

    Assessing the potential impact on environmental and human health from the production and use of chemicals or from polluted sites involves a multi-criteria evaluation scheme. A priori, several parameters must be addressed, e.g., production tonnage, specific release scenarios, geographical and site-specific factors in addition to various substance-dependent parameters. Further socio-economic factors may be taken into consideration. The number of parameters to be included may well appear to be prohibitive for developing a sensible model. The study introduces hierarchical partial order ranking (HPOR) that remedies this problem. By HPOR the original parameters are initially grouped based on their mutual connection and a set of meta-descriptors is derived representing the ranking corresponding to the single groups of descriptors, respectively. A second partial order ranking is carried out based on the meta-descriptors, the final ranking being disclosed through average ranks. An illustrative example on the prioritisation of polluted sites is given. - Hierarchical partial order ranking of polluted sites has been developed for prioritization based on a large number of parameters
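
The "average ranks" step of a partial order ranking can be approximated with a closed-form local formula (an LPOM0-style approximation; this is an assumption, since the study's exact averaging procedure is not given in the record):

```python
def dominates(a, b):
    """a is above b in the partial order if every descriptor of a is
    >= the corresponding descriptor of b (and the tuples differ)."""
    return a != b and all(x >= y for x, y in zip(a, b))

def approximate_average_ranks(objects):
    """LPOM0-style approximation of average ranks:
    rank(x) = (S + 1)(N + 1) / (N + 1 - U), where S counts objects
    below x and U counts objects incomparable to x."""
    n = len(objects)
    ranks = {}
    for name, d in objects.items():
        below = sum(1 for o, e in objects.items()
                    if o != name and dominates(d, e))
        incomp = sum(1 for o, e in objects.items()
                     if o != name and not dominates(d, e) and not dominates(e, d))
        ranks[name] = (below + 1) * (n + 1) / (n + 1 - incomp)
    return ranks
```

On a totally ordered chain the formula reduces to the exact ranks, which makes it easy to sanity-check; incomparable objects pull ranks toward the middle.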

  19. Psychological and psychophysiological effects of auditory and visual stimuli during various modes of exercise

    OpenAIRE

    Jones, Leighton

    2014-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This research programme had three principal objectives. First, to assess the stability of the exercise heart rate-music tempo preference relationship and its relevance to a range of psychological outcomes. Second, to explore the influence of two personal factors (motivational orientation and dominant attentional style) in a naturalistic exercise-to-music setting. Third, to examine me...

  20. Auditory Brainstem Response Wave Amplitude Characteristics as a Diagnostic Tool in Children with Speech Delay with Unknown Causes.

    Science.gov (United States)

    Abadi, Susan; Khanbabaee, Ghamartaj; Sheibani, Kourosh

    2016-09-01

    Speech delay with an unknown cause is a problem among children. This diagnosis is the last differential diagnosis after observing normal findings in routine hearing tests. The present study was undertaken to determine whether auditory brainstem responses to click stimuli are different between normally developing children and children suffering from delayed speech with unknown causes. In this cross-sectional study, we compared click auditory brainstem responses between 261 children who were clinically diagnosed with delayed speech with unknown causes based on normal routine auditory test findings and neurological examinations and had >12 months of speech delay (case group) and 261 age- and sex-matched normally developing children (control group). Our results indicated that the case group exhibited significantly higher wave amplitude responses to click stimuli (waves I, III, and V) than did the control group (P=0.001). These amplitudes were significantly reduced after 1 year (P=0.001); however, they were still significantly higher than those of the control group (P=0.001). The significant differences were seen regardless of the age and the sex of the participants. There were no statistically significant differences between the 2 groups considering the latency of waves I, III, and V. In conclusion, the higher amplitudes of waves I, III, and V, which were observed in the auditory brainstem responses to click stimuli among the patients with speech delay with unknown causes, might be used as a diagnostic tool to track patients' improvement after treatment. PMID:27582591

  1. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism: An EEG Power and BOLD fMRI Investigation.

    Science.gov (United States)

    Hames, Elizabeth' C; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C; Baker, Mary; Zupancic, Stephen; O'Boyle, Michael; Richman, David

    2016-01-01

    Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20-28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions; reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all auditory block vs. the second presentation of a visual stimulus in an all visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs. PMID:27148020

  3. Mismatch responses in the awake rat: evidence from epidural recordings of auditory cortical fields.

    Directory of Open Access Journals (Sweden)

    Fabienne Jung

    Detecting sudden environmental changes is crucial for the survival of humans and animals. In the human auditory system the mismatch negativity (MMN), a component of auditory evoked potentials (AEPs), reflects the violation of predictable stimulus regularities, established by the previous auditory sequence. Given the considerable potential of the MMN for clinical applications, establishing valid animal models that allow for detailed investigation of its neurophysiological mechanisms is important. Rodent studies, so far almost exclusively under anesthesia, have not provided decisive evidence whether an MMN analogue exists in rats. This may be due to several factors, including the effect of anesthesia. We therefore used epidural recordings in awake black hooded rats, from two auditory cortical areas in both hemispheres, and with bandpass-filtered noise stimuli that were optimized in frequency and duration for eliciting MMN in rats. Using a classical oddball paradigm with frequency deviants, we detected mismatch responses at all four electrodes in primary and secondary auditory cortex, with morphological and functional properties similar to those known in humans, i.e., large-amplitude biphasic differences that increased in amplitude with decreasing deviant probability. These mismatch responses significantly diminished in a control condition that removed the predictive context while controlling for presentation rate of the deviants. While our present study does not allow for disambiguating precisely the relative contribution of adaptation and prediction error processing to the observed mismatch responses, it demonstrates that MMN-like potentials can be obtained in awake and unrestrained rats.
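
The classical oddball paradigm used here interleaves rare deviants among standards. A sketch of such a sequence generator follows, with a hypothetical minimum-gap constraint between deviants (a common practice, but the exact constraints used in this study are not given in the record):

```python
import random

def oddball_sequence(n, deviant_prob, standard="STD", deviant="DEV",
                     min_gap=2, seed=0):
    """Oddball stimulus sequence: rare deviants among standards, with at
    least `min_gap` standards between successive deviants."""
    rng = random.Random(seed)  # seeded for a reproducible sequence
    seq, since_last = [], min_gap
    for _ in range(n):
        if since_last >= min_gap and rng.random() < deviant_prob:
            seq.append(deviant)
            since_last = 0
        else:
            seq.append(standard)
            since_last += 1
    return seq
```

Lowering `deviant_prob` makes deviants rarer, which is the manipulation behind the "amplitude increases with decreasing deviant probability" finding above.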

  4. The modality effect of ego depletion: Auditory task modality reduces ego depletion.

    Science.gov (United States)

    Li, Qiong; Wang, Zhenhong

    2016-08-01

    An initial act of self-control that impairs subsequent acts of self-control is called ego depletion. The ego depletion phenomenon has been observed consistently. The modality effect refers to the effect of the presentation modality on the processing of stimuli. The modality effect was also robustly found in a large body of research. However, no study to date has examined the modality effects of ego depletion. This issue was addressed in the current study. In Experiment 1, after all participants completed a handgrip task, one group's participants completed a visual attention regulation task and the other group's participants completed an auditory attention regulation task, and then all participants again completed a handgrip task. The ego depletion phenomenon was observed in both the visual and the auditory attention regulation task. Moreover, participants who completed the visual task performed worse on the handgrip task than participants who completed the auditory task, indicating greater ego depletion in the visual task condition. In Experiment 2, participants completed an initial task that either did or did not deplete self-control resources, and then they completed a second visual or auditory attention control task. The results indicated that depleted participants performed better on the auditory attention control task than on the visual attention control task. These findings suggest that altering task modality may reduce ego depletion. PMID:27241617

  5. Visual-auditory differences in duration discrimination of intervals in the subsecond and second range

    Directory of Open Access Journals (Sweden)

    Thomas eRammsayer

    2015-10-01

    Full Text Available A common finding in time psychophysics is that temporal acuity is much better for auditory than for visual stimuli. The present study aimed to examine modality-specific differences in duration discrimination within the conceptual framework of the Distinct Timing Hypothesis. This theoretical account proposes that durations in the lower milliseconds range are processed automatically, while longer durations are processed by a cognitive mechanism. A sample of 46 participants performed auditory and visual duration discrimination tasks with extremely brief (50-ms standard duration) and longer (1000-ms standard duration) intervals. Better discrimination performance for auditory compared to visual intervals was established for both extremely brief and longer intervals. However, when performance on duration discrimination of longer intervals in the one-second range was controlled for modality-specific input from the sensory-automatic timing mechanism, the visual-auditory difference disappeared completely, as indicated by virtually identical Weber fractions for both sensory modalities. These findings support the idea of a sensory-automatic mechanism underlying the observed visual-auditory differences in duration discrimination of extremely brief intervals in the millisecond range and longer intervals in the one-second range. Our data are consistent with the notion of a gradual transition from a purely modality-specific, sensory-automatic timing mechanism to a more cognitive, amodal one. Within this transition zone, both mechanisms appear to operate simultaneously, but the influence of the sensory-automatic timing mechanism is expected to decrease continuously with increasing interval duration.
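The Weber-fraction comparison the abstract relies on can be made concrete with a short sketch (the threshold values below are illustrative placeholders, not the study's data):

```python
# Weber fraction for duration discrimination: the difference threshold
# divided by the standard duration. All numbers here are hypothetical.

def weber_fraction(threshold_ms: float, standard_ms: float) -> float:
    """Return delta-T / T for a duration discrimination task."""
    return threshold_ms / standard_ms

# Comparing the two standards used in the study: equal Weber fractions mean
# equal *relative* acuity even though absolute thresholds differ greatly.
wf_short = weber_fraction(7.5, 50.0)     # hypothetical 7.5 ms threshold at 50 ms
wf_long = weber_fraction(55.0, 1000.0)   # hypothetical 55 ms threshold at 1000 ms
print(wf_short, wf_long)  # 0.15 0.055
```

"Virtually identical Weber fractions" across modalities, as reported for the one-second range, would mean this ratio matches for auditory and visual intervals once sensory-automatic input is controlled for.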

  6. Assembly of the auditory circuitry by a Hox genetic network in the mouse brainstem.

    Directory of Open Access Journals (Sweden)

    Maria Di Bonito

    Full Text Available Rhombomeres (r) contribute to brainstem auditory nuclei during development. Hox genes are determinants of rhombomere-derived fate and neuronal connectivity. Little is known about the contribution of individual rhombomeres and their associated Hox codes to auditory sensorimotor circuitry. Here, we show that r4 contributes to functionally linked sensory and motor components, including the ventral nucleus of the lateral lemniscus, the posterior ventral cochlear nuclei (VCN), and motor olivocochlear neurons. Assembly of the r4-derived auditory components is involved in sound perception and depends on regulatory interactions between Hoxb1 and Hoxb2. Indeed, in Hoxb1 and Hoxb2 mutant mice the transmission of low-level auditory stimuli is lost, resulting in hearing impairments. On the other hand, Hoxa2 regulates the Rig1 axon guidance receptor and controls contralateral projections from the anterior VCN to the medial nucleus of the trapezoid body, a circuit involved in sound localization. Thus, individual rhombomeres and their associated Hox codes control the assembly of distinct, functionally segregated sub-circuits in the developing auditory brainstem.

  7. Reducing auditory hypersensitivities in autistic spectrum disorders: Preliminary findings evaluating the Listening Project Protocol

    Directory of Open Access Journals (Sweden)

    Stephen W Porges

    2014-08-01

    Full Text Available Auditory hypersensitivities are a common feature of autism spectrum disorder (ASD). In the present study the effectiveness of a novel intervention, the Listening Project Protocol (LPP), was evaluated in two trials conducted with children diagnosed with ASD. LPP was developed to reduce auditory hypersensitivities. It is based on a theoretical neural-exercise model that uses computer-altered acoustic stimulation to recruit the neural regulation of the middle ear muscles. Features of the intervention stimuli were informed by basic research in the speech and hearing sciences that has identified the specific acoustic frequencies necessary to understand speech, which must pass through middle ear structures before being processed by other components of the auditory system. LPP was hypothesized to reduce auditory hypersensitivities by increasing neural tone to the middle ear muscles, thereby functionally dampening competing sounds at frequencies lower than human speech. The trials demonstrated that LPP, when contrasted with control conditions, selectively reduced auditory hypersensitivities. These findings are consistent with the Polyvagal Theory, which emphasizes the role of the middle ear muscles in social communication.

  8. To modulate and be modulated: estrogenic influences on auditory processing of communication signals within a socio-neuro-endocrine framework.

    Science.gov (United States)

    Yoder, Kathleen M; Vicario, David S

    2012-02-01

    Gonadal hormones modulate behavioral responses to sexual stimuli, and communication signals can also modulate circulating hormone levels. In several species, these combined effects appear to underlie a two-way interaction between circulating gonadal hormones and behavioral responses to socially salient stimuli. Recent work in songbirds has shown that manipulating local estradiol levels in the auditory forebrain produces physiological changes that affect discrimination of conspecific vocalizations and can affect behavior. These studies provide new evidence that estrogens can directly alter auditory processing and indirectly alter the behavioral response to a stimulus. These studies show that: 1) Local estradiol action within an auditory area is necessary for socially relevant sounds to induce normal physiological responses in the brains of both sexes; 2) These physiological effects occur much more quickly than predicted by the classical time-frame for genomic effects; 3) Estradiol action within the auditory forebrain enables behavioral discrimination among socially relevant sounds in males; and 4) Estradiol is produced locally in the male brain during exposure to particular social interactions. The accumulating evidence suggests a socio-neuro-endocrinology framework in which estradiol is essential to auditory processing, is increased by a socially relevant stimulus, acts rapidly to shape perception of subsequent stimuli experienced during social interactions, and modulates behavioral responses to these stimuli. Brain estrogens are likely to function similarly in both songbird sexes because aromatase and estrogen receptors are present in both male and female forebrain. Estrogenic modulation of perception in songbirds and perhaps other animals could fine-tune male advertising signals and female ability to discriminate them, facilitating mate selection by modulating behaviors. PMID:22201281

  9. Roughly Weighted Hierarchical Simple Games

    OpenAIRE

    Hameed, Ali; Slinko, Arkadii

    2012-01-01

    Hierarchical simple games - both disjunctive and conjunctive - are natural generalizations of simple majority games. They take their origin in the theory of secret sharing. Another important generalization of simple majority games, with origins in economics and politics, is that of weighted and roughly weighted majority games. In this paper we characterize roughly weighted hierarchical games, identifying where the two approaches coincide.

  10. Modulation of auditory cortex response to pitch variation following training with microtonal melodies.

    Science.gov (United States)

    Zatorre, Robert J; Delhommeau, Karine; Zarate, Jean Mary

    2012-01-01

    We tested changes in cortical functional response to auditory patterns in a configural learning paradigm. We trained 10 human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music) and measured covariation in the blood oxygenation signal with increasing pitch interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature that was trained. A psychophysical staircase procedure with feedback was used for training over a 2-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope with pitch interval size, such that those who had a higher sensitivity to pitch interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation of the response of auditory cortex to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities. PMID:23227019
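The psychophysical staircase procedure mentioned above can be sketched in a few lines. This is a generic 2-down/1-up rule with a simulated observer; the specific rule, step size, and units the authors used are not given in the abstract, so everything below is an assumption for illustration:

```python
import random

def staircase_2down1up(start, step, n_trials, p_correct, seed=0):
    """Simulate a 2-down/1-up adaptive staircase: the tracked level
    (e.g., pitch-interval size in cents) gets harder (smaller) after two
    consecutive correct responses and easier (larger) after any error.
    This rule converges toward the ~70.7%-correct point of the
    observer's psychometric function."""
    rng = random.Random(seed)
    level = start
    run = 0                 # count of consecutive correct responses
    track = []
    for _ in range(n_trials):
        track.append(level)
        if rng.random() < p_correct(level):      # simulated observer responds
            run += 1
            if run == 2:
                level = max(level - step, step)  # harder, but never below one step
                run = 0
        else:
            level += step                        # easier after any error
            run = 0
    return track

# Hypothetical observer whose accuracy grows with pitch-interval size:
track = staircase_2down1up(start=100.0, step=10.0, n_trials=60,
                           p_correct=lambda lvl: min(0.95, 0.5 + lvl / 400.0))
print(track[-1])   # the staircase hovers near the observer's threshold
```

With feedback after each trial, the tracked level oscillates around the threshold, which is typically estimated from the last several reversal points.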

  11. Modulation of auditory cortex response to pitch variation following training with microtonal melodies

    Directory of Open Access Journals (Sweden)

    Robert J Zatorre

    2012-12-01

    Full Text Available We tested changes in cortical functional response to auditory configural learning by training ten human listeners to discriminate micromelodies (consisting of smaller pitch intervals than normally used in Western music). We measured covariation in the blood oxygenation signal with increasing pitch-interval size in order to dissociate global changes in activity from those specifically associated with the stimulus feature of interest. A psychophysical staircase procedure with feedback was used for training over a two-week period. Behavioral tests of discrimination ability performed before and after training showed significant learning on the trained stimuli, and generalization to other frequencies and tasks; no learning occurred in an untrained control group. Before training, the functional MRI data showed the expected systematic increase in activity in auditory cortices as a function of increasing micromelody pitch-interval size. This function became shallower after training, with the maximal change observed in the right posterior auditory cortex. Global decreases in activity in auditory regions, along with global increases in frontal cortices, also occurred after training. Individual variation in learning rate was related to the hemodynamic slope with pitch-interval size, such that those who had a higher sensitivity to pitch-interval variation prior to learning achieved the fastest learning. We conclude that configural auditory learning entails modulation of the response of auditory cortex specifically to the trained stimulus feature. The reduction in blood oxygenation response to increasing pitch-interval size suggests that fewer computational resources, and hence lower neural recruitment, are associated with learning, in accord with models of auditory cortex function and with data from other modalities.

  12. Endogenous auditory frequency-based attention modulates electroencephalogram-based measures of obligatory sensory activity in humans.

    Science.gov (United States)

    Sheedy, Caroline M; Power, Alan J; Reilly, Richard B; Crosse, Michael J; Loughnane, Gerard M; Lalor, Edmund C

    2014-03-01

    Auditory selective attention is the ability to enhance the processing of a single sound source while simultaneously suppressing the processing of other, competing sound sources. Recent research has addressed a long-running debate by showing that endogenous attention produces effects on obligatory sensory responses to continuous and competing auditory stimuli. However, until now, this result has only been shown under conditions where the competing stimuli differed both in their frequency characteristics and, importantly, in their spatial location. Thus, it is unknown whether endogenous selective attention based only on nonspatial features modulates obligatory sensory processing. Here, we investigate this issue using a diotic paradigm in which competing auditory stimuli differed in frequency but had no separation in space. We find a significant effect of attention on electroencephalogram-based measures of obligatory sensory processing at several poststimulus latencies. We discuss these results in terms of previous research on feature-based attention and by comparing our findings with previous work using stimuli that differed in both spatial and frequency-based characteristics. PMID:24231831

  13. Effective stimuli for constructing reliable neuron models.

    Directory of Open Access Journals (Sweden)

    Shaul Druckmann

    2011-08-01

    Full Text Available The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel a neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics, as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons, ages and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves the way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated, non-linear devices and of the neuronal networks they compose.
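The step and ramp current pulses that the study found most informative are simple to generate; a minimal sketch follows (amplitudes, onsets, and durations are arbitrary choices for illustration, not the authors' protocol):

```python
import numpy as np

def step_current(amplitude, onset_ms, dur_ms, total_ms, dt_ms=0.1):
    """Square current step: zero everywhere except [onset, onset + dur)."""
    t = np.arange(0.0, total_ms, dt_ms)
    i = np.zeros_like(t)
    i[(t >= onset_ms) & (t < onset_ms + dur_ms)] = amplitude
    return t, i

def ramp_current(peak, onset_ms, dur_ms, total_ms, dt_ms=0.1):
    """Linear ramp rising from 0 to `peak` over [onset, onset + dur)."""
    t = np.arange(0.0, total_ms, dt_ms)
    i = np.zeros_like(t)
    m = (t >= onset_ms) & (t < onset_ms + dur_ms)
    i[m] = peak * (t[m] - onset_ms) / dur_ms
    return t, i

# Example waveforms in arbitrary current units:
t, i_step = step_current(amplitude=0.5, onset_ms=100, dur_ms=300, total_ms=500)
t, i_ramp = ramp_current(peak=0.5, onset_ms=100, dur_ms=300, total_ms=500)
```

Waveforms like these would then be injected (in simulation or via an amplifier's command channel) while the somatic voltage response is recorded.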

  14. Hierarchical Affinity Propagation

    CERN Document Server

    Givoni, Inmar; Frey, Brendan J

    2012-01-01

    Affinity propagation is an exemplar-based clustering algorithm that finds a set of data points that best exemplify the data and associates each data point with one exemplar. We extend affinity propagation in a principled way to solve the hierarchical clustering problem, which arises in a variety of domains including biology, sensor networks and decision making in operational research. We derive an inference algorithm that operates by propagating information up and down the hierarchy, and that is efficient despite the high-order potentials required for the graphical-model formulation. We demonstrate that our method outperforms greedy techniques that cluster one layer at a time. We show that on an artificial dataset designed to mimic HIV-strain mutation dynamics, our method outperforms related methods. For real HIV sequences, where the ground truth is not available, we show our method achieves better results in terms of the underlying objective function, and show the results correspond meaningfully to geographi...
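For readers unfamiliar with the base algorithm, flat (non-hierarchical) affinity propagation can be sketched as a short message-passing loop over a similarity matrix. This is the standard Frey-Dueck update, not the hierarchical extension this paper derives:

```python
import numpy as np

def affinity_propagation(S, damping=0.9, iters=200):
    """Flat affinity propagation: exchange responsibility (R) and
    availability (A) messages over similarity matrix S until exemplars
    emerge. Returns, for each point, the index of its chosen exemplar."""
    n = S.shape[0]
    R = np.zeros((n, n))
    A = np.zeros((n, n))
    for _ in range(iters):
        # Responsibilities: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        top = AS.argmax(axis=1)
        first = AS[np.arange(n), top]
        AS[np.arange(n), top] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[np.arange(n), top] = S[np.arange(n), top] - second
        R = damping * R + (1 - damping) * R_new
        # Availabilities: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = A_new.diagonal().copy()     # a(k,k) has no min-with-zero clamp
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, diag)
        A = damping * A + (1 - damping) * A_new
    return (A + R).argmax(axis=1)

# Toy example: six 1-D points in two obvious groups.
x = np.array([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
S = -(x[:, None] - x[None, :]) ** 2      # similarity = negative squared distance
np.fill_diagonal(S, np.median(S))        # exemplar preference = median similarity
labels = affinity_propagation(S)
print(labels)   # first three points share one exemplar, last three another
```

The diagonal "preference" values control how readily points become exemplars, which is what determines the number of clusters; the hierarchical version couples several such layers through additional constraints.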

  15. Trees and Hierarchical Structures

    CERN Document Server

    Haeseler, Arndt

    1990-01-01

    The "raison d'etre" of hierarchical clustering theory stems from one basic phenomenon: the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to classify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping clusters such that any two objects in one cluster appear to be more related to each other than they are to objects outside this cluster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable classification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

  16. Associative Hierarchical Random Fields.

    Science.gov (United States)

    Ladický, L'ubor; Russell, Chris; Kohli, Pushmeet; Torr, Philip H S

    2014-06-01

    This paper makes two contributions: the first is the proposal of a new model, the associative hierarchical random field (AHRF), together with a novel algorithm for its optimization; the second is the application of this model to the problem of semantic segmentation. Most methods for semantic segmentation are formulated as a labeling problem over variables that correspond either to pixels or to segments such as super-pixels. It is well known that the generation of super-pixel segmentations is not unique. This has motivated many researchers to use multiple super-pixel segmentations for problems such as semantic segmentation or single-view reconstruction. These super-pixels have not previously been combined in a principled manner; this is a difficult problem, as they may overlap, or be nested in such a way that the segmentations form a segmentation tree. Our new hierarchical random field model allows information from all of the multiple segmentations to contribute to a global energy. MAP inference in this model can be performed efficiently using powerful graph-cut-based move-making algorithms. Our framework generalizes much of the previous work based on pixels or segments, and the resulting labelings can be viewed either as a detailed segmentation at the pixel level or, at the other extreme, as a segment selector that pieces together a solution like a jigsaw, selecting the best segments from different segmentations as pieces. We evaluate its performance on some of the most challenging data sets for object-class segmentation, and show that this ability to perform inference using multiple overlapping segmentations leads to state-of-the-art results. PMID:26353271

  17. Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence.

    Directory of Open Access Journals (Sweden)

    Helen Henshaw

    Full Text Available BACKGROUND: Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. OBJECTIVE: This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. METHODS: A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. RESULTS: Auditory training resulted in improved performance on trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown for untrained measures of speech intelligibility (11/13 articles), cognition (1/1 article) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. CONCLUSIONS: Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot reliably be used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss.

  18. Attention deficits revealed by passive auditory change detection for pure tones and lexical tones in ADHD children

    Directory of Open Access Journals (Sweden)

    Ming-Tao eYang

    2015-08-01

    Full Text Available Inattention has been a major problem in children with attention deficit/hyperactivity disorder (ADHD), accounting for their behavioral and cognitive dysfunctions. However, there are at least three processing steps underlying attentional control for auditory change detection, namely pre-attentive change detection, involuntary attention orienting, and attention reorienting for further evaluation. This study aimed to examine whether children with ADHD would show deficits in any of these subcomponents by using mismatch negativity (MMN), P3a, and late discriminative negativity (LDN) as event-related potential (ERP) markers under the passive auditory oddball paradigm. Two types of stimuli, pure tones and Mandarin lexical tones, were used to examine whether the deficits were general across linguistic and non-linguistic domains. Participants included 15 native Mandarin-speaking children with ADHD and 16 age-matched controls (across groups, age ranged between 6 and 15 years). Two passive auditory oddball paradigms (lexical tones and pure tones) were applied. The pure-tone paradigm included standard stimuli (1000 Hz, 80%) and two deviant stimuli (1015 Hz and 1090 Hz, 10% each). The Mandarin lexical tone paradigm's standard stimulus was /yi3/ (80%) and its two deviant stimuli were /yi1/ and /yi2/ (10% each). The results showed no MMN difference, but did show an attenuated P3a and an enhanced LDN to the large deviants for both pure-tone and lexical-tone changes in the ADHD group. Correlation analysis showed that children with higher ADHD tendency, as indexed by parents' and teachers' ratings of ADHD symptoms, showed less positive P3a amplitudes when responding to large lexical-tone deviants. Thus, children with ADHD showed impaired auditory change detection for both pure tones and lexical tones, in both involuntary attention switching and attention reorienting for further evaluation. These ERP markers may therefore be used in the evaluation of anti-ADHD drugs that aim to alleviate these deficits.
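A passive oddball sequence of this kind (an 80% standard plus two deviants at 10% each) can be generated with a few lines. The minimum-spacing constraint between deviants below is a common convention in MMN designs and an assumption here, not a detail reported in the abstract:

```python
import random

def oddball_sequence(n_trials, standard, deviants, p_deviant=0.10,
                     min_gap=2, seed=0):
    """Generate a stimulus sequence for a passive oddball paradigm.
    Each deviant type is targeted at probability p_deviant; successive
    deviants are separated by at least `min_gap` standards."""
    rng = random.Random(seed)
    seq, since_last = [], min_gap
    for _ in range(n_trials):
        if since_last >= min_gap and rng.random() < p_deviant * len(deviants):
            seq.append(rng.choice(deviants))  # pick one of the deviant types
            since_last = 0
        else:
            seq.append(standard)
            since_last += 1
    return seq

seq = oddball_sequence(1000, standard="/yi3/", deviants=["/yi1/", "/yi2/"])
# The realized standard rate is a bit above 0.8 because the gap rule
# suppresses some would-be deviants:
print(seq.count("/yi3/") / len(seq))
```

The deviant-minus-standard difference wave computed from responses to such a sequence is what yields the MMN, P3a, and LDN components discussed above.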

  19. Cortical Auditory Event Related Potentials (P300) for Frequency Changing Dynamic Tones

    Science.gov (United States)

    Kalaiah, Mohan Kumar

    2016-01-01

    Background and Objectives: The P300 has been studied with a variety of stimuli. However, the nature of the P300 has not been investigated for deviant stimuli whose characteristics change from those of the standard stimulus some time after onset. Subjects and Methods: Nine young adults with normal hearing participated in the study. The P300 was elicited using an oddball paradigm; the probabilities of standard and deviant stimuli were 80% and 20%, respectively. Six stimuli were used to elicit the P300: two pure tones (1,000 Hz and 2,000 Hz) and four tone-complexes (tones with frequency changes). Among these stimuli, the 1,000 Hz tone served as the standard while the others served as deviants. The P300 was recorded in five separate blocks, with one of the deviant stimuli as the target in each block. Electroencephalography was recorded from electrode sites Fz, Cz, C3, C4, and Pz. Latency and amplitude of the components of the cortical auditory evoked potentials were measured at Cz. Results: The waveforms obtained in the present study show that all deviant stimuli elicited an obligatory P1-N1-P2 at stimulus onset. The 2,000 Hz deviant tone elicited a P300 at a latency of 300 ms, whereas the tone-complexes elicited an acoustic change complex (ACC) to the frequency change and then a P300 at a latency of 600 ms. In addition, rising tone-complexes yielded shorter-latency and larger-amplitude ACC and P300 responses than falling tone-complexes. Conclusions: Tone-complexes elicited distinct waveforms compared to the 2,000 Hz deviant tone. Rising tone-complexes, which contained an increase in frequency, elicited shorter-latency and larger-amplitude responses, which could be attributed to a perceptual bias for rising frequency changes. PMID:27144230

  20. Modeling hierarchical structures - Hierarchical Linear Modeling using MPlus

    CERN Document Server

    Jelonek, M

    2006-01-01

    The aim of this paper is to present the technique (and its links with physics) of overcoming problems connected with modeling social structures, which are typically hierarchical. Hierarchical linear models provide a conceptual and statistical mechanism for drawing conclusions about the influence of phenomena at different levels of analysis. In the social sciences they are used to analyze many problems, such as educational, organizational or market dilemmas. This paper introduces the logic of modeling with hierarchical linear equations and estimation based on the MPlus software. I present my own model to illustrate the impact of different factors on the level of school acceptance.
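The paper's MPlus models are not reproduced here, but the core idea of hierarchical linear modeling, partitioning outcome variance between levels (e.g., students nested within schools), can be illustrated with a simulated intraclass correlation; all numbers below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n_schools, n_students = 30, 40

# Level-2 (school) random effects with variance 4; level-1 (student)
# residual noise with variance 16. A balanced two-level design.
school_effect = rng.normal(0.0, 2.0, n_schools)
scores = school_effect[:, None] + rng.normal(0.0, 4.0, (n_schools, n_students))

# One-way ANOVA estimator of the intraclass correlation (ICC):
# the share of total variance attributable to the school level.
msb = n_students * scores.mean(axis=1).var(ddof=1)   # between-school mean square
msw = scores.var(axis=1, ddof=1).mean()              # within-school mean square
icc = (msb - msw) / (msb + (n_students - 1) * msw)
print(round(icc, 3))   # should land near the theoretical 4 / (4 + 16) = 0.20
```

A nontrivial ICC is exactly the situation in which ordinary single-level regression understates standard errors and a hierarchical linear model is warranted.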

  1. Functional maps of human auditory cortex: effects of acoustic features and attention.

    Directory of Open Access Journals (Sweden)

    David L Woods

    , ARMs increased in amplitude throughout stimulus blocks. CONCLUSIONS/SIGNIFICANCE: The results are consistent with the view that medial regions of human auditory cortex contain tonotopically organized core and belt fields that map the basic acoustic features of sounds while surrounding higher-order parabelt regions are tuned to more abstract stimulus attributes. Intermodal selective attention enhances processing in neuronal populations that are partially distinct from those activated by unattended stimuli.

  2. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Alva Engell

    Full Text Available Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) mediated by inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the present study aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passed through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or on the visual stimuli. More specifically, on one half of the trials subjects were instructed to detect small deviations in the loudness of the masking sounds, while on the other half subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI that was larger during auditory than during visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of the reduction in neural activation due to LI in the auditory cortex and point toward the important role of attention in modulating this mechanism at more evaluative processing stages.

  3. Material differences of auditory source retrieval:Evidence from event-related potential studies

    Institute of Scientific and Technical Information of China (English)

    NIE AiQing; GUO ChunYan; SHEN MoWei

    2008-01-01

    Two event-related potential experiments were conducted to investigate the temporal and spatial distributions of the old/new effects for an item recognition task and an auditory source retrieval task, using pictures and Chinese characters as stimuli, respectively. Stimuli were presented at the center of the screen with their names read out simultaneously by either a female or a male voice during the study phase, and two tests were then performed separately. One test task was to differentiate the old items from the new ones; the other was to judge items that had been read out by a particular voice during the study phase as targets and the remaining items as non-targets. The results showed that the old/new effect for the auditory source retrieval task was more sustained over time than that for the item recognition task in both experiments, and that the spatial distribution of the former effect was wider than that of the latter. Both experiments recorded a reliable old/new effect over the prefrontal cortex during the source retrieval task. However, there were some differences in the old/new effect for the auditory source retrieval task between pictures and Chinese characters, and LORETA source analysis indicated that these differences might be rooted in the temporal lobe. These findings demonstrate that the relationship between the old/new effects in the item recognition task and the auditory source retrieval task supports the dual-process model, and that the spatial and temporal distributions of the old/new effect elicited by the auditory source retrieval task are regulated by both the features of the experimental material and the perceptual attributes of the voice.

  4. Cortical Gating of Oropharyngeal Sensory Stimuli

    OpenAIRE

    KarenWheeler-Hegland

    2010-01-01

    Somatosensory evoked potentials provide a measure of cortical neuronal activation in response to various types of sensory stimuli. In order to prevent flooding of the cortex with redundant information, various sensory stimuli are gated cortically such that the response to stimulus 2 (S2) is significantly reduced in amplitude compared to stimulus 1 (S1). Upper airway protective mechanisms, such as swallowing and cough, are dependent on sensory input for triggering and modifying their motor output. ...

  5. Effective stimuli for constructing reliable neuron models.

    OpenAIRE

    Shaul Druckmann; Berger, Thomas K.; Felix Schürmann; Sean Hill; Henry Markram; Idan Segev

    2011-01-01

    Author Summary: Neurons perform complicated non-linear transformations on their input before producing their output - a train of action potentials. This input-output transformation is shaped by the specific composition of ion channels, out of the many possible types, that are embedded in the neuron's membrane. Experimentally, characterizing this transformation relies on injecting different stimuli into the neuron while recording its output; but which of the many possible stimuli should one apply...

  6. Binocular combination of second-order stimuli.

    Science.gov (United States)

    Zhou, Jiawei; Liu, Rong; Zhou, Yifeng; Hess, Robert F

    2014-01-01

    Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first-order (luminance-modulated) and second-order (contrast-modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio, as has been previously shown for their first-order counterparts; the interocular signal ratio at which the two eyes were balanced was close to 1 for both first- and second-order phase combination. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after the monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements. PMID:24404180
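As a toy illustration of the phase-combination logic, assume pure linear summation of the two eyes' gratings (a deliberate simplification; studies of this kind typically fit a contrast gain-control model rather than a linear one). The perceived phase of the combined grating then follows directly from trigonometry:

```python
import math

def perceived_phase(theta_deg, delta):
    """Perceived phase of sin(kx + theta) in one eye combined with
    delta * sin(kx - theta) in the other, under pure linear summation:
    tan(phi) = ((1 - delta) / (1 + delta)) * tan(theta)."""
    theta = math.radians(theta_deg)
    return math.degrees(math.atan((1 - delta) / (1 + delta) * math.tan(theta)))

# Balanced eyes (delta = 1): the opposite phases cancel, percept sits at 0 deg.
print(perceived_phase(22.5, 1.0))
# One eye carries no signal (delta = 0): the other eye's phase dominates (~22.5 deg).
print(perceived_phase(22.5, 0.0))
```

Measuring perceived phase as a function of the interocular signal ratio delta, and finding the delta at which the percept sits midway, is what the abstract means by locating the "balanced" ratio.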

  7. Binocular combination of second-order stimuli.

    Directory of Open Access Journals (Sweden)

    Jiawei Zhou

    Full Text Available Phase information is a fundamental aspect of visual stimuli. However, the nature of the binocular combination of stimuli defined by modulations in contrast, so-called second-order stimuli, is presently not clear. To address this issue, we measured binocular combination for first- (luminance modulated) and second-order (contrast modulated) stimuli using a binocular phase combination paradigm in seven normal adults. We found that the binocular perceived phase of second-order gratings depends on the interocular signal ratio as has been previously shown for their first-order counterparts; the interocular signal ratios when the two eyes were balanced were close to 1 in both first- and second-order phase combinations. However, second-order combination is more linear than previously found for first-order combination. Furthermore, binocular combination of second-order stimuli was similar regardless of whether the carriers in the two eyes were correlated, anti-correlated, or uncorrelated. This suggests that, in normal adults, the binocular phase combination of second-order stimuli occurs after the monocular extraction of the second-order modulations. The sensory balance associated with this second-order combination can be obtained from binocular phase combination measurements.

  8. Hierarchical multifunctional nanocomposites

    Science.gov (United States)

    Ghasemi-Nejhad, Mehrdad N.

    2014-03-01

    properties of the fibers can also be improved by the growth of nanotubes on the fibers. The combination of the two will produce super-performing materials not currently available. Since the improvement of the fiber starts with carbon nanotubes grown on micron-size fibers (and a matrix with a nanomaterial) to give the macro-composite, this process is a bottom-up "hierarchical" advanced manufacturing process, and since the resulting nanocomposites will have "multifunctionality" with improved properties in various functional areas such as chemical and fire resistance, damping, stiffness, strength, fracture toughness, EMI shielding, and electrical and thermal conductivity, the resulting nanocomposites are in fact "multifunctional hierarchical nanocomposites." In this paper, the current state of knowledge in the processing, performance, and characterization of these materials is addressed.

  9. Modeling auditory-nerve responses to electrical stimulation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    2014-01-01

    Cochlear implants (CI) directly stimulate the auditory nerve (AN), bypassing the mechano-electrical transduction in the inner ear. Trains of biphasic, charge-balanced pulses (anodic and cathodic) are used as stimuli to avoid damage of the tissue. The pulses of either polarity are capable of producing ... and cathodic stimulation of the AN of cat [4]. The models' responses to the electrical pulses of various shapes [5] were also analyzed. It was found that, while the models can account for the firing rates in response to various biphasic pulse shapes, they fail to correctly describe the timing of AP in response to ... monophasic pulses. Strategies for improving the model performance with respect to correct AP timing are discussed. Eventually, a model that is able to account for correct spike timing in electric hearing will be useful for objective evaluation and improvement of CI stimulation strategies.

  10. Cortical and thalamic connectivity of the auditory anterior ectosylvian cortex of early-deaf cats: Implications for neural mechanisms of crossmodal plasticity.

    Science.gov (United States)

    Meredith, M Alex; Clemo, H Ruth; Corley, Sarah B; Chabot, Nicole; Lomber, Stephen G

    2016-03-01

    Early hearing loss leads to crossmodal plasticity in regions of the cerebrum that are dominated by acoustical processing in hearing subjects. Until recently, little has been known of the connectional basis of this phenomenon. One region whose crossmodal properties are well-established is the auditory field of the anterior ectosylvian sulcus (FAES) in the cat, where neurons are normally responsive to acoustic stimulation and its deactivation leads to the behavioral loss of accurate orienting toward auditory stimuli. However, in early-deaf cats, visual responsiveness predominates in the FAES and its deactivation blocks accurate orienting behavior toward visual stimuli. For such crossmodal reorganization to occur, it has been presumed that novel inputs or increased projections from non-auditory cortical areas must be generated, or that existing non-auditory connections were 'unmasked.' These possibilities were tested using tracer injections into the FAES of adult cats deafened early in life (and hearing controls), followed by light microscopy to localize retrogradely labeled neurons. Surprisingly, the distribution of cortical and thalamic afferents to the FAES was very similar among early-deaf and hearing animals. No new visual projection sources were identified and visual cortical connections to the FAES were comparable in projection proportions. These results support an alternate theory for the connectional basis for cross-modal plasticity that involves enhanced local branching of existing projection terminals that originate in non-auditory as well as auditory cortices. PMID:26724756

  11. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhancement of the functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.

  12. Auditory affective norms for German: testing the influence of depression and anxiety on valence and arousal ratings.

    Directory of Open Access Journals (Sweden)

    Philipp Kanske

    Full Text Available BACKGROUND: The study of emotional speech perception and emotional prosody necessitates stimuli with reliable affective norms. However, ratings may be affected by the participants' current emotional state, as increased anxiety and depression have been shown to yield altered neural responding to emotional stimuli. Therefore, the present study had two aims: first, to provide a database of emotional speech stimuli, and second, to probe the influence of depression and anxiety on the affective ratings. METHODOLOGY/PRINCIPAL FINDINGS: We selected 120 words from the Leipzig Affective Norms for German database (LANG), which includes visual ratings of positive, negative, and neutral word stimuli. These words were spoken by a male and a female native speaker of German with the respective emotional prosody, creating a total set of 240 auditory emotional stimuli. The recordings were rated again by an independent sample of subjects for valence and arousal, yielding groups of highly arousing negative or positive stimuli and neutral stimuli low in arousal. These ratings were correlated with participants' emotional state measured with the Depression Anxiety Stress Scales (DASS). Higher depression scores were related to more negative valence of negative and positive, but not neutral, words. Anxiety scores correlated with increased arousal and more negative valence of negative words. CONCLUSIONS/SIGNIFICANCE: These results underscore the importance of representatively distributed depression and anxiety scores in participants of affective rating studies. The LANG-audition database, which provides well-controlled, short-duration auditory word stimuli for the experimental investigation of emotional speech, is available in Supporting Information S1.

  13. Onboard hierarchical network

    Science.gov (United States)

    Tunesi, Luca; Armbruster, Philippe

    2004-02-01

    The objective of this paper is to demonstrate a suitable hierarchical networking solution to improve capabilities and performances of space systems, with significant recurrent cost savings and more efficient design & manufacturing flows. Classically, a satellite can be split into two functional sub-systems: the platform and the payload complement. The platform is in charge of providing power, attitude & orbit control and up/down-link services, whereas the payload represents the scientific and/or operational instruments/transponders and embodies the objectives of the mission. One major possibility to improve the performance of payloads, by limiting the data return to pertinent information, is to process data on board thanks to a proper implementation of the payload data system. In this way, it is possible to share non-recurring development costs by exploiting a system that can be adopted by the majority of space missions. It is believed that the Modular and Scalable Payload Data System, under development by ESA, provides a suitable solution to fulfil a large range of future mission requirements. The backbone of the system is the standardised high data rate SpaceWire network http://www.ecss.nl/. As a complement, a lower-speed command and control bus connecting peripherals is required. For instance, at instrument level, there is a need for a "local" low-complexity bus, which gives the possibility to command and control sensors and actuators. Moreover, most of the connections at sub-system level are related to discrete signals management or simple telemetry acquisitions, which can easily and efficiently be handled by a local bus. An on-board hierarchical network can therefore be defined by interconnecting high-speed links and local buses.
Additionally, it is worth stressing another important aspect of the design process: Agencies and ESA in particular are frequently confronted with a big consortium of geographically spread companies located in different countries, each one

  14. Hierarchical fringe tracking

    CERN Document Server

    Petrov, Romain G; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stephane; Bresson, Yves; Benkhaldoum, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu

    2014-01-01

    The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated-optics concepts used, for example, in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared, where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups. The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single-mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by g...

  15. Action Effects and Task Knowledge: The Influence of Anticipatory Priming on the Identification of Task-Related Stimuli in Experts

    Science.gov (United States)

    Land, William M.

    2016-01-01

    The purpose of the present study was to examine the extent to which anticipation of an action’s perceptual effect primes identification of task-related stimuli. Specifically, skilled (n = 16) and novice (n = 24) tennis players performed a choice-reaction time (CRT) test in which they identified whether the presented stimulus was a picture of a baseball bat or tennis racket. Following their response, auditory feedback associated with either baseball or tennis was presented. The CRT test was performed in blocks in which participants predictably received the baseball sound or tennis sound irrespective of which stimulus picture was displayed. Results indicated that skilled tennis players responded quicker to tennis stimuli when the response was predictably followed by the tennis auditory effect compared to the baseball auditory effect. These findings imply that, within an individual’s area of expertise, domain-relevant knowledge is primed by anticipation of an action’s perceptual effect, thus allowing the cognitive system to more quickly identify environmental information. This finding provides a more complete picture of the influence that anticipation can have on the cognitive-motor system. No differences existed for novices. PMID:27272987

  16. Action Effects and Task Knowledge: The Influence of Anticipatory Priming on the Identification of Task-Related Stimuli in Experts.

    Directory of Open Access Journals (Sweden)

    William M Land

    Full Text Available The purpose of the present study was to examine the extent to which anticipation of an action's perceptual effect primes identification of task-related stimuli. Specifically, skilled (n = 16) and novice (n = 24) tennis players performed a choice-reaction time (CRT) test in which they identified whether the presented stimulus was a picture of a baseball bat or tennis racket. Following their response, auditory feedback associated with either baseball or tennis was presented. The CRT test was performed in blocks in which participants predictably received the baseball sound or tennis sound irrespective of which stimulus picture was displayed. Results indicated that skilled tennis players responded quicker to tennis stimuli when the response was predictably followed by the tennis auditory effect compared to the baseball auditory effect. These findings imply that, within an individual's area of expertise, domain-relevant knowledge is primed by anticipation of an action's perceptual effect, thus allowing the cognitive system to more quickly identify environmental information. This finding provides a more complete picture of the influence that anticipation can have on the cognitive-motor system. No differences existed for novices.

  17. Hierarchical clustering for graph visualization

    CERN Document Server

    Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi

    2012-01-01

    This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.
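The hierarchical maximal-modularity clustering that this record describes can be illustrated with a minimal greedy agglomeration in pure Python. This is a sketch only, not the authors' implementation: the example graph, function names, and the stop-at-maximal-modularity rule are assumptions for illustration. The idea is to repeatedly merge the pair of communities whose union gives the largest modularity gain; the sequence of merges forms the dendrogram that supports interactive coarsening and refining.

```python
from itertools import combinations

def modularity(adj, communities, m):
    """Newman modularity Q of a partition of an undirected graph."""
    q = 0.0
    for comm in communities:
        comm = set(comm)
        # each intra-community edge appears twice in the adjacency lists
        intra = sum(1 for u in comm for v in adj[u] if v in comm) / 2
        deg = sum(len(adj[u]) for u in comm)
        q += intra / m - (deg / (2 * m)) ** 2
    return q

def greedy_modularity_dendrogram(adj):
    """Agglomerative clustering: merge the community pair that maximizes the
    modularity gain, recording each merge; stop when no merge improves Q."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    comms = [frozenset([v]) for v in adj]
    merges = []
    q = modularity(adj, comms, m)
    while len(comms) > 1:
        best = None
        for a, b in combinations(range(len(comms)), 2):
            trial = [c for i, c in enumerate(comms) if i not in (a, b)]
            trial.append(comms[a] | comms[b])
            dq = modularity(adj, trial, m) - q
            if best is None or dq > best[0]:
                best = (dq, a, b)
        dq, a, b = best
        if dq <= 0:
            break  # maximal-modularity partition reached
        merges.append((comms[a], comms[b]))
        comms = [c for i, c in enumerate(comms) if i not in (a, b)] + [comms[a] | comms[b]]
        q += dq
    return comms, merges

# Two triangles joined by a bridge edge: the natural communities
# are {0,1,2} and {3,4,5}.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
parts, merges = greedy_modularity_dendrogram(adj)
print(sorted(sorted(c) for c in parts))  # → [[0, 1, 2], [3, 4, 5]]
```

For real visualization work a library routine (e.g. networkx's `greedy_modularity_communities`) would replace this O(n³) toy loop; the sketch just makes the merge-and-record mechanism concrete.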

  18. Direct hierarchical assembly of nanoparticles

    Science.gov (United States)

    Xu, Ting; Zhao, Yue; Thorkelsson, Kari

    2014-07-22

    The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

  19. Dissecting the functional anatomy of auditory word repetition

    Directory of Open Access Journals (Sweden)

    Thomas Matthew Hadley Hope

    2014-05-01

    Full Text Available Auditory word repetition involves many different brain regions, whose functions are still far from fully understood. Here, we use a single, multi-factorial, within-subjects fMRI design to identify those regions, and to functionally distinguish the multiple linguistic and non-linguistic processing areas that are all involved in repeating back heard words. The study compared: (1) auditory to visual inputs; (2) phonological to non-phonological inputs; (3) semantic to non-semantic inputs; and (4) speech production to finger-press responses. The stimuli included words (semantic and phonological inputs), pseudowords (phonological input), pictures and sounds of animals or objects (semantic input), and coloured patterns and hums (non-semantic and non-phonological). The speech production tasks involved auditory repetition, reading and naming, while the finger-press tasks involved one-back matching. The results from the main effects and interactions were compared to predictions from a previously reported functional anatomical model of language based on a meta-analysis of many different neuroimaging experiments. Although many findings from the current experiment replicated those predicted, our within-subject design also revealed novel results by providing sufficient anatomical precision to distinguish several different regions within: (1) the anterior insula (a dorsal region involved in both covert and overt speech production, and a more ventral region involved in overt speech only); (2) the pars orbitalis (with distinct sub-regions responding to phonological and semantic processing); (3) the anterior cingulate and SMA (whose subregions show differential sensitivity to speech and finger-press responses); and (4) the cerebellum (with distinct regions for semantic processing, speech production and domain-general processing).
    We also dissociated four different types of phonological effects in, respectively, the left superior temporal sulcus, left putamen, left ventral premotor

  20. The Representation of Prediction Error in Auditory Cortex

    Science.gov (United States)

    Rubin, Jonathan; Ulanovsky, Nachum; Tishby, Naftali

    2016-01-01

    To survive, organisms must extract information from the past that is relevant for their future. How this process is expressed at the neural level remains unclear. We address this problem by developing a novel approach from first principles. We show here how to generate low-complexity representations of the past that produce optimal predictions of future events. We then illustrate this framework by studying the coding of ‘oddball’ sequences in auditory cortex. We find that for many neurons in primary auditory cortex, trial-by-trial fluctuations of neuronal responses correlate with the theoretical prediction error calculated from the short-term past of the stimulation sequence, under constraints on the complexity of the representation of this past sequence. In some neurons, the effect of prediction error accounted for more than 50% of response variability. Reliable predictions often depended on a representation of the sequence of the last ten or more stimuli, although the representation kept only few details of that sequence. PMID:27490251
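The account above, in which trial-by-trial responses track a prediction error computed from a low-complexity representation of the recent stimulus history, can be sketched numerically. The following is an illustrative toy model, not the authors' method: the ten-stimulus window, the Laplace smoothing, and the function name are assumptions. Each stimulus's probability is estimated from the sliding window of preceding stimuli, and surprise is reported as -log2 p, so a rare "oddball" yields a large prediction error:

```python
import math
from collections import deque

def prediction_errors(sequence, window=10, alpha=1.0):
    """Surprise (-log2 probability) of each stimulus, with the probability
    estimated from the previous `window` stimuli using add-alpha (Laplace)
    smoothing over the observed alphabet."""
    alphabet = sorted(set(sequence))
    history = deque(maxlen=window)  # bounded representation of the past
    errors = []
    for x in sequence:
        counts = {s: alpha for s in alphabet}
        for h in history:
            counts[h] += 1
        p = counts[x] / sum(counts.values())
        errors.append(-math.log2(p))
        history.append(x)
    return errors

# Frequent standard 'A' with a single rare deviant 'B' at position 9:
# the deviant carries the largest prediction error.
seq = ['A'] * 9 + ['B'] + ['A'] * 5
pe = prediction_errors(seq, window=10)
print(pe.index(max(pe)))  # → 9
```

Shrinking `window` mimics the complexity constraint discussed in the record: a shorter history keeps fewer details of the past and changes how quickly surprise decays after a deviant.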

  1. Psychophysiological responses to auditory change.

    Science.gov (United States)

    Chuen, Lorraine; Sears, David; McAdams, Stephen

    2016-06-01

    A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. PMID:26927928

  2. Short term memory for tactile stimuli.

    Science.gov (United States)

    Gallace, Alberto; Tan, Hong Z; Haggard, Patrick; Spence, Charles

    2008-01-23

    Research has shown that unreported information stored in rapidly decaying visual representations may be accessed more accurately using partial report than using full report procedures (e.g., [Sperling, G., 1960. The information available in brief visual presentations. Psychological Monographs, 74, 1-29.]). In the 3 experiments reported here, we investigated whether unreported information regarding the actual number of tactile stimuli presented in parallel across the body surface can be accessed using a partial report procedure. In Experiment 1, participants had to report the total number of stimuli in a tactile display composed of up to 6 stimuli presented across their body (numerosity task), or else to detect whether or not a tactile stimulus had previously been presented in a position indicated by a visual probe given at a variable delay after offset of a tactile display (i.e., partial report). The results showed that participants correctly reported up to 3 stimuli in the numerosity judgment task, but their performance was significantly better than chance when up to 5 stimuli were presented in the partial report task. This result shows that short-lasting tactile representations can be accessed using partial report procedures similar to those used previously in visual studies. Experiment 2 showed that the duration of these representations (or the time available to consciously access them) depends on the number of stimuli presented in the display (the greater the number of stimuli that are presented, the faster their representation decays). Finally, the results of a third experiment showed that the differences in performance between the numerosity judgment and partial report tasks could not be explained solely in terms of any difference in task difficulty. PMID:18083147

  3. Auditory distraction and serial memory

    OpenAIRE

    Jones, D M; Hughes, Rob; Macken, W.J.

    2010-01-01

    One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the seria...

  4. An exploration of spatial auditory BCI paradigms with different sounds: music notes versus beeps.

    Science.gov (United States)

    Huang, Minqiang; Daly, Ian; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2016-06-01

    Visual brain-computer interfaces (BCIs) are not suitable for people who cannot reliably maintain their eye gaze. Considering that this group usually retains audition, an auditory-based BCI may be a good choice for them. In this paper, we explore two auditory patterns: (1) a pattern utilizing symmetrical spatial cues with multiple frequency beeps [called the high low medium (HLM) pattern], and (2) a pattern utilizing non-symmetrical spatial cues with six tones derived from the diatonic scale [called the diatonic scale (DS) pattern]. These two patterns are compared in terms of accuracy to determine which auditory pattern is better. The HLM pattern uses three different frequency beeps and has a symmetrical spatial distribution. The DS pattern uses six spoken stimuli: the notes solmizated as "do", "re", "mi", "fa", "sol" and "la", derived from the diatonic scale. These six sounds are assigned to six spatially distributed speakers. Thus, we compare a BCI paradigm using beeps with another BCI paradigm using tones on the diatonic scale, when the stimuli are spatially distributed. Although no significant differences are found between the ERPs, the HLM pattern performs better than the DS pattern: the online accuracy achieved with the HLM pattern is significantly higher than that achieved with the DS pattern (p = 0.0028). PMID:27275376

  5. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention

    Science.gov (United States)

    Pincham, Hannah L.; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-01-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506
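The alpha-band (8-12 Hz) oscillatory measure central to the record above can be approximated for a single frequency of interest with the Goertzel algorithm, which scores the power at one DFT bin without computing a full FFT. This is a hedged sketch, not the study's analysis pipeline: the 250 Hz sampling rate, the synthetic signals, and the function name are illustrative assumptions.

```python
import math

def goertzel_power(samples, target_hz, fs):
    """Goertzel algorithm: squared magnitude of the DFT bin nearest
    `target_hz`, useful for tracking one narrow-band component."""
    n = len(samples)
    k = round(n * target_hz / fs)  # nearest DFT bin index
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

fs = 250.0                          # assumed EEG-like sampling rate
t = [i / fs for i in range(500)]    # 2 s of data
alpha = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # 10 Hz "alpha" tone
theta = [math.sin(2 * math.pi * 5 * ti) for ti in t]   # 5 Hz control
print(goertzel_power(alpha, 10, fs) > goertzel_power(theta, 10, fs))  # → True
```

In practice, EEG alpha power is typically estimated over the whole 8-12 Hz band with Welch's method or wavelets; the single-bin version above simply makes the band-power idea concrete.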

  6. Music for the birds: effects of auditory enrichment on captive bird species.

    Science.gov (United States)

    Robbins, Lindsey; Margulis, Susan W

    2016-01-01

    With the increase of mixed-species exhibits in zoos, targeting enrichment to individual species may be problematic. Often, mammals may be the primary targets of enrichment, yet other species that share their environment (such as birds) will unavoidably be exposed to the enrichment as well. The purpose of this study was to determine if (1) auditory stimuli designed for enrichment of primates influenced the behavior of captive birds in the zoo setting, and (2) if the specific type of auditory enrichment impacted bird behavior. Three different African bird species were observed at the Buffalo Zoo during exposure to natural sounds, classical music and rock music. The results revealed that the average frequency of flying in all three bird species increased with naturalistic sounds and decreased with rock music (F = 7.63, df = 3,6, P = 0.018); vocalizations for two of the three species (Superb Starlings and Mousebirds) increased (F = 18.61, df = 2,6, P = 0.0027) in response to all auditory stimuli, however one species (Lady Ross's Turacos) increased frequency of duetting only in response to rock music (X(2) = 18.5, df = 2, P < .001). Auditory enrichment can thus influence behavior in non-target species as well, in this case leading to increased activity by birds. PMID:26749511

  7. Shaping prestimulus neural activity with auditory rhythmic stimulation improves the temporal allocation of attention.

    Science.gov (United States)

    Ronconi, Luca; Pincham, Hannah L; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes

    2016-05-01

    Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506

  8. Effects of Auditory Attention Training with the Dichotic Listening Task: Behavioural and Neurophysiological Evidence.

    Directory of Open Access Journals (Sweden)

    Jussi Tallus

    Full Text Available Facilitation of general cognitive capacities such as executive functions through training has stirred considerable research interest during the last decade. Recently we demonstrated that training of auditory attention with forced-attention dichotic listening not only facilitated that performance but also generalized to an untrained attentional task. In the present study, 13 participants underwent a 4-week dichotic listening training programme with instructions to report syllables presented to the left ear (the FL training group). Another group (n = 13) was trained using the non-forced instruction, asked to report whichever syllable they heard best (the NF training group). The study aimed to replicate our previous behavioural results, and to explore the neurophysiological correlates of training through event-related brain potentials (ERPs). We partially replicated our previous behavioural training effects, as the FL training group tended to show more allocation of auditory spatial attention to the left ear in a standard dichotic listening task. ERP measures showed diminished N1 and enhanced P2 responses to dichotic stimuli after training in both groups, interpreted as improvement in early perceptual processing of the stimuli. Additionally, enhanced anterior N2 amplitudes were found after training, with relatively larger changes in the FL training group in the forced-left condition, suggesting improved top-down control on the trained task. These results show that top-down cognitive training can modulate the left-right allocation of auditory spatial attention, accompanied by a change in an evoked brain potential related to cognitive control.

  9. Hierarchical architecture of active knits

    International Nuclear Information System (INIS)

    Nature eloquently utilizes hierarchical structures to form the world around us. Applying the hierarchical architecture paradigm to smart materials can provide a basis for a new genre of actuators which produce complex actuation motions. One promising example of cellular architecture—active knits—provides complex three-dimensional distributed actuation motions with expanded operational performance through a hierarchically organized structure. The hierarchical structure arranges a single fiber of active material, such as shape memory alloys (SMAs), into a cellular network of interlacing adjacent loops according to a knitting grid. This paper defines a four-level hierarchical classification of knit structures: the basic knit loop, knit patterns, grid patterns, and restructured grids. Each level of the hierarchy provides increased architectural complexity, resulting in expanded kinematic actuation motions of active knits. The range of kinematic actuation motions are displayed through experimental examples of different SMA active knits. The results from this paper illustrate and classify the ways in which each level of the hierarchical knit architecture leverages the performance of the base smart material to generate unique actuation motions, providing necessary insight to best exploit this new actuation paradigm. (paper)

  10. Multimodal Diffusion-MRI and MEG Assessment of Auditory and Language System Development in Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Jeffrey I Berman

    2016-03-01

    Full Text Available Background: Auditory processing and language impairments are prominent in children with autism spectrum disorder (ASD). The present study integrated diffusion MR measures of white-matter microstructure and magnetoencephalography (MEG) measures of cortical dynamics to investigate associations between brain structure and function within auditory and language systems in ASD. Based on previous findings, abnormal structure-function relationships in auditory and language systems in ASD were hypothesized. Methods: Evaluable neuroimaging data were obtained from 44 typically developing (TD) children (mean age 10.4 ± 2.4 years) and 95 children with ASD (mean age 10.2 ± 2.6 years). Diffusion MR tractography was used to delineate and quantitatively assess the auditory radiation and arcuate fasciculus segments of the auditory and language systems. MEG was used to measure (1) superior temporal gyrus auditory evoked M100 latency in response to pure-tone stimuli as an indicator of auditory system conduction velocity, and (2) auditory vowel-contrast mismatch field (MMF) latency as a passive probe of early linguistic processes. Results: Atypical development of white matter and cortical function, along with atypical lateralization, were present in ASD. In both auditory and language systems, white matter integrity and cortical electrophysiology were found to be coupled in typically developing children, with white matter microstructural features contributing significantly to electrophysiological response latencies. However, in ASD, we observed uncoupled structure-function relationships in both auditory and language systems. Regression analyses in ASD indicated that factors other than white-matter microstructure additionally contribute to the latency of neural evoked responses and ultimately behavior. Results also indicated that whereas delayed M100 is a marker for ASD severity, MMF delay is more associated with language impairment. Conclusion: Present findings suggest atypical

  11. Auditory learning: a developmental method.

    Science.gov (United States)

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers. PMID:15940990

  12. Auditory sequence analysis and phonological skill.

    Science.gov (United States)

    Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D

    2012-11-01

    This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739

  13. Bayesian Modeling of the Dynamics of Phase Modulations and their Application to Auditory Event Related Potentials at Different Loudness Scales.

    Science.gov (United States)

    Mortezapouraghdam, Zeinab; Wilson, Robert C; Schwabe, Lars; Strauss, Daniel J

    2016-01-01

    We study the effect of long-term habituation signatures of auditory selective attention reflected in the instantaneous phase information of the auditory event-related potentials (ERPs) at four distinct stimulus levels of 60, 70, 80, and 90 dB SPL. The analysis is performed at the single-trial level. The effect of habituation can be observed in terms of the changes (jitter) in the instantaneous phase information of ERPs. In particular, the absence of habituation is correlated with a consistently high phase synchronization over ERP trials. We estimate the changes in phase concentration over trials using a Bayesian approach, in which the phase is modeled as being drawn from a von Mises distribution with a concentration parameter which varies smoothly over trials. The smoothness assumption reflects the fact that habituation is a gradual process. We differentiate between different stimuli based on the relative changes and absolute values of the estimated concentration parameter using the proposed Bayesian model. PMID:26858631
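The key quantity in this record, the von Mises concentration parameter, can also be estimated directly from the circular statistics of the measured phases. The sketch below is a crude moment-based stand-in for the authors' smooth Bayesian estimator, not their model; the function names, window size, and simulated data are illustrative:

```python
import numpy as np

def mean_resultant_length(phases):
    """Length of the mean resultant vector of a sample of angles (radians)."""
    return np.abs(np.mean(np.exp(1j * phases)))

def vonmises_kappa(phases):
    """Point estimate of the von Mises concentration kappa from the mean
    resultant length R (standard piecewise approximation)."""
    r = mean_resultant_length(phases)
    if r < 0.53:
        return 2 * r + r**3 + 5 * r**5 / 6
    if r < 0.85:
        return -0.4 + 1.39 * r + 0.43 / (1 - r)
    return 1 / (r**3 - 4 * r**2 + 3 * r)

def sliding_kappa(trial_phases, window=20):
    """Track phase concentration over trials with a sliding window, a
    rough proxy for a concentration parameter varying smoothly over trials."""
    trial_phases = np.asarray(trial_phases)
    return np.array([
        vonmises_kappa(trial_phases[max(0, i - window + 1):i + 1])
        for i in range(len(trial_phases))
    ])

# Simulated single-trial phases: high synchronization (no habituation)
# versus jittery phase (habituation).
rng = np.random.default_rng(0)
synced = rng.vonmises(0.0, 8.0, size=100)
jittered = rng.vonmises(0.0, 0.5, size=100)
```

With these simulated trials, `vonmises_kappa(synced)` recovers a much larger concentration than `vonmises_kappa(jittered)`, mirroring the paper's use of high phase synchronization as a marker for absent habituation.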

  14. Development of Receiver Stimulator for Auditory Prosthesis

    OpenAIRE

    K. Raja Kumar; P. Seetha Ramaiah

    2010-01-01

    The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve via an array of electrodes with an electric current, allowing them to understand speech. The AP system consists of two hardware functional units: a Body Worn Speech Processor (BWSP) and a Receiver Stimulator. The prototype model of Receiver Stimulator for Auditory Prosthesis (RSAP) consists of Speech Data Decoder, DAC, ADC, constant...

  15. Auditory stimulation and cardiac autonomic regulation

    OpenAIRE

    Vitor E Valenti; Guida, Heraldo L.; Frizzo, Ana C F; Cardoso, Ana C. V.; Vanderlei, Luiz Carlos M; Luiz Carlos de Abreu

    2012-01-01

    Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation bet...

  16. Behavioural and neural correlates of auditory attention

    OpenAIRE

    Roberts, Katherine Leonie

    2005-01-01

    The auditory attention skills of alerting, orienting, and executive control were assessed using behavioural and neuroimaging techniques. Initially, an auditory analogue of the visual attention network test (ANT; Fan, McCandliss, Sommer, Raz, & Posner, 2002) was created and tested alongside the visual ANT in a group of 40 healthy subjects. The results from this study showed similarities between auditory and visual spatial orienting. An fMRI study was conducted to investigate whether the simil...

  17. The effect of auditory memory load on intensity resolution in individuals with Parkinson's disease

    Science.gov (United States)

    Richardson, Kelly C.

    Purpose: The purpose of the current study was to investigate the effect of auditory memory load on intensity resolution in individuals with Parkinson's disease (PD) as compared to two groups of listeners without PD. Methods: Nineteen individuals with Parkinson's disease, ten healthy age- and hearing-matched adults, and ten healthy young adults were studied. All listeners participated in two intensity discrimination tasks differing in auditory memory load: a lower-memory-load 4IAX task and a higher-memory-load ABX task. Intensity discrimination performance was assessed using a bias-free measure of signal detectability known as d' (d-prime). Listeners further participated in a continuous loudness scaling task where they were instructed to rate the loudness level of each signal intensity using a computerized 150 mm visual analogue scale. Results: Group discrimination functions indicated significantly lower intensity discrimination sensitivity (d') across tasks for the individuals with PD, as compared to the older and younger controls. No significant effect of aging on intensity discrimination was observed for either task. All three listener groups demonstrated significantly lower intensity discrimination sensitivity for the higher-memory-load ABX task than for the lower-memory-load 4IAX task. Furthermore, a significant effect of aging was identified for the loudness scaling condition. The younger controls were found to rate most stimuli along the continuum as significantly louder than the older controls and the individuals with PD. Conclusions: The persons with PD showed evidence of impaired auditory perception for intensity information, as compared to the older and younger controls. The significant effect of aging on loudness perception may indicate peripheral and/or central auditory involvement.
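The bias-free sensitivity index d' used in this record is computed from hit and false-alarm rates via the inverse normal CDF. A minimal sketch follows; the log-linear correction for extreme rates and the example counts are illustrative assumptions, not details taken from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction keeps the rates away from 0 and 1,
    where the inverse normal CDF is undefined."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)
```

For example, a listener with 45 hits and 5 false alarms out of 50 trials of each type yields a substantially higher d' than one with 30 hits and 20 false alarms, while a listener performing at chance yields d' of zero.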

  18. Altered cortical activity in prelingually deafened cochlear implant users following long periods of auditory deprivation.

    Science.gov (United States)

    Lammers, Marc J W; Versnel, Huib; van Zanten, Gijsbert A; Grolman, Wilko

    2015-02-01

    Auditory stimulation during childhood is critical for the development of the auditory cortex in humans and with that for hearing in adulthood. Age-related changes in morphology and peak latencies of the cortical auditory evoked potential (CAEP) have led to the use of this cortical response as a biomarker of auditory cortical maturation including studies of cortical development after deafness and subsequent cochlear implantation. To date, it is unknown whether prelingually deaf adults, with early onset deafness (before the age of 2 years) and who received a cochlear implant (CI) only during adulthood, would display absent or aberrant CAEP waveforms as predicted from CAEP studies in late implanted prelingually deaf children. In the current study, CAEP waveforms were recorded in response to electric stimuli in prelingually deaf adults, who received their CI after the age of 21 years. Waveform morphology and peak latencies were compared to the CAEP responses obtained in postlingually deaf adults, who became deaf after the age of 16. Unexpectedly, typical CAEP waveforms with adult-like P1-N1-P2 morphology could be recorded in the prelingually deaf adult CI users. On visual inspection, waveform morphology was comparable to the CAEP waveforms recorded in the postlingually deaf CI users. Interestingly, however, latencies of the N1 peak were significantly shorter and amplitudes were significantly larger in the prelingual group than in the postlingual group. The presence of the CAEP together with an early and large N1 peak might represent activation of the more innate and less complex components of the auditory cortex of the prelingually deaf CI user, whereas the CAEP in postlingually deaf CI users might reflect activation of the mature neural network still present in these patients. The CAEPs may therefore be helpful in the assessment of developmental state of the auditory cortex. PMID:25315357

  19. Hierarchical fringe tracking

    Science.gov (United States)

    Petrov, Romain G.; Elhalkouj, Thami; Boskri, Abdelkarim; Folcher, Jean-Pierre; Lagarde, Stéphane; Bresson, Yves; Benkhaldoun, Zouhair; Lazrek, Mohamed; Rakshit, Suvendu

    2014-07-01

    The limiting magnitude is a key issue for optical interferometry. Pairwise fringe trackers based on the integrated optics concepts used for example in GRAVITY seem limited to about K=10.5 with the 8m Unit Telescopes of the VLTI, and there is a general "common sense" statement that the efficiency of fringe tracking, and hence the sensitivity of optical interferometry, must decrease as the number of apertures increases, at least in the near infrared where we are still limited by detector readout noise. Here we present a Hierarchical Fringe Tracking (HFT) concept with sensitivity at least equal to that of a two-aperture fringe tracker. HFT is based on the combination of the apertures in pairs, then in pairs of pairs, then in pairs of groups… The key HFT module is a device that behaves like a spatial filter for two telescopes (2TSF) and transmits all or most of the flux of a cophased pair in a single-mode beam. We give an example of such an achromatic 2TSF, based on very broadband dispersed fringes analyzed by grids, and show that it allows piston measurements from very broadband fringes with only 3 to 5 pixels per fringe tracker. We show the results of numerical simulations indicating that our device is a good achromatic spatial filter and allowing a first evaluation of its coupling efficiency, which is similar to that of a single-mode fiber on a single aperture. Our very preliminary results indicate that HFT has a good chance to be a serious candidate for the most sensitive fringe tracking with the VLTI and also with interferometers with much larger numbers of apertures. On the VLTI the first rough estimate of the magnitude gain with regard to the GRAVITY internal FT is between 2.5 and 3.5 magnitudes in K, with a decisive impact on the VLTI science program for AGNs, young stars and planet-forming disks.
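The "pairs, then pairs of pairs, then pairs of groups" combination scheme described above can be sketched as a simple iterative grouping. This toy code only illustrates the shape of the hierarchy (the aperture names and the string representation of a combined beam are purely illustrative, and it ignores all the optics):

```python
def pair_up(nodes):
    """Combine a list of beams pairwise; an odd leftover passes through."""
    paired = [f"({a}+{b})" for a, b in zip(nodes[::2], nodes[1::2])]
    if len(nodes) % 2:
        paired.append(nodes[-1])
    return paired

def hft_tree(apertures):
    """Build the hierarchical combination levels (pairs, pairs of pairs, ...).
    Returns one list of combined beams per level of the hierarchy."""
    levels = []
    nodes = list(apertures)
    while len(nodes) > 1:
        nodes = pair_up(nodes)
        levels.append(nodes)
    return levels
```

For the four VLTI Unit Telescopes, `hft_tree(["UT1", "UT2", "UT3", "UT4"])` produces two levels: two cophased pairs, then a single pair of pairs; the scheme generalizes to any number of apertures, which is the point of the HFT argument about sensitivity not degrading as apertures are added.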

  20. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    visual bias with bimodal stimuli. Results highlight age-, memory-, and modality-dependent deterioration in the processing of auditory and visual space, as well as an age-related increase in the dominance of vision when localizing bimodal sources. PMID:23076429

  1. Recall and recognition hypermnesia for Socratic stimuli.

    Science.gov (United States)

    Kazén, Miguel; Solís-Macías, Víctor M

    2016-01-01

    In two experiments, we investigate hypermnesia, net memory improvements with repeated testing of the same material after a single study trial. In the first experiment, we found hypermnesia across three trials for the recall of word solutions to Socratic stimuli (dictionary-like definitions of concepts), replicating Erdelyi, Buschke, and Finkelstein, and, for the first time using these materials, for their recognition. In the second experiment, we had two "yes/no" recognition groups: a Socratic stimuli group presented with concrete and abstract verbal materials and a word-only control group. Using signal detection measures, we found hypermnesia for concrete Socratic stimuli, and stable performance for abstract stimuli, across three recognition tests. The control group showed memory decrements across tests. We interpret these findings with the alternative retrieval pathways (ARP) hypothesis, contrasting it with alternative theories of hypermnesia, such as depth of processing, generation and retrieve-recognise. We conclude that recognition hypermnesia for concrete Socratic stimuli is a reliable phenomenon, which we found in two experiments involving both forced-choice and yes/no recognition procedures. PMID:25523628

  2. Attentional Disengagement from Emotional Stimuli in Schizophrenia

    Science.gov (United States)

    Strauss, Gregory P.; Llerena, Katiah; Gold, James M.

    2011-01-01

    Previous research indicates that abnormal attention-emotion interactions are related to symptom presentation in individuals with schizophrenia. However, the individual components of attention responsible for this dysfunction are unclear. In the current study we examined the possibility that schizophrenia patients with higher levels of negative symptoms (HI-NEG: n = 14) have greater difficulty disengaging attention from unpleasant stimuli than patients with low negative symptoms (LOW-NEG: n = 18) or controls (CN: n = 27). Participants completed an exogenous emotional cueing task that required them to focus on an initial emotional or neutral cue and subsequently shift attention to a separate location outside of foveal vision to detect a target stimulus (letter). Results indicated that HI-NEG patients had greater difficulty disengaging attention from unpleasant stimuli than CN or LOW-NEG patients; however, behavioral performance did not differ among the groups for pleasant stimuli. Higher self-reported trait negative affect was also associated with greater difficulty disengaging attention from unpleasant stimuli. Abnormalities in disengaging attention from unpleasant stimuli may thus play a critical role in the formation and maintenance of both negative symptoms and trait negative affect in individuals with schizophrenia. PMID:21703824

  3. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  4. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Full Text Available Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  5. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Science.gov (United States)

    Mossbridge, Julia A; Grabowecky, Marcia; Suzuki, Satoru

    2013-01-01

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873

  6. Corticofugal modulation of peripheral auditory responses

    Directory of Open Access Journals (Sweden)

    Paul Hinckley Delano

    2015-09-01

    Full Text Available The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body, inferior colliculus, cochlear nucleus and superior olivary complex, reaching the cochlea through olivocochlear fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular, (ii) cortico-(collicular)-olivocochlear and (iii) cortico-(collicular)-cochlear nucleus pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the olivocochlear reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on cochlear nucleus, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed.

  7. Corticofugal modulation of peripheral auditory responses.

    Science.gov (United States)

    Terreros, Gonzalo; Delano, Paul H

    2015-01-01

    The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647

  8. Visual hierarchical processing and lateralization of cognitive functions through domestic chicks' eyes.

    Directory of Open Access Journals (Sweden)

    Cinzia Chiandetti

    Full Text Available Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of the global forms precedes the analysis of the local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the brain. Non-human animals have perceptual organization and functional lateralization that are comparable to that of humans. Here we trained the domestic chick in a concurrent discrimination task involving hierarchical stimuli. Then, we evaluated the animals for analytic and holistic encoding strategies in a series of transformational tests by relying on a monocular occlusion technique. A local precedence emerged in both the left and the right hemisphere, adding further evidence in favour of analytic processing in non-human animals.

  9. Visual hierarchical processing and lateralization of cognitive functions through domestic chicks' eyes.

    Science.gov (United States)

    Chiandetti, Cinzia; Pecchia, Tommaso; Patt, Francesco; Vallortigara, Giorgio

    2014-01-01

    Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of the global forms precedes the analysis of the local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the brain. Non-human animals have perceptual organization and functional lateralization that are comparable to that of humans. Here we trained the domestic chick in a concurrent discrimination task involving hierarchical stimuli. Then, we evaluated the animals for analytic and holistic encoding strategies in a series of transformational tests by relying on a monocular occlusion technique. A local precedence emerged in both the left and the right hemisphere, adding further evidence in favour of analytic processing in non-human animals. PMID:24404163

  10. Finding the missing-stimulus mismatch negativity (MMN) in early psychosis: Altered MMN to violations of an auditory gestalt

    OpenAIRE

    Rudolph, Erica D.; Ells, Emma M.L.; Campbell, Debra J.; Abriel, Shelagh C.; Tibbo, Philip G; Salisbury, Dean F.; Fisher, Derek J.

    2015-01-01

    The mismatch negativity (MMN) is an EEG-derived event-related potential (ERP) elicited by any violation of a predicted auditory ‘rule’, regardless of whether one is attending to the stimuli, and is thought to reflect updating of the stimulus context. Chronic schizophrenia patients exhibit robust MMN deficits, while MMN reduction in first-episode and early phase psychosis is significantly less consistent. Traditional two-tone “oddball” MMN measures of sensory information processing may be cons...

  11. Early Cross-Modal Interactions in Auditory and Visual Cortex Underlie a Sound-Induced Visual Illusion

    OpenAIRE

    Mishra, Jyoti; Martinez, Antigona; Sejnowski, Terrence J.; Hillyard, Steven A.

    2007-01-01

    When a single flash of light is presented interposed between two brief auditory stimuli separated by 60–100 ms, subjects typically report perceiving two flashes (Shams et al., 2000, 2002). We investigated the timing and localization of the cortical processes that underlie this illusory flash effect in 34 subjects by means of 64-channel recordings of event-related potentials (ERPs). A difference ERP calculated to isolate neural activity associated with the illusory second flash revealed an ea...

  12. Prediction of hearing thresholds: Comparison of cortical evoked response audiometry and auditory steady state response audiometry techniques

    OpenAIRE

    Wong, LLN; Yeung, KNK

    2007-01-01

    The present study evaluated how well auditory steady state response (ASSR) and tone burst cortical evoked response audiometry (CERA) thresholds predict behavioral thresholds in the same participants. A total of 63 ears were evaluated. For ASSR testing, 100% amplitude modulated and 10% frequency modulated tone stimuli at a modulation frequency of 40 Hz were used. Behavioral thresholds were closer to CERA thresholds than ASSR thresholds. ASSR and CERA thresholds were closer to behavioral thresho...
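The mixed-modulation ASSR stimulus described in this record (100% AM, 10% FM, 40 Hz modulation rate) can be synthesized numerically. The sketch below takes only the modulation parameters from the abstract; the carrier frequency, duration, and sample rate are illustrative choices:

```python
import numpy as np

def assr_tone(carrier_hz=1000.0, mod_hz=40.0, am_depth=1.0,
              fm_depth=0.10, dur_s=1.0, fs=16000):
    """Sinusoidally amplitude- and frequency-modulated tone of the kind
    used to evoke a 40 Hz auditory steady-state response (ASSR)."""
    t = np.arange(int(dur_s * fs)) / fs
    # 100% sinusoidal AM, normalized so the peak envelope is 1
    envelope = (1 + am_depth * np.sin(2 * np.pi * mod_hz * t)) / (1 + am_depth)
    # 10% sinusoidal FM: instantaneous frequency swings around the carrier
    inst_freq = carrier_hz * (1 + fm_depth * np.sin(2 * np.pi * mod_hz * t))
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs  # numerical integration
    return envelope * np.sin(phase)
```

Driving both modulations at the same 40 Hz rate concentrates the evoked response at a single steady-state frequency, which is what makes the threshold estimation in ASSR audiometry tractable.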

  13. Development of working memory, speech perception and auditory temporal resolution in children with attention deficit hyperactivity disorder and language impairment

    OpenAIRE

    Norrelgen, Fritjof

    2002-01-01

    Speech perception (SP), verbal working memory (WM) and auditory temporal resolution (ATR) have been studied in children with attention deficit hyperactivity disorder (ADHD) and language impairment (LI), as well as in reference groups of typically developed children. A computerised method was developed, in which discrimination of same or different pairs of stimuli was tested. In a functional Magnetic Resonance Imaging (fMRI) study a similar test was used to explore the neural...

  14. Verbal Establishing Stimuli: Testing the Motivative Effect of Stimuli in a Derived Relation with Consequences

    Science.gov (United States)

    Ju, Winifred C.; Hayes, Steven C.

    2008-01-01

    The present study examined whether the presentation of stimuli in equivalence relations with consequences increases the operant behavior that produces these consequences. In Experiment 1, both normal words and experimentally trained equivalence stimuli did so with young children. In Experiment 2, results were similar with college students. Here, a…

  15. Aerotactile Integration from Distal Skin Stimuli

    OpenAIRE

    Derrick, Donald; Gick, Bryan

    2013-01-01

    Tactile sensations at extreme distal body locations can integrate with auditory information to alter speech perception among uninformed and untrained listeners. Inaudible air puffs were applied to participants' ankles, simultaneously with audible syllables having aspirated and unaspirated stop onsets. Syllables heard simultaneously with air puffs were more likely to be heard as aspirated. These results demonstrate that event-appropriate information from distal parts of the body integrates in ...

  16. Test of a motor theory of long-term auditory memory.

    Science.gov (United States)

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-05-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75-80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719

  17. Disentangling Sub-Millisecond Processes within an Auditory Transduction Chain

    Directory of Open Access Journals (Sweden)

    Gollisch Tim

    2005-01-01

    Every sensation begins with the conversion of a sensory stimulus into the response of a receptor neuron. Typically, this involves a sequence of multiple biophysical processes that cannot all be monitored directly. In this work, we present an approach that is based on analyzing different stimuli that cause the same final output, here defined as the probability that the receptor neuron fires a single action potential. Comparing such iso-response stimuli within the framework of nonlinear cascade models allows us to extract the characteristics of individual signal-processing steps with a temporal resolution much finer than the trial-to-trial variability of the measured output spike times. Applied to insect auditory receptor cells, the technique reveals the sub-millisecond dynamics of the eardrum vibration and of the electrical potential and yields a quantitative four-step cascade model. The model accounts for the tuning properties of this class of neurons and explains their high temporal resolution under natural stimulation. Owing to its simplicity and generality, the presented method is readily applicable to other nonlinear cascades and a large variety of signal-processing systems.

  18. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Directory of Open Access Journals (Sweden)

    Georg Berding

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus.

  19. Positron Emission Tomography Imaging Reveals Auditory and Frontal Cortical Regions Involved with Speech Perception and Loudness Adaptation.

    Science.gov (United States)

    Berding, Georg; Wilke, Florian; Rode, Thilo; Haense, Cathleen; Joseph, Gert; Meyer, Geerd J; Mamach, Martin; Lenarz, Minoo; Geworski, Lilli; Bengel, Frank M; Lenarz, Thomas; Lim, Hubert H

    2015-01-01

    Considerable progress has been made in the treatment of hearing loss with auditory implants. However, there are still many implanted patients that experience hearing deficiencies, such as limited speech understanding or vanishing perception with continuous stimulation (i.e., abnormal loudness adaptation). The present study aims to identify specific patterns of cerebral cortex activity involved with such deficiencies. We performed O-15-water positron emission tomography (PET) in patients implanted with electrodes within the cochlea, brainstem, or midbrain to investigate the pattern of cortical activation in response to speech or continuous multi-tone stimuli directly inputted into the implant processor that then delivered electrical patterns through those electrodes. Statistical parametric mapping was performed on a single subject basis. Better speech understanding was correlated with a larger extent of bilateral auditory cortex activation. In contrast to speech, the continuous multi-tone stimulus elicited mainly unilateral auditory cortical activity in which greater loudness adaptation corresponded to weaker activation and even deactivation. Interestingly, greater loudness adaptation was correlated with stronger activity within the ventral prefrontal cortex, which could be up-regulated to suppress the irrelevant or aberrant signals into the auditory cortex. The ability to detect these specific cortical patterns and differences across patients and stimuli demonstrates the potential for using PET to diagnose auditory function or dysfunction in implant patients, which in turn could guide the development of appropriate stimulation strategies for improving hearing rehabilitation. Beyond hearing restoration, our study also reveals a potential role of the frontal cortex in suppressing irrelevant or aberrant activity within the auditory cortex, and thus may be relevant for understanding and treating tinnitus. PMID:26046763

  20. Abnormal auditory forward masking pattern in the brainstem response of individuals with Asperger syndrome

    Directory of Open Access Journals (Sweden)

    Johan Källstrand

    2010-05-01

    Johan Källstrand (SensoDetect AB, Lund, Sweden), Olle Olsson (Department of Clinical Neuroscience, Section of Psychiatry, Lund University, Lund, Sweden), Sara Fristedt Nehlstedt (SensoDetect AB), Mia Ling Sköld (SensoDetect AB), Sören Nielzén (Lund University). Abstract: Abnormal auditory information processing has been reported in individuals with autism spectrum disorders (ASD). In the present study, auditory processing was investigated by recording auditory brainstem responses (ABRs) elicited by forward masking in adults diagnosed with Asperger syndrome (AS). Sixteen AS subjects were included in the forward masking experiment and compared to three control groups of matching age and gender, consisting of healthy individuals (n = 16), schizophrenic patients (n = 16), and attention deficit hyperactivity disorder patients (n = 16). The results showed that the AS subjects exhibited abnormally low activity in the early part of their ABRs that distinctly separated them from the three control groups. Specifically, wave III amplitudes were significantly lower in the AS group than in all control groups in the forward masking condition (P < 0.005), which was not the case in the baseline condition. Thus, electrophysiological measurement of ABRs to complex sound stimuli (e.g., forward masking) may lead to a better understanding of the underlying neurophysiology of AS. Future studies may further point to specific ABR characteristics in AS individuals that separate them from individuals diagnosed with other neurodevelopmental diseases. Keywords: Asperger syndrome, auditory brainstem response, forward masking, psychoacoustics

  1. Altered auditory BOLD response to conspecific birdsong in zebra finches with stuttered syllables.

    Directory of Open Access Journals (Sweden)

    Henning U Voss

    How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging, we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to the tutor's song, and more pronounced responses to conspecific song, primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or salience of song stimuli and the production of variant songs with stuttered syllables.

  2. Comparison of Auditory Event-Related Potential P300 in Sighted and Early Blind Individuals

    Directory of Open Access Journals (Sweden)

    Fatemeh Heidari

    2010-06-01

    Background and Aim: Following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. Event-related potentials provide accurate information about the time course of neural activation as well as perceptual and cognitive processes, and can be used to investigate this reorganization. In this study, the latency and amplitude of the auditory P300 were compared in sighted and early blind individuals aged 18-25 years. Methods: In this cross-sectional study, the auditory P300 potential was measured in a conventional oddball paradigm using two tone burst stimuli (1000 and 2000 Hz) in 40 sighted subjects and 19 early blind subjects with a mean age of 20.94 years. Results: The mean latency of the P300 in early blind subjects was significantly shorter than in sighted subjects (p = 0.00). There was no significant difference in amplitude between the two groups (p > 0.05). Conclusion: The reduced latency of the P300 in early blind subjects compared to sighted subjects probably indicates that automatic processing and information categorization are faster in early blind subjects because of sensory compensation. It seems that neural plasticity increases the rate of auditory processing and attention in early blind subjects.

  3. Formation and disruption of tonotopy in a large-scale model of the auditory cortex.

    Science.gov (United States)

    Tomková, Markéta; Tomek, Jakub; Novák, Ondřej; Zelenka, Ondřej; Syka, Josef; Brom, Cyril

    2015-10-01

    There is ample experimental evidence describing changes of tonotopic organisation in the auditory cortex due to environmental factors. In order to uncover the underlying mechanisms, we designed a large-scale computational model of the auditory cortex. The model has up to 100 000 Izhikevich spiking neurons of 17 different types and almost 21 million synapses, which evolve according to Spike-Timing-Dependent Plasticity (STDP), with an architecture consistent with existing anatomical observations. Validation of the model revealed alternating synchronised/desynchronised states and different modes of oscillatory activity. We provide insight into these phenomena by analysing the activity of neuronal subtypes and testing different causal interventions in the simulation. Our model is able to produce experimental predictions on a cell-type basis. To study the influence of environmental factors on the tonotopy, different types of auditory stimulation during the evolution of the network were modelled and compared. We found that strong white noise resulted in completely disrupted tonotopy, which is consistent with in vivo experimental observations. Stimulation with pure tones or spontaneous activity led to a similar degree of tonotopy as in the initial state of the network. Interestingly, weak white noise led to a substantial increase in tonotopy. As STDP was the only mechanism of plasticity in our model, our results suggest that STDP is a sufficient condition for the emergence and disruption of tonotopy under various types of stimuli. The presented large-scale model of the auditory cortex and the core simulator, SUSNOIMAC, have been made publicly available. PMID:26344164
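    The model's two building blocks, Izhikevich spiking neurons and pair-based STDP, can be sketched in a few lines. This is a minimal illustration with standard textbook parameters (a regular-spiking neuron, exponential STDP window), not the authors' SUSNOIMAC code:

```python
import math

def izhikevich_spikes(I, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate one regular-spiking Izhikevich neuron (1 ms Euler steps);
    return spike times in ms for a constant input current I."""
    v, u, spikes = c, b * c, []
    for t in range(steps):
        v += 0.04 * v * v + 5 * v + 140 - u + I   # membrane potential update
        u += a * (b * v - u)                      # recovery variable update
        if v >= 30.0:                             # spike: record and reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

def stdp(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: weight change for a post-minus-pre spike-time
    difference dt (ms); potentiation only if the pre spike leads."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

    With a constant suprathreshold input the sketched neuron fires tonically, and stdp() is positive only for causal pre-before-post pairings; in a network model like the one described, such weight changes accumulated over all spike pairs are what reshape the tonotopic map.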

  4. Visual Hierarchical Processing and Lateralization of Cognitive Functions through Domestic Chicks' Eyes

    OpenAIRE

    Chiandetti, Cinzia; Pecchia, Tommaso; Patt, Francesco; Vallortigara, Giorgio

    2014-01-01

    Hierarchical stimuli have proven effective for investigating principles of visual organization in humans. A large body of evidence suggests that the analysis of the global forms precedes the analysis of the local forms in our species. Studies on lateralization also indicate that analytic and holistic encoding strategies are separated between the two hemispheres of the brain. This raises the question of whether precedence effects may reflect the activation of lateralized functions within the b...

  5. Stimuli responsive nanomaterials for controlled release applications

    KAUST Repository

    Li, Song

    2012-01-01

    The controlled release of therapeutics has been one of the major challenges for scientists and engineers during the past three decades. Coupled with excellent biocompatibility profiles, various nanomaterials have shown great promise for biomedical applications. Stimuli-responsive nanomaterials guarantee the controlled release of cargo to a given location, at a specific time, and in an accurate amount. In this review, we survey the major stimuli that are currently used to achieve the ultimate goal of controlled and targeted release by "smart" nanomaterials. The most heavily explored strategies include (1) pH-, (2) enzyme-, (3) redox-, (4) magnetic-, and (5) light-triggered release.

  6. Increasing Working Memory Load Reduces Processing of Cross-Modal Task-Irrelevant Stimuli Even after Controlling for Task Difficulty and Executive Capacity

    Science.gov (United States)

    Simon, Sharon S.; Tusch, Erich S.; Holcomb, Phillip J.; Daffner, Kirk R.

    2016-01-01

    The classic account of the load theory (LT) of attention suggests that increasing cognitive load leads to greater processing of task-irrelevant stimuli due to competition for limited executive resource that reduces the ability to actively maintain current processing priorities. Studies testing this hypothesis have yielded widely divergent outcomes. The inconsistent results may, in part, be related to variability in executive capacity (EC) and task difficulty across subjects in different studies. Here, we used a cross-modal paradigm to investigate whether augmented working memory (WM) load leads to increased early distracter processing, and controlled for the potential confounders of EC and task difficulty. Twenty-three young subjects were engaged in a primary visual WM task, under high and low load conditions, while instructed to ignore irrelevant auditory stimuli. Demands of the high load condition were individually titrated to make task difficulty comparable across subjects with differing EC. Event-related potentials (ERPs) were used to measure neural activity in response to stimuli presented in both the task relevant modality (visual) and task-irrelevant modality (auditory). Behavioral results indicate that the load manipulation and titration procedure of the primary visual task were successful. ERPs demonstrated that in response to visual target stimuli, there was a load-related increase in the posterior slow wave, an index of sustained attention and effort. Importantly, under high load, there was a decrease of the auditory N1 in response to distracters, a marker of early auditory processing. These results suggest that increased WM load is associated with enhanced attentional engagement and protection from distraction in a cross-modal setting, even after controlling for task difficulty and EC. Our findings challenge the classic LT and offer support for alternative models.

  7. Hierarchical topic modeling with nested hierarchical Dirichlet process

    Institute of Scientific and Technical Information of China (English)

    Yi-qun DING; Shan-ping LI; Zhen ZHANG; Bin SHEN

    2009-01-01

    This paper deals with the statistical modeling of latent topic hierarchies in text corpora. The height of the topic tree is assumed to be fixed, while the number of topics on each level is unknown a priori and must be inferred from the data. Taking a nonparametric Bayesian approach to this problem, we propose a new probabilistic generative model based on the nested hierarchical Dirichlet process (nHDP) and present a Markov chain Monte Carlo sampling algorithm for the inference of the topic tree structure as well as the word distribution of each topic and the topic distribution of each document. Our theoretical analysis and experimental results show that this model can produce a more compact hierarchical topic structure and capture more fine-grained topic relationships than the hierarchical latent Dirichlet allocation model.
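    The tree prior underlying such models is easiest to see in its sequential form, the nested Chinese restaurant process: each document picks a root-to-leaf path, at every level either joining an already-popular child node or opening a new one. A toy path sampler (illustrative of the nested-process idea only, with a made-up concentration parameter gamma; not the paper's nHDP inference algorithm):

```python
import random

def ncrp_paths(n_docs, depth=3, gamma=1.0, seed=0):
    """Assign each document a root-to-leaf path through a topic tree.

    At each level a document joins an existing child with probability
    proportional to how many documents chose it before, or creates a
    new child with probability proportional to gamma."""
    rng = random.Random(seed)
    counts = {}   # node (tuple path) -> {child_id: number of documents}
    paths = []
    for _ in range(n_docs):
        path = ()
        for _ in range(depth):
            children = counts.setdefault(path, {})
            r = rng.uniform(0, sum(children.values()) + gamma)
            acc, chosen = 0.0, None
            for child, n in children.items():
                acc += n
                if r < acc:
                    chosen = child
                    break
            if chosen is None:                 # open a new child node
                chosen = len(children)
            children[chosen] = children.get(chosen, 0) + 1
            path = path + (chosen,)
        paths.append(path)
    return paths
```

    The rich-get-richer dynamics mean many documents share subtrees, which is what yields a compact hierarchy: the number of children per node grows only logarithmically with the number of documents passing through it.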

  8. Auditory hallucinations suppressed by etizolam in a patient with schizophrenia.

    Science.gov (United States)

    Benazzi, F; Mazzoli, M; Rossi, E

    1993-10-01

    A patient presented with a 15 year history of schizophrenia with auditory hallucinations. Though unresponsive to prolonged trials of neuroleptics, the auditory hallucinations disappeared with etizolam. PMID:7902201

  9. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  10. Differential maturation of brain signal complexity in the human auditory and visual system

    Directory of Open Access Journals (Sweden)

    Sarah Lippe

    2009-11-01

    Brain development carries with it a large number of structural changes at the local level which impact the functional interactions of distributed neuronal networks for perceptual processing. Such changes enhance information processing capacity, which can be indexed by estimation of neural signal complexity. Here, we show that during development, EEG signal complexity increases from one month to 5 years of age in response to auditory and visual stimulation. However, the rates of change in complexity were not equivalent for the two responses. Infants' signal complexity for the visual condition was greater than auditory signal complexity, whereas adults showed the same level of complexity to both types of stimuli. The differential rates of complexity change may reflect a combination of innate and experiential factors on the structure and function of the two sensory systems.
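    Signal complexity of the kind indexed here is commonly estimated with entropy measures; sample entropy is one standard choice. A minimal sketch (illustrative only; the study's actual estimator may differ, and in practice the tolerance r is scaled to the signal's standard deviation):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m
    within Chebyshev distance r, and A counts those pairs that still match
    when extended to length m+1. Lower values = more regular signal."""
    n = len(x)
    A = B = 0
    for i in range(n - m):
        for j in range(i + 1, n - m):
            if max(abs(x[i + k] - x[j + k]) for k in range(m)) <= r:
                B += 1
                if abs(x[i + m] - x[j + m]) <= r:
                    A += 1
    return math.inf if A == 0 or B == 0 else -math.log(A / B)
```

    A perfectly regular signal scores 0 and an irregular one scores higher, which is the sense in which EEG "complexity" can increase over development.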

  11. The Effect of Objective Room Acoustic Parameters on Auditory Steady-State Responses

    DEFF Research Database (Denmark)

    Zapata Rodriguez, Valentina; M. Harte, James; Jeong, Cheol-Ho; Brunskog, Jonas

    2016-01-01

    Verification that hearing aids (HA) have been fitted correctly in pre-lingual infants and hard-to-test adults is an important emerging application in technical audiology. These test subjects are unable to undergo reliable behavioral testing, so an objective method is required. Auditory steady-state responses (ASSR), recorded in a sound field, are a promising technology for verifying the hearing aid fitting. The test involves the presentation of the auditory stimuli via a loudspeaker, unlike the usual procedure of delivering them via insert earphones. Room reverberation may significantly affect the features of the stimulus important for eliciting a strong electrophysiological response, and thus complicate its detection. This study investigates the effect of different room acoustic conditions on recorded ASSRs via an auralisation approach using insert earphones. Fifteen normal-hearing listeners were ...

  12. Intersubject information mapping: revealing canonical representations of complex natural stimuli

    Directory of Open Access Journals (Sweden)

    Nikolaus Kriegeskorte

    2015-03-01

    Real-world time-continuous stimuli such as video promise greater naturalism for studies of brain function. However, modeling the stimulus variation is challenging and introduces a bias in favor of particular descriptive dimensions. Alternatively, we can look for brain regions whose signal is correlated between subjects, essentially using one subject to model another. Intersubject correlation mapping (ICM) allows us to find brain regions driven in a canonical manner across subjects by a complex natural stimulus. However, it requires a direct voxel-to-voxel match between the spatiotemporal activity patterns and is thus only sensitive to common activations sufficiently extended to match up in Talairach space (or in an alternative, e.g. cortical-surface-based, common brain space). Here we introduce the more general approach of intersubject information mapping (IIM). For each brain region, IIM determines how much information is shared between the subjects' local spatiotemporal activity patterns. We estimate the intersubject mutual information using canonical correlation analysis applied to voxels within a spherical searchlight centered on each voxel in turn. The intersubject information estimate is invariant to linear transforms including spatial rearrangement of the voxels within the searchlight. This invariance to local encoding will be crucial in exploring fine-grained brain representations, which cannot be matched up in a common space and, more fundamentally, might be unique to each individual – like fingerprints. IIM yields a continuous brain map, which reflects intersubject information in fine-grained patterns. Performed on data from functional magnetic resonance imaging (fMRI) of subjects viewing the same television show, IIM and ICM both highlighted sensory representations, including primary visual and auditory cortices. However, IIM revealed additional regions in higher association cortices, namely temporal pole and orbitofrontal cortex. These ...
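    The searchlight statistic described above boils down to canonical correlation between two subjects' local multivoxel time courses. A bare-bones version via the QR decomposition (a sketch: no regularization and no searchlight loop, and the variable names are illustrative):

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between X (time x voxels, subject 1) and
    Y (time x voxels, subject 2) within one searchlight.

    The singular values of Qx'Qy are the cosines of the principal angles
    between the two centered column spaces, i.e. the canonical correlations."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)
```

    Under Gaussian assumptions the shared information is -0.5 * sum(log(1 - rho**2)) over the canonical correlations rho, and the whole quantity is unchanged by any invertible linear re-encoding of either subject's voxels, which is the invariance the abstract emphasizes.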

  13. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  14. Auditory brainstem response in dolphins.

    OpenAIRE

    Ridgway, S. H.; Bullock, T H; Carder, D.A.; Seeley, R L; Woods, D.; Galambos, R

    1981-01-01

    We recorded the auditory brainstem response (ABR) in four dolphins (Tursiops truncatus and Delphinus delphis). The ABR evoked by clicks consists of seven waves within 10 msec; two waves often contain dual peaks. The main waves can be identified with those of humans and laboratory mammals; in spite of a much longer path, the latencies of the peaks are almost identical to those of the rat. The dolphin ABR waves increase in latency as the intensity of a sound decreases by only 4 microseconds/dec...

  15. Categorization of Multidimensional Stimuli by Pigeons

    Science.gov (United States)

    Berg, Mark E.; Grace, Randolph C.

    2011-01-01

    Six pigeons responded in a visual category learning task in which the stimuli were dimensionally separable Gabor patches that varied in frequency and orientation. We compared performance in two conditions which varied in terms of whether accurate performance required that responding be controlled jointly by frequency and orientation, or…

  16. Stimuli-responsive polymers for nanobiotechnologies

    Czech Academy of Sciences Publication Activity Database

    Horák, Daniel; Rittich, B.; Španová, A.

    Nagaoka: Nagaoka University of Technology, 2008, pp. 71-74. [International Symposium: Global Renaissance by Green Energy Revolution /8./. Nagaoka (JP), 22.01.2008-23.01.2008] R&D Projects: GA MŠk 2B06053 Institutional research plan: CEZ:AV0Z40500505 Keywords: stimuli-responsive * nanobiotechnologies * microspheres Subject RIV: CD - Macromolecular Chemistry

  17. Musicians' Perception of Beat in Monotonic Stimuli.

    Science.gov (United States)

    Duke, Robert A.

    1989-01-01

    Assesses musicians' perceptions of beat in monotonic stimuli and attempts to define empirically the range of perceived beat tempo in music. Subjects performed a metric pulse in response to periodic stimulus tones. Results indicate a relatively narrow range within which beats are perceived by trained musicians. (LS)

  18. Cortical Gating of Oropharyngeal Sensory Stimuli

    Directory of Open Access Journals (Sweden)

    KarenWheeler-Hegland

    2011-01-01

    Somatosensory evoked potentials provide a measure of cortical neuronal activation in response to various types of sensory stimuli. In order to prevent flooding of the cortex with redundant information, various sensory stimuli are gated cortically such that the response to stimulus 2 (S2) is significantly reduced in amplitude compared to stimulus 1 (S1). Upper airway protective mechanisms, such as swallowing and cough, are dependent on sensory input for triggering and modifying their motor output. Thus, it was hypothesized that central neural gating would be absent for paired air puff stimuli applied to the oropharynx. Twenty-three healthy adults (18-35 years) served as research participants. Pharyngeal sensory evoked potentials (PSEPs) were measured via a 32-electrode cap (10-20 system) connected to a SynAmps2 Neuroscan EEG system. Paired-pulse air puffs were delivered to the oropharynx with an interstimulus interval of 500 ms using a thin polyethylene tube connected to a flexible laryngoscope. Data were analyzed using descriptive statistics and a repeated measures analysis of variance. There were no significant differences found between the amplitudes of S1 and S2 for any of the four component PSEP peaks. Mean gating ratios were above 0.90 for each peak. Results support our hypothesis that sensory central neural gating would be absent for component PSEP peaks with paired-pulse stimuli delivered to the oropharynx. This may be related to the need for constant sensory monitoring necessary for adequate airway protection associated with swallowing and coughing.
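    The gating ratio reported above is simply the per-peak S2/S1 amplitude ratio: values near 1 mean the second stimulus was not suppressed. A trivial sketch (the peak names are the usual evoked-potential components, but the microvolt amplitudes are made-up illustrative values):

```python
def gating_ratios(s1_peaks, s2_peaks):
    """Per-peak S2/S1 amplitude ratios: ~1.0 = no gating, << 1.0 = strong gating."""
    return {peak: s2_peaks[peak] / s1_peaks[peak] for peak in s1_peaks}

# Hypothetical amplitudes (microvolts) for the four PSEP component peaks:
s1 = {"P1": 2.1, "N1": 4.0, "P2": 3.5, "N2": 2.8}
s2 = {"P1": 2.0, "N1": 3.8, "P2": 3.3, "N2": 2.7}
ratios = gating_ratios(s1, s2)
print(all(r > 0.9 for r in ratios.values()))  # prints True: gating absent
```

    In a classic gating paradigm (e.g. cutaneous stimulation), the S2 amplitude is roughly halved, so ratios near 0.5 would be expected instead.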

  19. Cyclodextrin-Mediated Hierarchical Self-Assembly and Its Potential in Drug Delivery Applications.

    Science.gov (United States)

    Antoniuk, Iurii; Amiel, Catherine

    2016-09-01

    Hierarchical self-assembly exploits various non-covalent interactions to manufacture sophisticated organized systems at multiple length scales, with properties of interest to the pharmaceutical industry such as spatially controlled drug loading and multiresponsiveness to external stimuli. Cyclodextrin (CD)-mediated host-guest interactions have proven to be an efficient tool for constructing hierarchical architectures, primarily due to the high specificity and reversibility of the inclusion complexation of CDs with a number of hydrophobic guest molecules, their excellent bioavailability, and ease of chemical modification. In this review, we outline recent progress in the development of CD-based hierarchical architectures such as nanoscale drug and gene delivery carriers and physically cross-linked supramolecular hydrogels designed for the sustained release of actives. PMID:27342436

  20. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  1. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback

    Directory of Open Access Journals (Sweden)

    Larson Charles R

    2011-06-01

    Background: The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of the sensory consequences of self-produced motor actions. In the auditory system, this effect has been suggested to result in suppression of sensory neural responses to self-produced voice, predicted by the efference copies during vocal production, in comparison with passive listening to playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results: Suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of the PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Conclusions: The findings suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.
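    The cent is a logarithmic pitch unit: 100 cents is one equal-tempered semitone and 1200 cents is an octave. The shift magnitudes above translate into frequency ratios by a standard conversion (shown here with an arbitrary 200 Hz base pitch):

```python
def shift_hz(f0, cents):
    """Frequency after a pitch shift of `cents` relative to base frequency f0."""
    return f0 * 2.0 ** (cents / 1200.0)

# A +400 cent shift of a 200 Hz voice fundamental:
print(round(shift_hz(200.0, 400), 1))   # prints 252.0
```

    So the largest stimulus in the study, +400 cents, corresponds to a frequency ratio of 2^(1/3) ≈ 1.26, i.e. a major third above the speaker's own pitch.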

  2. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587

  3. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  4. Loudness function derives from data on electrical discharge rates in auditory nerve fibers

    Science.gov (United States)

    Howes, W. L.

    1973-01-01

    Judgements of the loudness of pure-tone sound stimuli yield a loudness function which relates perceived loudness to stimulus amplitude. A loudness function is derived from physical evidence alone without regard to human judgments. The resultant loudness function is L=K(q-q0), where L is loudness, q is effective sound pressure (specifically q0 at the loudness threshold), and K is generally a weak function of the number of stimulated auditory nerve fibers. The predicted function is in agreement with loudness judgment data reported by Warren, which imply that, in the suprathreshold loudness regime, decreasing the sound-pressure level by 6 db results in halving the loudness.
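
    The predicted halving can be checked arithmetically. A minimal sketch (the function and values below are illustrative, not from the paper): a 6 dB drop in sound-pressure level scales effective pressure by 10^(-6/20) ≈ 0.5, so L = K(q - q0) is itself roughly halved whenever q ≫ q0.

```python
# Howes' derived loudness function: L = K * (q - q0), with q the effective
# sound pressure and q0 its value at the loudness threshold.
def loudness(q, q0=1.0, K=1.0):
    return K * (q - q0)

# A 6 dB drop in sound-pressure level scales pressure by 10**(-6/20) ~ 0.501.
q1 = 1000.0                      # suprathreshold regime: q >> q0
q2 = q1 * 10 ** (-6 / 20)

ratio = loudness(q2) / loudness(q1)
print(round(ratio, 2))           # 0.5: loudness halves, as in Warren's data
```

    Near threshold (q comparable to q0) the ratio deviates from 0.5, which is consistent with the function's restriction to the suprathreshold regime.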

  5. Visual, Auditory, and Cross Modal Sensory Processing in Adults with Autism: An EEG Power and BOLD fMRI Investigation

    Directory of Open Access Journals (Sweden)

    Elizabeth C Hames

    2016-04-01

    Full Text Available Electroencephalography (EEG) and Blood Oxygen Level Dependent Functional Magnetic Resonance Imaging (BOLD fMRI) assessed the neurocorrelates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20-28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs.

  6. Temporal correlation between auditory neurons and the hippocampal theta rhythm induced by novel stimulations in awake guinea pigs.

    Science.gov (United States)

    Liberman, Tamara; Velluti, Ricardo A; Pedemonte, Marisa

    2009-11-17

    The hippocampal theta rhythm is associated with the processing of sensory systems such as touch, smell, vision and hearing, as well as with motor activity, the modulation of autonomic processes such as cardiac rhythm, and learning and memory processes. The discovery of temporal correlation (phase locking) between the theta rhythm and both visual and auditory neuronal activity has led us to postulate the participation of such rhythm in the temporal processing of sensory information. In addition, changes in attention can modify both the theta rhythm and the auditory and visual sensory activity. The present report tested the hypothesis that the temporal correlation between auditory neuronal discharges in the inferior colliculus central nucleus (ICc) and the hippocampal theta rhythm could be enhanced by changes in sensory stimulation. We presented chronically implanted guinea pigs with auditory stimuli that varied over time, and recorded the auditory response during wakefulness. It was observed that the stimulation shifts were capable of producing the temporal phase correlations between the theta rhythm and the ICc unit firing, and they differed depending on the stimulus change performed. Such correlations disappeared approximately 6 s after the change presentation. Furthermore, the power of the hippocampal theta rhythm increased in half of the cases presented with a stimulation change. Based on these data, we propose that the degree of correlation between the unitary activity and the hippocampal theta rhythm varies with--and therefore may signal--stimulus novelty. PMID:19716364

  7. Electrophysiological mismatch response recorded in awake pigeons from the avian functional equivalent of the primary auditory cortex.

    Science.gov (United States)

    Schall, Ulrich; Müller, Bernhard W; Kärgel, Christian; Güntürkün, Onur

    2015-03-25

    The neural response to occasional variations in acoustic stimuli in a regular sequence of sounds generates an N-methyl-D-aspartate receptor-modulated event-related potential in the primary auditory cortex of primates and rodents, known as mismatch negativity (MMN). The current study investigated MMN in pigeons (Columba livia L) through intracranial recordings from Field L of the caudomedial nidopallium, the avian functional equivalent of the mammalian primary auditory cortex. Auditory evoked field potentials were recorded from awake birds using a low-frequency (800 Hz) and high-frequency (1400 Hz) deviant auditory oddball procedure with deviant-as-standard (flip-flop design) and multiple-standard control conditions. An MMN-like field potential was recorded and was blocked by systemic administration of 5 mg/kg ketamine. Our results parallel human and rodent findings of an MMN-like event-related potential, suggesting similar auditory sensory memory mechanisms in birds and mammals that are either homologous, inherited from a common ancestor 300 million years ago, or the product of convergent evolution. PMID:25646582

  8. Effect of Auditory Constraints on Motor Learning Depends on Stage of Recovery Post Stroke

    Directory of Open Access Journals (Sweden)

    Viswanath eAluru

    2014-06-01

    Full Text Available In order to develop evidence-based rehabilitation protocols post stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in twenty subjects with chronic hemiparesis, and used a bimanual wrist extension task using a custom-made wrist trainer to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post stroke.

  9. Effect of auditory constraints on motor performance depends on stage of recovery post-stroke.

    Science.gov (United States)

    Aluru, Viswanath; Lu, Ying; Leung, Alan; Verghese, Joe; Raghavan, Preeti

    2014-01-01

    In order to develop evidence-based rehabilitation protocols post-stroke, one must first reconcile the vast heterogeneity in the post-stroke population and develop protocols to facilitate motor learning in the various subgroups. The main purpose of this study is to show that auditory constraints interact with the stage of recovery post-stroke to influence motor learning. We characterized the stages of upper limb recovery using task-based kinematic measures in 20 subjects with chronic hemiparesis. We used a bimanual wrist extension task, performed with a custom-made wrist trainer, to facilitate learning of wrist extension in the paretic hand under four auditory conditions: (1) without auditory cueing; (2) to non-musical happy sounds; (3) to self-selected music; and (4) to a metronome beat set at a comfortable tempo. Two bimanual trials (15 s each) were followed by one unimanual trial with the paretic hand over six cycles under each condition. Clinical metrics, wrist and arm kinematics, and electromyographic activity were recorded. Hierarchical cluster analysis with the Mahalanobis metric based on baseline speed and extent of wrist movement stratified subjects into three distinct groups, which reflected their stage of recovery: spastic paresis, spastic co-contraction, and minimal paresis. In spastic paresis, the metronome beat increased wrist extension, but also increased muscle co-activation across the wrist. In contrast, in spastic co-contraction, no auditory stimulation increased wrist extension and reduced co-activation. In minimal paresis, wrist extension did not improve under any condition. The results suggest that auditory task constraints interact with stage of recovery during motor learning after stroke, perhaps due to recruitment of distinct neural substrates over the course of recovery. The findings advance our understanding of the mechanisms of progression of motor recovery and lay the foundation for personalized treatment algorithms post-stroke. PMID
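
    The stratification step can be sketched as follows, using synthetic two-dimensional features (baseline speed and extent of wrist movement, the two measures named above) standing in for the real kinematic data. This illustrates hierarchical clustering with a Mahalanobis metric, not the authors' pipeline; the group sizes and feature values are invented.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Synthetic features per subject -- [baseline speed, extent of wrist
# movement] -- with three well-separated subgroups standing in for spastic
# paresis, spastic co-contraction, and minimal paresis.
groups = [
    rng.normal([0.2, 0.1], 0.03, size=(7, 2)),
    rng.normal([0.6, 0.4], 0.03, size=(7, 2)),
    rng.normal([1.0, 0.9], 0.03, size=(6, 2)),
]
X = np.vstack(groups)

# The Mahalanobis metric accounts for covariance between the two features.
VI = np.linalg.inv(np.cov(X.T))
Z = linkage(pdist(X, metric='mahalanobis', VI=VI), method='average')

labels = fcluster(Z, t=3, criterion='maxclust')
print(labels)   # each subgroup receives its own cluster label
```

    Cutting the dendrogram at three clusters (`criterion='maxclust'`) mirrors the three recovery stages reported in the study.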

  10. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation. PMID:21413843

  11. The temporal primacy of self-related stimuli and negative stimuli: an ERP-based comparative study.

    Science.gov (United States)

    Zhu, Min; Luo, Junlong; Zhao, Na; Hu, Yinying; Yan, Lingyue; Gao, Xiangping

    2016-10-01

    Numerous studies have shown that attention biases exist for self-related and negative stimuli. Few studies, however, have compared the effects of such stimuli on the neural mechanisms of early attentional alertness and subsequent cognitive processing. The purpose of the present study was to examine the temporal primacy of both self-related stimuli and negative stimuli at the neurophysiological level. In a modified oddball task, event-related potentials of the deviant stimuli (i.e., self-face, negative face and neutral face) were recorded. Results revealed that larger P2 amplitudes were elicited by self-related and negative stimuli than by neutral stimuli. Negative stimuli, however, elicited shorter P2 latencies than self-related and neutral stimuli. As for the N2 component, self-related and negative stimuli elicited smaller amplitudes and shorter latencies than neutral stimuli, but otherwise did not differ. Self-related stimuli also elicited larger P3 and late positive component (LPC) amplitudes than negative and neutral stimuli. The pattern of results suggests that the primacy of negative stimuli occurred at an early attention stage of processing, while the primacy of self-related stimuli occurred at the subsequent cognitive evaluation and memory stage. PMID:26513485

  12. From sounds to words: a neurocomputational model of adaptation, inhibition and memory processes in auditory change detection.

    Science.gov (United States)

    Garagnani, Max; Pulvermüller, Friedemann

    2011-01-01

    Most animals detect sudden changes in trains of repeated stimuli but only some can learn a wide range of sensory patterns and recognise them later, a skill crucial for the evolutionary success of higher mammals. Here we use a neural model mimicking the cortical anatomy of sensory and motor areas and their connections to explain brain activity indexing auditory change and memory access. Our simulations indicate that while neuronal adaptation and local inhibition of cortical activity can explain aspects of change detection as observed when a repeated unfamiliar sound changes in frequency, the brain dynamics elicited by auditory stimulation with well-known patterns (such as meaningful words) cannot be accounted for on the basis of adaptation and inhibition alone. Specifically, we show that the stronger brain responses observed to familiar stimuli in passive oddball tasks are best explained in terms of activation of memory circuits that emerged in the cortex during the learning of these stimuli. Such memory circuits, and the activation enhancement they entail, are absent for unfamiliar stimuli. The model illustrates how basic neurobiological mechanisms, including neuronal adaptation, lateral inhibition, and Hebbian learning, underlie neuronal assembly formation and dynamics, and differentially contribute to the brain's major change detection response, the mismatch negativity. PMID:20728545
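
    The proposed mechanisms can be caricatured in a few lines. The following toy rate model is a sketch of the adaptation-plus-Hebbian-learning idea, not the authors' implementation, and all sizes and constants are invented: Hebbian learning builds a memory circuit for a "familiar" pattern, whose feedback boosts responses to that pattern, while an adaptation factor scales down responses to repeated stimuli.

```python
import numpy as np

n = 16
W = np.full((n, n), 0.05)                       # weak, uniform connectivity

familiar = np.zeros(n); familiar[:8] = 1.0      # the "learned word"
unfamiliar = np.zeros(n); unfamiliar[8:] = 1.0  # equally sparse, never learned

# Hebbian learning: co-active units strengthen their mutual connections,
# forming a memory circuit for the familiar pattern.
for _ in range(20):
    W += 0.01 * np.outer(familiar, familiar)

def response(pattern, adaptation=0.1):
    # Feedforward drive plus circuit feedback, attenuated by adaptation.
    drive = pattern.sum() + (W @ pattern).sum()
    return (1.0 - adaptation) * drive

# A rare deviant (weak adaptation) evokes a larger response when it matches
# a learned circuit -- the model's account of enhanced responses to words.
print(response(familiar) > response(unfamiliar))          # True
# Repetition (strong adaptation) suppresses the response to the same input.
print(response(familiar, 0.8) < response(familiar, 0.1))  # True
```

    The two printed comparisons separate the two mechanisms: adaptation alone explains reduced responses to repeated sounds, while the learned circuit is needed to explain the enhancement for familiar stimuli.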

  13. Chemical evolution in hierarchical scenarios

    OpenAIRE

    Tissera P.B.

    2012-01-01

    We studied the chemical properties of Milky-Way mass galaxies. We found common global chemical patterns with particularities that reflect their different assembly histories in a hierarchical scenario. We carried out a comprehensive analysis of the dynamical components (central spheroid, disc, inner and outer haloes) and their chemical properties.

  14. Hierarchical Microaggressions in Higher Education

    Science.gov (United States)

    Young, Kathryn; Anderson, Myron; Stewart, Saran

    2015-01-01

    Although there has been substantial research examining the effects of microaggressions in the public sphere, there has been little research that examines microaggressions in the workplace. This study explores the types of microaggressions that affect employees at universities. We coin the term "hierarchical microaggression" to represent…

  15. Hierarchical classification of social groups

    OpenAIRE

    Витковская, Мария

    2001-01-01

    Classification problems are important for every science, sociology included. Social phenomena, examined from the aspect of classification of social groups, can be studied more deeply. At present, no single common classification of groups exists. This article offers a hierarchical classification of social groups.

  16. Stimulus-invariant processing and spectrotemporal reverse correlation in primary auditory cortex.

    Science.gov (United States)

    Klein, David J; Simon, Jonathan Z; Depireux, Didier A; Shamma, Shihab A

    2006-04-01

    The spectrotemporal receptive field (STRF) provides a versatile and integrated, spectral and temporal, functional characterization of single cells in primary auditory cortex (AI). In this paper, we explore the origin of, and relationship between, different ways of measuring and analyzing an STRF. We demonstrate that STRFs measured using a spectrotemporally diverse array of broadband stimuli, such as dynamic ripples, spectrotemporally white noise, and temporally orthogonal ripple combinations (TORCs), are very similar, confirming earlier findings that the STRF is a robust linear descriptor of the cell. We also present a new deterministic analysis framework that employs the Fourier series to describe the spectrotemporal modulations contained in the stimuli and responses. Additional insights into the STRF measurements, including the nature and interpretation of measurement errors, are presented using the Fourier transform, coupled to singular-value decomposition (SVD), and variability analyses including bootstrap. The results promote the utility of the STRF as a core functional descriptor of neurons in AI. PMID:16518572
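
    The reverse-correlation idea behind STRF measurement can be sketched with a simulated linear neuron (an illustration of the general technique, not the paper's analysis code; all dimensions and profiles are invented): with spectrotemporally white noise, the response-weighted average of preceding stimulus frames converges to the neuron's STRF, and SVD then reveals whether the recovered filter is separable.

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_lag, n_t = 12, 8, 20000

# Ground-truth separable (rank-1) STRF: a spectral profile times a temporal one.
spectral = np.exp(-0.5 * ((np.arange(n_freq) - 5) / 1.5) ** 2)
temporal = np.sin(np.pi * np.arange(n_lag) / n_lag)
strf_true = np.outer(spectral, temporal)

stimulus = rng.standard_normal((n_freq, n_t))   # white-noise "spectrogram"

# Linear response: inner product of the STRF with the last n_lag frames.
response = np.zeros(n_t)
for t in range(n_lag, n_t):
    response[t] = np.sum(strf_true * stimulus[:, t - n_lag:t])

# Reverse correlation: response-weighted average of the stimulus history.
strf_est = np.zeros((n_freq, n_lag))
for t in range(n_lag, n_t):
    strf_est += response[t] * stimulus[:, t - n_lag:t]
strf_est /= n_t

# The estimate recovers the true STRF, and its singular-value spectrum is
# dominated by one component, i.e. the filter is essentially separable.
corr = np.corrcoef(strf_est.ravel(), strf_true.ravel())[0, 1]
s = np.linalg.svd(strf_est, compute_uv=False)
print(corr > 0.95, s[0] / s.sum() > 0.8)   # True True
```

    For white noise the stimulus autocorrelation is the identity, which is what makes the response-weighted average an unbiased STRF estimate; for correlated stimuli such as TORCs a normalization step is needed.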

  17. Analytical Evaluation of Hierarchical Planning Systems

    OpenAIRE

    Dempster, M.A.H.; Fisher, M.L.; Jansen, L; Lageweg, B.J.; J. K. Lenstra; Rinnooy Kan, A.H.G.

    1984-01-01

    Hierarchical planning systems have become popular for multilevel decision problems. After reviewing the concept of hierarchical planning and citing some examples, the authors describe a method for analytic evaluation of a hierarchical planning system. They show that multilevel decision problems can be nicely modeled as multistage stochastic programs. Then any hierarchical planning system can be measured against the yardstick of optimality in this stochastic program. They demonstrate this ap...

  18. A Comparison of Three Auditory Discrimination-Perception Tests

    Science.gov (United States)

    Koenke, Karl

    1978-01-01

    Comparisons were made between scores of 52 third graders on three measures of auditory discrimination: Wepman's Auditory Discrimination Test, the Goldman-Fristoe Woodcock (GFW) Test of Auditory Discrimination, and the Kimmell-Wahl Screening Test of Auditory Perception (STAP). (CL)

  19. Utilising reinforcement learning to develop strategies for driving auditory neural implants

    Science.gov (United States)

    Lee, Geoffrey W.; Zambetta, Fabio; Li, Xiaodong; Paolini, Antonio G.

    2016-08-01

    Objective. In this paper we propose a novel application of reinforcement learning to the area of auditory neural stimulation. We aim to develop a simulation environment which is based on real neurological responses to auditory and electrical stimulation in the cochlear nucleus (CN) and inferior colliculus (IC) of an animal model. Using this simulator we implement closed loop reinforcement learning algorithms to determine which methods are most effective at learning effective acoustic neural stimulation strategies. Approach. By recording a comprehensive set of acoustic frequency presentations and neural responses from a set of animals we created a large database of neural responses to acoustic stimulation. Extensive electrical stimulation in the CN and the recording of neural responses in the IC provides a mapping of how the auditory system responds to electrical stimuli. The combined dataset is used as the foundation for the simulator, which is used to implement and test learning algorithms. Main results. Reinforcement learning, utilising a modified n-Armed Bandit solution, is implemented to demonstrate the model’s function. We show the ability to effectively learn stimulation patterns which mimic the cochlea’s ability to convert acoustic frequencies to neural activity. Learning effective replication through neural stimulation takes less than 20 min under continuous testing. Significance. These results show the utility of reinforcement learning in the field of neural stimulation. These results can be coupled with existing sound processing technologies to develop new auditory prosthetics that are adaptable to the recipient’s current auditory pathway. The same process can theoretically be abstracted to other sensory and motor systems to develop similar electrical replication of neural signals.
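
    The closed-loop learning step can be sketched with an epsilon-greedy n-armed bandit. The paper uses a modified n-Armed Bandit solution whose details are not given here, so everything below is an assumed illustration: each arm is a candidate stimulation pattern, and the reward is a hypothetical score for how closely its simulated IC response matches the response evoked by the target acoustic frequency.

```python
import random

random.seed(0)

# Hypothetical match scores (0..1) for each candidate stimulation pattern.
# In the real system these would come from the neural-response simulator,
# not a fixed table.
true_match = [0.2, 0.5, 0.9, 0.4, 0.3]
n_arms = len(true_match)

def noisy_reward(arm):
    # Each stimulation trial returns a noisy observation of the match score.
    return true_match[arm] + random.gauss(0, 0.05)

counts = [0] * n_arms
values = [0.0] * n_arms
epsilon = 0.1                       # exploration rate

for trial in range(2000):
    if random.random() < epsilon:
        arm = random.randrange(n_arms)                     # explore
    else:
        arm = max(range(n_arms), key=values.__getitem__)   # exploit
    reward = noisy_reward(arm)
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]    # incremental mean

best = max(range(n_arms), key=values.__getitem__)
print(best)   # 2: the pattern that best mimics the acoustic response
```

    The bandit framing fits because each stimulation trial yields one noisy scalar reward with no state to track; richer stimulation strategies would call for a full reinforcement learning formulation.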

  20. Neural correlates of auditory-cognitive processing in older adult cochlear implant recipients.

    Science.gov (United States)

    Henkin, Yael; Yaar-Soffer, Yifat; Steinberg, Meidan; Muchnik, Chava

    2014-01-01

    With the growing number of older adults receiving cochlear implants (CI), there is general agreement that substantial benefits can be gained. Nonetheless, variability in speech perception performance is high, and the relative contribution and interactions among peripheral, central-auditory, and cognitive factors are not fully understood. The goal of the present study was to compare auditory-cognitive processing in older-adult CI recipients with that of older normal-hearing (NH) listeners by means of behavioral and electrophysiologic manifestations of a high-load cognitive task. Auditory event-related potentials (AERPs) were recorded from 9 older postlingually deafened adults with CI (age at CI >60) and 10 age-matched listeners with NH, while performing an auditory Stroop task. Participants were required to classify the speaker's gender (male/female) that produced the words 'mother' or 'father' while ignoring the irrelevant congruent or incongruent word meaning. Older CI and NH listeners exhibited comparable reaction time, performance accuracy, and initial sensory-perceptual processing (i.e. N1 potential). Nonetheless, older CI recipients showed substantially prolonged and less efficient perceptual processing (i.e. P3 potential). Congruency effects manifested in longer reaction time (i.e. Stroop effect), execution time, and P3 latency to incongruent versus congruent stimuli in both groups in a similar fashion; however, markedly prolonged P3 and shortened execution time were evident in older CI recipients. Collectively, older adults (CI and NH) employed a combined perceptual and postperceptual conflict processing strategy; nonetheless, the relative allotment of perceptual resources was substantially enhanced to maintain adequate performance in CI recipients. In sum, the recording of AERPs together with the simultaneously obtained behavioral measures during a Stroop task exposed a differential time course of auditory-cognitive processing in older CI recipients that

  1. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    Full Text Available In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music, in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration, used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus, resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams.

  2. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment.

    Science.gov (United States)

    Frtusova, Jana B; Phillips, Natalie A

    2016-01-01

    This study examined the effect of auditory-visual (AV) speech stimuli on working memory in older adults with poorer-hearing (PH) in comparison to age- and education-matched older adults with better hearing (BH). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioral results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the PH group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the PH group showed a more robust AV benefit; however, the BH group showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the PH group to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed. PMID:27148106

  3. The Auditory-Visual Speech Benefit on Working Memory in Older Adults with Hearing Impairment

    Directory of Open Access Journals (Sweden)

    Jana B. Frtusova

    2016-04-01

    Full Text Available This study examined the effect of auditory-visual (AV) speech stimuli on working memory in hearing impaired participants (HIP) in comparison to age- and education-matched normal elderly controls (NEC). Participants completed a working memory n-back task (0- to 2-back) in which sequences of digits were presented in visual-only (i.e., speech-reading), auditory-only (A-only), and AV conditions. Auditory event-related potentials (ERP) were collected to assess the relationship between perceptual and working memory processing. The behavioural results showed that both groups were faster in the AV condition in comparison to the unisensory conditions. The ERP data showed perceptual facilitation in the AV condition, in the form of reduced amplitudes and latencies of the auditory N1 and/or P1 components, in the HIP group. Furthermore, a working memory ERP component, the P3, peaked earlier for both groups in the AV condition compared to the A-only condition. In general, the HIP group showed a more robust AV benefit; however, the NECs showed a dose-response relationship between perceptual facilitation and working memory improvement, especially for facilitation of processing speed. Two measures, reaction time and P3 amplitude, suggested that the presence of visual speech cues may have helped the HIP to counteract the demanding auditory processing, to the level that no group differences were evident during the AV modality despite lower performance during the A-only condition. Overall, this study provides support for the theory of an integrated perceptual-cognitive system. The practical significance of these findings is also discussed.

  4. Direct recordings from the auditory cortex in a cochlear implant user.

    Science.gov (United States)

    Nourski, Kirill V; Etler, Christine P; Brugge, John F; Oya, Hiroyuki; Kawasaki, Hiroto; Reale, Richard A; Abbas, Paul J; Brown, Carolyn J; Howard, Matthew A

    2013-06-01

    Electrical stimulation of the auditory nerve with a cochlear implant (CI) is the method of choice for treatment of severe-to-profound hearing loss. Understanding how the human auditory cortex responds to CI stimulation is important for advances in stimulation paradigms and rehabilitation strategies. In this study, auditory cortical responses to CI stimulation were recorded intracranially in a neurosurgical patient to examine directly the functional organization of the auditory cortex and compare the findings with those obtained in normal-hearing subjects. The subject was a bilateral CI user with a 20-year history of deafness and refractory epilepsy. As part of the epilepsy treatment, a subdural grid electrode was implanted over the left temporal lobe. Pure tones, click trains, sinusoidal amplitude-modulated noise, and speech were presented via the auxiliary input of the right CI speech processor. Additional experiments were conducted with bilateral CI stimulation. Auditory event-related changes in cortical activity, characterized by the averaged evoked potential and event-related band power, were localized to posterolateral superior temporal gyrus. Responses were stable across recording sessions and were abolished under general anesthesia. Response latency decreased and magnitude increased with increasing stimulus level. More apical intracochlear stimulation yielded the largest responses. Cortical evoked potentials were phase-locked to the temporal modulations of periodic stimuli and speech utterances. Bilateral electrical stimulation resulted in minimal artifact contamination. This study demonstrates the feasibility of intracranial electrophysiological recordings of responses to CI stimulation in a human subject, shows that cortical response properties may be similar to those obtained in normal-hearing individuals, and provides a basis for future comparisons with extracranial recordings. PMID:23519390

  5. Image Information Mining Utilizing Hierarchical Segmentation

    Science.gov (United States)

    Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai

    2002-01-01

The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high-quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms to reduce the visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance VisiMine by incorporating the hierarchical segmentations produced by HSEG.
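The hierarchy of merges that HSEG-style segmentation builds can be illustrated with a toy sketch. This is hypothetical illustrative code, not the HSEG implementation: real HSEG operates on multispectral imagery and can also merge spatially non-adjacent regions, whereas this sketch merges only adjacent 1-D regions by closest mean.

```python
def hierarchical_segment(values):
    """Merge adjacent 1-D regions by smallest mean difference,
    recording the segmentation at every hierarchy level."""
    regions = [[v] for v in values]            # each sample starts as its own region
    hierarchy = [[list(r) for r in regions]]   # level 0: finest segmentation
    while len(regions) > 1:
        # find the adjacent pair whose region means are closest
        means = [sum(r) / len(r) for r in regions]
        i = min(range(len(regions) - 1),
                key=lambda k: abs(means[k] - means[k + 1]))
        regions[i:i + 2] = [regions[i] + regions[i + 1]]   # merge that pair
        hierarchy.append([list(r) for r in regions])
    return hierarchy

levels = hierarchical_segment([1.0, 1.1, 5.0, 5.2, 9.0])
print(len(levels))   # 5 levels, from 5 regions down to 1
print(levels[2])     # [[1.0, 1.1], [5.0, 5.2], [9.0]]
```

Each level of the returned list is a complete segmentation, which is what lets an information-mining system like VisiMine pick the level of detail appropriate to a query.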

  6. Auditory Efferent System Modulates Mosquito Hearing.

    Science.gov (United States)

    Andrés, Marta; Seifert, Marvin; Spalthoff, Christian; Warren, Ben; Weiss, Lukas; Giraldo, Diego; Winkler, Margret; Pauls, Stephanie; Göpfert, Martin C

    2016-08-01

    The performance of vertebrate ears is controlled by auditory efferents that originate in the brain and innervate the ear, synapsing onto hair cell somata and auditory afferent fibers [1-3]. Efferent activity can provide protection from noise and facilitate the detection and discrimination of sound by modulating mechanical amplification by hair cells and transmitter release as well as auditory afferent action potential firing [1-3]. Insect auditory organs are thought to lack efferent control [4-7], but when we inspected mosquito ears, we obtained evidence for its existence. Antibodies against synaptic proteins recognized rows of bouton-like puncta running along the dendrites and axons of mosquito auditory sensory neurons. Electron microscopy identified synaptic and non-synaptic sites of vesicle release, and some of the innervating fibers co-labeled with somata in the CNS. Octopamine, GABA, and serotonin were identified as efferent neurotransmitters or neuromodulators that affect auditory frequency tuning, mechanical amplification, and sound-evoked potentials. Mosquito brains thus modulate mosquito ears, extending the use of auditory efferent systems from vertebrates to invertebrates and adding new levels of complexity to mosquito sound detection and communication. PMID:27476597

  7. Photonic water dynamically responsive to external stimuli.

    Science.gov (United States)

    Sano, Koki; Kim, Youn Soo; Ishida, Yasuhiro; Ebina, Yasuo; Sasaki, Takayoshi; Hikima, Takaaki; Aida, Takuzo

    2016-01-01

Fluids that contain ordered nanostructures with periodic distances in the visible-wavelength range anomalously exhibit structural colours that can be rapidly modulated by external stimuli. Indeed, some fish can dynamically change colour by modulating the periodic distance of crystalline guanine sheets cofacially oriented in their fluid cytoplasm. Here we report that a dilute aqueous colloidal dispersion of negatively charged titanate nanosheets exhibits structural colours. In this 'photonic water', the nanosheets spontaneously adopt a cofacial geometry with an ultralong periodic distance of up to 675 nm due to a strong electrostatic repulsion. Consequently, the photonic water can even reflect near-infrared light up to 1,750 nm. The structural colour becomes more vivid in a magnetic flux that induces monodomain structural ordering of the colloidal dispersion. The reflective colour of the photonic water can be modulated over the entire visible region in response to appropriate physical or chemical stimuli. PMID:27572806
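The link between the periodic distance and the reflected wavelength follows first-order Bragg reflection. A minimal sketch, assuming normal incidence and a refractive index near that of water (~1.33; both assumptions are ours, not stated in the record):

```python
def bragg_wavelength(period_nm, n=1.33, order=1):
    """Bragg reflection at normal incidence: lambda = 2 * n * d / m,
    where d is the periodic distance and m the diffraction order."""
    return 2 * n * period_nm / order

# The reported maximum periodic distance of 675 nm:
print(round(bragg_wavelength(675), 1))  # 1795.5 nm, close to the reported
                                        # near-infrared limit of 1,750 nm
```

The rough agreement between the computed ~1.8 um and the reported 1,750 nm reflection illustrates why an ultralong lattice spacing pushes the structural colour into the near infrared.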

  8. Preparation of stimuli for timbre perception studies.

    Science.gov (United States)

    Labuschagne, Ilse B; Hanekom, Johan J

    2013-09-01

Stimuli used in timbre perception studies must be controlled carefully in order to yield meaningful results. During psychoacoustic testing of individual timbre properties, (1) it must be ensured that timbre properties do not co-vary, as they are often not independent of one another, and (2) the potential influence of loudness, pitch, and perceived duration must be eliminated. A mathematical additive synthesis method is proposed that allows complete control over two spectral parameters, the spectral centroid (corresponding to brightness) and irregularity, and two temporal parameters, log rise-time (LRT) and a parameter characterizing the sustain/decay segment, while controlling for covariation in the spectral centroid and irregularity. Thirteen musical instrument sounds were synthesized. Perceptual data from six listeners indicate that variation in the four timbre properties mainly influences loudness and that perceived duration and pitch are not influenced significantly for the stimuli of longer duration (2 s) used here. Trends across instruments were found to be similar. PMID:23967955
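The record does not give the paper's exact parameterization, but the core idea of additive synthesis with controllable spectral and temporal parameters can be sketched as follows. All function names and values here are hypothetical; the sketch shows how harmonic amplitudes set the spectral centroid and how a rise-time envelope shapes the attack.

```python
import math

def additive_tone(f0, harmonic_amps, dur=1.0, rise=0.05, sr=8000):
    """Sum-of-sinusoids synthesis: harmonic_amps shapes the spectrum
    (and hence the spectral centroid); rise sets the attack time."""
    n = int(dur * sr)
    samples = []
    for i in range(n):
        t = i / sr
        env = min(1.0, t / rise)            # linear attack, then sustain
        s = sum(a * math.sin(2 * math.pi * (k + 1) * f0 * t)
                for k, a in enumerate(harmonic_amps))
        samples.append(env * s)
    return samples

def spectral_centroid(f0, harmonic_amps):
    """Amplitude-weighted mean harmonic frequency (the 'brightness' axis)."""
    num = sum(a * (k + 1) * f0 for k, a in enumerate(harmonic_amps))
    return num / sum(harmonic_amps)

# Shifting energy toward high harmonics raises the centroid (brighter timbre):
dull = [1.0, 0.5, 0.25, 0.125]
bright = [0.125, 0.25, 0.5, 1.0]
print(spectral_centroid(220, dull) < spectral_centroid(220, bright))  # True
```

Holding the total amplitude pattern fixed while varying one parameter at a time is what lets such a method decouple timbre properties that would otherwise co-vary.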

  9. Source Memory for Mental Imagery: Influences of the Stimuli's Ease of Imagery.

    Directory of Open Access Journals (Sweden)

    Antonia Krefeld-Schwalb

Full Text Available The present study investigated how ease of imagery influences source monitoring accuracy. Two experiments were conducted in order to examine how ease of imagery influences the probability of source confusions of perceived and imagined completions of natural symmetric shapes. The stimuli consisted of binary pictures of natural objects, namely symmetric pictures of birds, butterflies, insects, and leaves. The ease of imagery (indicating the similarity of the sources) and the discriminability (indicating the similarity of the items) of each stimulus were estimated in a pretest and included as predictors of the memory performance for these stimuli. It was found that confusion of the sources becomes more likely when the imagery process was relatively easy. However, if the different processes of source monitoring (item memory, source memory, and guessing biases) are disentangled, both experiments support the assumption that the effect of decreased source memory for easily imagined stimuli is due to decision processes and misinformation at retrieval rather than encoding processes and memory retention. The data were modeled with a Bayesian hierarchical implementation of the one-high-threshold source monitoring model.

  10. Spatial auditory processing in pinnipeds

    Science.gov (United States)

    Holt, Marla M.

Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improvement in signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and an acoustic playback study.
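Spatial release from masking is simply the drop in masked threshold when the signal and masker are moved apart. A minimal sketch with hypothetical threshold values (the record reports only the size of the release, up to 19 dB in the harbor seal, not the underlying thresholds):

```python
def spatial_release_from_masking(colocated_thresh_db, separated_thresh_db):
    """SRM: improvement in masked detection threshold (dB) when the
    signal and masker are spatially separated."""
    return colocated_thresh_db - separated_thresh_db

# Hypothetical masked thresholds (dB) at one frequency/azimuth:
print(spatial_release_from_masking(62.0, 43.0))  # 19.0 dB of release
```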

  11. Cognitive Interpretations of Ambiguous Visual Stimuli

    OpenAIRE

    Naber, Marnix

    2012-01-01

    Brains can sense and distinguish signals from background noise in physical environments, and recognize and classify them as distinct entities. Ambiguity is an inherent part of this process. It is a cognitive property that is generated by the noisy character of the signals, and by the design of the sensory systems that process them. Stimuli can be ambiguous if they are noisy, incomplete, or only briefly sensed. Such conditions may ...

  12. Cortical gating of oropharyngeal sensory stimuli.

    Science.gov (United States)

    Wheeler-Hegland, Karen; Pitts, Teresa; Davenport, Paul W

    2010-01-01

Somatosensory evoked potentials provide a measure of cortical neuronal activation in response to various types of sensory stimuli. In order to prevent flooding of the cortex with redundant information, various sensory stimuli are gated cortically such that the response to stimulus 2 (S2) is significantly reduced in amplitude compared to stimulus 1 (S1). Upper airway protective mechanisms, such as swallowing and cough, are dependent on sensory input for triggering and modifying their motor output. Thus, it was hypothesized that central neural gating would be absent for paired air-puff stimuli applied to the oropharynx. Twenty-three healthy adults (18-35 years) served as research participants. Pharyngeal sensory evoked potentials (PSEPs) were measured via a 32-electrode cap (10-20 system) connected to a SynAmps2 Neuroscan EEG system. Paired-pulse air puffs were delivered with an inter-stimulus interval of 500 ms to the oropharynx using a thin polyethylene tube connected to a flexible laryngoscope. Data were analyzed using descriptive statistics and a repeated measures analysis of variance. There were no significant differences found between the amplitudes of S1 and S2 for any of the four component PSEP peaks. Mean gating ratios were above 0.90 for each peak. These results support our hypothesis that sensory central neural gating would be absent for component PSEP peaks with paired-pulse stimuli delivered to the oropharynx. This may be related to the need for constant sensory monitoring necessary for adequate airway protection associated with swallowing and coughing. PMID:21423402
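The gating ratio used here is the S2 amplitude divided by the S1 amplitude: values well below 1 indicate cortical gating, while values near 1 (above 0.90 in this study) indicate its absence. A sketch with hypothetical peak amplitudes:

```python
def gating_ratio(s1_amp, s2_amp):
    """S2/S1 amplitude ratio for paired-pulse evoked potentials.
    ~1.0 means the second response is not suppressed (no gating)."""
    return s2_amp / s1_amp

# Hypothetical PSEP peak amplitudes (microvolts) for one component:
print(round(gating_ratio(4.0, 3.8), 2))  # 0.95 -> little or no gating
```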

  13. Remindings influence the interpretation of ambiguous stimuli

    OpenAIRE

    Tullis, Jonathan G.; Braverman, Michael; Ross, Brian H; Benjamin, Aaron S.

    2014-01-01

Remindings (stimulus-guided retrievals of prior events) may help us interpret ambiguous events by linking the current situation to relevant prior experiences. Evidence suggests that remindings play an important role in interpreting complex ambiguous stimuli (Ross & Bradshaw, 1994); here we evaluate whether remindings influence word interpretation and memory in a new paradigm. Learners studied words on distinct visual backgrounds and generated a sentence for each word. Homographs were either pre...

  14. Simulation of Stimuli-Responsive Polymer Networks

    Directory of Open Access Journals (Sweden)

    Thomas Gruhn

    2013-11-01

    Full Text Available The structure and material properties of polymer networks can depend sensitively on changes in the environment. There is a great deal of progress in the development of stimuli-responsive hydrogels for applications like sensors, self-repairing materials or actuators. Biocompatible, smart hydrogels can be used for applications, such as controlled drug delivery and release, or for artificial muscles. Numerical studies have been performed on different length scales and levels of details. Macroscopic theories that describe the network systems with the help of continuous fields are suited to study effects like the stimuli-induced deformation of hydrogels on large scales. In this article, we discuss various macroscopic approaches and describe, in more detail, our phase field model, which allows the calculation of the hydrogel dynamics with the help of a free energy that considers physical and chemical impacts. On a mesoscopic level, polymer systems can be modeled with the help of the self-consistent field theory, which includes the interactions, connectivity, and the entropy of the polymer chains, and does not depend on constitutive equations. We present our recent extension of the method that allows the study of the formation of nano domains in reversibly crosslinked block copolymer networks. Molecular simulations of polymer networks allow the investigation of the behavior of specific systems on a microscopic scale. As an example for microscopic modeling of stimuli sensitive polymer networks, we present our Monte Carlo simulations of a filament network system with crosslinkers.

  15. Anagrus breviphragma Soyka Short Distance Search Stimuli

    Science.gov (United States)

    Chiappini, Elisabetta; Berzolla, Alessia; Oppo, Annalisa

    2015-01-01

    Anagrus breviphragma Soyka (Hymenoptera: Mymaridae) successfully parasitises eggs of Cicadella viridis (L.) (Homoptera: Cicadellidae), embedded in vegetal tissues, suggesting the idea of possible chemical and physical cues, revealing the eggs presence. In this research, three treatments were considered in order to establish which types of cue are involved: eggs extracted from leaf, used as a control, eggs extracted from leaf and cleaned in water and ethanol, used to evaluate the presence of chemicals soluble in polar solvents, and eggs extracted from leaf and covered with Parafilm (M), used to avoid physical stimuli due to the bump on the leaf surface. The results show that eggs covered with Parafilm present a higher number of parasitised eggs and a lower probing starting time with respect to eggs washed with polar solvents or eggs extracted and untreated, both when the treatments were singly tested or when offered in sequence, independently of the treatment position. These results suggest that the exploited stimuli are not physical due to the bump but chemicals that can spread in the Parafilm, circulating the signal on the whole surface, and that the stimuli that elicit probing and oviposition are not subjected to learning. PMID:26543865

  16. Spatial Brightness Perception of Trichromatic Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Royer, Michael P.; Houser, Kevin W.

    2012-11-16

    An experiment was conducted to examine the effect of tuning optical radiation on brightness perception for younger (18-25 years of age) and older (50 years of age or older) observers. Participants made forced-choice evaluations of the brightness of a full factorial of stimulus pairs selected from two groups of four metameric stimuli. The large-field stimuli were created by systematically varying either the red or the blue primary of an RGB LED mixture. The results indicate that light stimuli of equal illuminance and chromaticity do not appear equally bright to either younger or older subjects. The rank-order of brightness is not predicted by any current model of human vision or theory of brightness perception including Scotopic to Photopic or Cirtopic to Photopic ratio theory, prime color theory, correlated color temperature, V(λ)-based photometry, color quality metrics, linear brightness models, or color appearance models. Age may affect brightness perception when short-wavelength primaries are used, especially those with a peak wavelength shorter than 450 nm. The results suggest further development of metrics to predict brightness perception is warranted, and that including age as a variable in predictive models may be valuable.

  17. Anagrus breviphragma Soyka Short Distance Search Stimuli

    Directory of Open Access Journals (Sweden)

    Elisabetta Chiappini

    2015-01-01

Full Text Available Anagrus breviphragma Soyka (Hymenoptera: Mymaridae) successfully parasitises eggs of Cicadella viridis (L.) (Homoptera: Cicadellidae), embedded in vegetal tissues, suggesting the idea of possible chemical and physical cues, revealing the eggs presence. In this research, three treatments were considered in order to establish which types of cue are involved: eggs extracted from leaf, used as a control, eggs extracted from leaf and cleaned in water and ethanol, used to evaluate the presence of chemicals soluble in polar solvents, and eggs extracted from leaf and covered with Parafilm (M), used to avoid physical stimuli due to the bump on the leaf surface. The results show that eggs covered with Parafilm present a higher number of parasitised eggs and a lower probing starting time with respect to eggs washed with polar solvents or eggs extracted and untreated, both when the treatments were singly tested or when offered in sequence, independently of the treatment position. These results suggest that the exploited stimuli are not physical due to the bump but chemicals that can spread in the Parafilm, circulating the signal on the whole surface, and that the stimuli that elicit probing and oviposition are not subjected to learning.

  18. Functional Neurochemistry of the Auditory System

    Directory of Open Access Journals (Sweden)

    Nourollah Agha Ebrahimi

    1993-03-01

Full Text Available Functional neurochemistry is one of the fields of study in the auditory system that has undergone outstanding development in recent years. Many of the findings in this field have led not only basic auditory researchers but also clinicians to new points of view in audiology. Here, we aim to discuss the latest investigations in the functional neurochemistry of the auditory system, and we have focused this review mainly on research that raises hope for future clinical studies.

  19. Auditory Neuropathy/Dyssynchrony in Biotinidase Deficiency

    Science.gov (United States)

    Yaghini, Omid

    2016-01-01

Biotinidase deficiency is an autosomal recessively inherited disorder showing evidence of hearing loss and optic atrophy in addition to seizures, hypotonia, and ataxia. In the present study, we present a 2-year-old boy with biotinidase deficiency whose clinical picture included auditory neuropathy/auditory dyssynchrony (AN/AD). In this case, transient-evoked otoacoustic emissions showed bilaterally normal responses representing normal function of outer hair cells. In contrast, acoustic reflex testing showed absent reflexes bilaterally, and visual reinforcement audiometry and auditory brainstem responses indicated severe to profound hearing loss in both ears. These results suggest AN/AD in patients with biotinidase deficiency. PMID:27144235

  20. Functional Neurochemistry of the Auditory System

    OpenAIRE

    Nourollah Agha Ebrahimi

    1993-01-01

Functional neurochemistry is one of the fields of study in the auditory system that has undergone outstanding development in recent years. Many of the findings in this field have led not only basic auditory researchers but also clinicians to new points of view in audiology. Here, we aim to discuss the latest investigations in the functional neurochemistry of the auditory system, and we have focused this review mainly on research that raises hope f...