WorldWideScience

Sample records for auditory hierarchical stimuli

  1. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may instead reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction.

  2. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to gain better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or by a shared native language between speakers and participants. Thus, vocal nonverbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual than from auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements of speech production more readily than the characteristics of the acoustic cues.

  3. Fragile X mice develop sensory hyperreactivity to auditory stimuli.

    Science.gov (United States)

    Chen, L; Toth, M

    2001-01-01

    Fragile X syndrome is the most prevalent cause of mental retardation. It is usually caused by the transcriptional inactivation of the FMR-1 gene. Although the cognitive defect is the most recognized symptom of fragile X syndrome, patients also show behavioral problems such as hyperarousal, hyperactivity, autism, aggression, anxiety and increased sensitivity to sensory stimuli. Here we investigated whether fragile X mice (fmr-1 gene knockout mice) exhibit abnormal sensitivity to sensory stimuli. First, hyperreactivity of fragile X mice to auditory stimuli was indicated in the prepulse inhibition paradigm: a moderately intense prepulse tone, which suppresses the startle response to a subsequent strong auditory stimulus, elicited a significantly stronger effect in fragile X than in control mice. Second, sensory hyperreactivity of fragile X mice was demonstrated by a high seizure susceptibility to auditory stimulation. Selective induction of c-Fos, an immediate-early gene product, indicated that seizures involve auditory brainstem and thalamic nuclei. Audiogenic seizures were not due to a general increase in brain excitability, because three different chemical convulsants (kainic acid, bicuculline and pentylenetetrazole) elicited similar effects in fragile X and wild-type mice. These data are consistent with the increased responsiveness of fragile X patients to auditory stimuli. The auditory hypersensitivity suggests abnormal processing in the auditory system of fragile X mice, which could provide a useful model to study the molecular and cellular changes underlying fragile X syndrome.
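
    The prepulse inhibition (PPI) measure used in this paradigm is conventionally expressed as the percentage reduction in startle amplitude when a prepulse precedes the startle stimulus; stronger inhibition yields a higher score. A minimal sketch of that conventional computation (the amplitude values are hypothetical; this is not the authors' analysis code):

      import numpy as np

      def ppi_percent(startle_alone, startle_with_prepulse):
          """Percent prepulse inhibition: higher values mean the prepulse
          suppressed the startle response more strongly."""
          return 100.0 * (1.0 - np.mean(startle_with_prepulse) / np.mean(startle_alone))

      # Hypothetical startle amplitudes (arbitrary units) for one animal:
      alone = np.array([850.0, 910.0, 880.0])
      with_prepulse = np.array([390.0, 420.0, 405.0])
      print(f"PPI = {ppi_percent(alone, with_prepulse):.1f}%")  # ~54% inhibition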

  4. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    Degner, J.

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  5. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    ... clinically and in research towards using realistic and complex stimuli, such as speech, to electrophysiologically assess the human hearing. However, to interpret the AEP generation to complex sounds, the potential patterns in response to simple stimuli need to be understood. Therefore, the model was used ... Sensorineural hearing impairment is commonly associated with a loss of outer hair-cell functionality, and a measurable consequence is the decreased amount of cochlear compression at frequencies corresponding to the damaged locations in the cochlea. In clinical diagnostics, a fast and objective measure of local cochlear compression would be of great benefit, as a more precise diagnosis of the deficits underlying a potential hearing impairment in both infants and adults could be obtained. It was demonstrated in this thesis, via experimental recordings and supported by model simulations, that the growth of the ASSR ...

  6. Sadness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R

    2014-02-01

    Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral.

  7. Effect of Size Change and Brightness Change of Visual Stimuli on Loudness Perception and Pitch Perception of Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Syouya Tanabe

    2011-10-01

    People obtain a great deal of information from visual and auditory sensation in daily life. Regarding the effect of visual stimuli on the perception of auditory stimuli, phonological perception and sound localization have been studied extensively. This study examined the effect of visual stimuli on the perceived loudness and pitch of auditory stimuli. We used images of figures whose size or brightness changed as visual stimuli, and pure tones whose loudness or pitch changed as auditory stimuli. These visual and auditory stimuli were combined independently to make four types of audio-visual multisensory stimuli for psychophysical experiments. In the experiments, participants judged changes in the loudness or pitch of the auditory stimuli while also judging the direction of the size change or the kind of figure presented, so they could not ignore the visual stimuli while judging the auditory stimuli. As a result, perception of loudness and pitch changes was significantly facilitated around the difference limen when the image was getting bigger or brighter, compared with the case in which the image did not change. This indicates that the perception of loudness and pitch is affected by changes in the size and brightness of visual stimuli.

  8. Startle auditory stimuli enhance the performance of fast dynamic contractions.

    Science.gov (United States)

    Fernandez-Del-Olmo, Miguel; Río-Rodríguez, Dan; Iglesias-Soler, Eliseo; Acero, Rafael M

    2014-01-01

    Fast reaction times and a high rate of force development (RFD) are crucial for sports performance. However, little is known regarding the relationship between these parameters. The aim of this study was to investigate the effects of auditory stimuli of different intensities on the performance of a concentric bench-press exercise. Concentric bench-presses were performed by thirteen trained subjects in response to three different conditions: a visual stimulus (VS); a visual stimulus accompanied by a non-startle auditory stimulus (AS); and a visual stimulus accompanied by a startle auditory stimulus (SS). Peak RFD, peak velocity, movement onset, movement duration and electromyography from the pectoralis and triceps muscles were recorded. The SS condition induced an increase in RFD and peak velocity and a reduction in movement onset and duration, in comparison with the VS and AS conditions. The onset of activation of the pectoralis and triceps muscles was shorter for the SS than for the VS and AS conditions. These findings point to specific enhancement effects of loud auditory stimulation on the rate of force development. This is of relevance, since startle stimuli could be used to explore neural adaptations to resistance training.
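
    The rate of force development reported here is typically computed as the peak slope of the force-time curve. A rough sketch of that computation on a synthetic force trace (the sampling rate, trace shape, and 5%-of-peak onset criterion are illustrative assumptions, not the study's processing pipeline):

      import numpy as np

      fs = 1000.0                       # sampling rate in Hz (assumed)
      t = np.arange(0.0, 1.0, 1.0 / fs)
      # Synthetic force trace: sigmoidal rise to ~600 N during the press
      force = 600.0 / (1.0 + np.exp(-(t - 0.3) * 25.0))

      rfd = np.gradient(force, 1.0 / fs)                 # instantaneous slope, N/s
      peak_rfd = rfd.max()                               # peak rate of force development
      onset = t[np.argmax(force > 0.05 * force.max())]   # onset at 5% of peak force
      print(f"peak RFD = {peak_rfd:.0f} N/s, onset = {onset * 1000:.0f} ms")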

  9. Influence of affective auditory stimuli on balance control during static stance.

    Science.gov (United States)

    Chen, Xingyu; Qu, Xingda

    2017-03-01

    The main purpose of this study was to examine the effects of affective auditory stimuli on balance control during static stance. Twelve female and 12 male participants were recruited. Each participant completed four upright standing trials: three auditory stimuli trials and one baseline trial (i.e., no auditory stimuli). The three auditory stimuli trials corresponded to the pleasant, neutral and unpleasant sound conditions. Center of pressure (COP) measures were used to quantify balance control performance. It was found that unpleasant auditory stimuli were associated with larger COP amplitude in the anterior-posterior direction compared to the other testing conditions. There were no significant interaction effects between auditory stimuli and gender. These findings suggest that the affective components of auditory stimuli are important for balance control: the effects of auditory stimuli on balance control depended on their affective quality. Practitioner Summary: Findings from this study can aid in better understanding the relationship between auditory stimuli and balance control. In particular, unpleasant auditory stimuli were found to result in poorer balance control and higher fall risk. Therefore, to prevent fall accidents, interventions should be developed to reduce exposure to unpleasant sound.

  10. P3a from auditory white noise stimuli.

    Science.gov (United States)

    Combs, Lindsey A; Polich, John

    2006-05-01

    P3a and P3b event-related brain potentials (ERPs) were elicited with an auditory 3-stimulus (target, distracter, standard) paradigm in which subjects responded only to the target. Distracter stimuli consisted of white noise, novel sounds, or a high frequency tone, with stimulus characteristics perceptually controlled. Task difficulty was varied as easy and hard by changing the pitch difference between the target and standard stimuli. Error rate was greater and response time longer for the hard task. P3a distracter amplitude was largest for the white noise and novel stimuli, with maximum amplitude over the central recording sites, and larger for the hard discrimination task. P3b target amplitude was unaffected by distracter type, maximum over the parietal recording sites, and smaller and later for the hard task. The findings indicate that white noise stimuli can produce reliable P3a components. White noise can be useful for clinical P3a applications, as it removes the variability of stimulus novelty.

  11. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
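
    Bayesian model selection of the kind used here ranks the candidate connectivity models by their approximate log model evidence (e.g., the free energy from dynamic causal modelling); under a flat prior, normalizing the exponentiated evidences gives a posterior probability per model. A toy sketch of that final comparison step (the log-evidence values below are fabricated for illustration only):

      import numpy as np

      # Fabricated approximate log-evidences for 16 candidate network models
      rng = np.random.default_rng(0)
      log_evidence = rng.normal(-1200.0, 5.0, size=16)
      log_evidence[3] += 12.0   # suppose a serial HG -> PT -> STS model wins

      # Posterior model probabilities under a flat prior (softmax of log-evidence)
      shifted = log_evidence - log_evidence.max()   # guard against underflow
      posterior = np.exp(shifted) / np.exp(shifted).sum()
      best = int(np.argmax(posterior))
      print(f"winning model: {best}, posterior probability: {posterior[best]:.3f}")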

  12. Construction of Hindi Speech Stimuli for Eliciting Auditory Brainstem Responses.

    Science.gov (United States)

    Ansari, Mohammad Shamim; Rangasayee, R

    2016-12-01

    Speech-evoked auditory brainstem responses (spABRs) provide considerable information of clinical relevance by describing the auditory processing of complex stimuli at the subcortical level. Substantial research data suggest faithful representation of the temporal and spectral characteristics of speech sounds. However, spABRs are known to be affected by the acoustic properties of speech, language experience and training, and the literature on brainstem speech processing remains inconclusive. This warrants the establishment of language-specific speech stimuli to describe brainstem processing in users of a specific oral language. The objective of the current study was to develop Hindi speech stimuli for recording auditory brainstem responses. A 40-ms Hindi stop-consonant stimulus containing five formants was constructed. Brainstem evoked responses to the speech sound |da| were obtained from 25 normal-hearing (NH) adults with a mean age of 20.9 years (SD = 2.7, range 18-25 years) and from ten subjects with mild sensorineural hearing loss (HI) with a mean age of 21.3 years (SD = 3.2, range 18-25 years). Statistically significant differences in the mean identification scores of the synthesized speech stimuli |da| and |ga| were obtained between NH and HI. The mean, median, standard deviation, minimum, maximum and 95% confidence interval for the discrete peaks and V-A complex values of the electrophysiological responses to the speech stimulus were measured and compared between the NH and HI populations. This paper delineates a comprehensive methodological approach for the development of Hindi speech stimuli and the recording of ABRs to speech. The acoustic characteristics of the stimulus |da| were faithfully represented at the brainstem level in normal-hearing adults, and there was a statistically significant difference between NH and HI individuals. This suggests that the spABR offers an opportunity to segregate normal speech encoding from abnormal speech processing at the subcortical level, which implies that

  13. Auditory Preferences of Young Children with and without Hearing Loss for Meaningful Auditory-Visual Compound Stimuli

    Science.gov (United States)

    Zupan, Barbra; Sussman, Joan E.

    2009-01-01

    Experiment 1 examined modality preferences in children and adults with normal hearing to combined auditory-visual stimuli. Experiment 2 compared modality preferences in children using cochlear implants participating in an auditory emphasized therapy approach to the children with normal hearing from Experiment 1. A second objective in both…

  14. Modification of sudden onset auditory ERP by involuntary attention to visual stimuli.

    Science.gov (United States)

    Oray, Serkan; Lu, Zhong-Lin; Dawson, Michael E

    2002-03-01

    To investigate the cross-modal nature of the exogenous attention system, we studied how involuntary attention in the visual modality affects ERPs elicited by sudden-onset events in the auditory modality. Relatively loud auditory white noise bursts were presented to subjects at random and long inter-trial intervals. The noise bursts were either presented alone or paired with a visual stimulus with a visual-to-auditory onset asynchrony of 120 ms. In a third condition, the visual stimuli were shown alone. All three conditions, auditory alone, visual alone, and paired visual/auditory, were randomly intermixed and presented with equal probabilities. Subjects were instructed to fixate on a point in front of them without task instructions concerning either the auditory or visual stimuli. ERPs were recorded from 28 scalp sites throughout every experimental session. Compared to ERPs in the auditory-alone condition, pairing the auditory noise bursts with the visual stimulus reduced the amplitude of the auditory N100 component at Cz by 40% and the auditory P200/P300 component at Cz by 25%. No significant topographical change was observed in the scalp distributions of the N100 and P200/P300. Our results suggest that involuntary attention to visual stimuli suppresses early sensory (N100) as well as late cognitive (P200/P300) processing of sudden auditory events. The activation of the exogenous attention system by sudden auditory onsets can be modified by involuntary visual attention in a cross-modal, passive prepulse inhibition paradigm.

  15. Usage of drip drops as stimuli in an auditory P300 BCI paradigm.

    Science.gov (United States)

    Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu

    2018-02-01

    Recently, many auditory BCIs have used beeps as auditory stimuli, although beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The sound of dripping water (drip drops) is a natural sound that makes humans feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve the system's user-friendliness, and the study explored whether drip drops could serve as stimuli in an auditory BCI. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and scores on likability and difficulty. DP obtained a significantly higher online accuracy and information transfer rate than BP (p < 0.05, Wilcoxon signed-rank test). DP also obtained higher scores on likability, with no significant difference in difficulty (p < 0.05, Wilcoxon signed-rank test). The results show that drip drops are reliable acoustic materials for use as stimuli in an auditory BCI system.
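
    The information transfer rate compared between the two paradigms is commonly computed with the Wolpaw formula from the number of classes N, the selection accuracy P, and the time per selection. A sketch under those standard assumptions (the class count, accuracies, and trial duration below are illustrative values, not the study's data):

      import math

      def wolpaw_itr(n_classes, accuracy, trial_seconds):
          """Wolpaw information transfer rate in bits per minute."""
          n, p = n_classes, accuracy
          if p <= 1.0 / n:
              return 0.0                     # at or below chance: no information
          bits = math.log2(n) + p * math.log2(p)
          if p < 1.0:
              bits += (1.0 - p) * math.log2((1.0 - p) / (n - 1))
          return bits * 60.0 / trial_seconds

      # Illustrative comparison of two accuracies on the same 6-class task:
      print(f"{wolpaw_itr(6, 0.85, 12.0):.2f} bits/min")  # e.g., drip-drop paradigm
      print(f"{wolpaw_itr(6, 0.75, 12.0):.2f} bits/min")  # e.g., beep paradigm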

  16. Modulation of Auditory Responses to Speech vs. Nonspeech Stimuli during Speech Movement Planning.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2016-01-01

    Previously, we showed that the N100 amplitude in long latency auditory evoked potentials (LLAEPs) elicited by pure tone probe stimuli is modulated when the stimuli are delivered during speech movement planning as compared with no-speaking control conditions. Given that we probed the auditory system only with pure tones, it remained unknown whether the nature and magnitude of this pre-speech auditory modulation depends on the type of auditory stimulus. Thus, here, we asked whether the effect of speech movement planning on auditory processing varies depending on the type of auditory stimulus. In an experiment with nine adult subjects, we recorded LLAEPs that were elicited by either pure tones or speech syllables when these stimuli were presented prior to speech onset in a delayed-response speaking condition vs. a silent reading control condition. Results showed no statistically significant difference in pre-speech modulation of the N100 amplitude (early stages of auditory processing) for the speech stimuli as compared with the nonspeech stimuli. However, the amplitude of the P200 component (later stages of auditory processing) showed a statistically significant pre-speech modulation that was specific to the speech stimuli only. Hence, the overall results from this study indicate that, immediately prior to speech onset, modulation of the auditory system has a general effect on early processing stages but a speech-specific effect on later processing stages. This finding is consistent with the hypothesis that pre-speech auditory modulation may play a role in priming the auditory system for its role in monitoring auditory feedback during speech production.

  17. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with the fact that there were more c-Fos-positive cells in the bilateral visual and auditory cortices of group B compared with group A (p < 0.05). These results suggest that the visual cortices of binocularly blind macaques can be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices.

  18. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Dau, Torsten; Harte, James

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...

  19. Categorization of Extremely Brief Auditory Stimuli: Domain-Specific or Domain-General Processes?

    Science.gov (United States)

    Bigand, Emmanuel; Delbé, Charles; Gérard, Yannick; Tillmann, Barbara

    2011-01-01

    The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings were reported. 1) All stimuli were categorized above chance level with 50-ms segments. 2) When a peak-level normalization was applied, music and voices started to be accurately categorized with 20-ms segments. When the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were better recognized than music and environmental sounds. 3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelope in the stimulus set used. These last two findings challenge the interpretation of the voice superiority effect reported in previously published studies and propose a more parsimonious interpretation in terms of an emerging property of auditory categorization processes. PMID:22046436
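
    The two loudness-equalization schemes contrasted above differ only in which statistic is matched across stimuli: peak-level normalization equates the maximum absolute sample, whereas RMS equalization equates average energy. A minimal sketch of both operations (generic NumPy code with an arbitrary test tone; not the authors' scripts):

      import numpy as np

      def peak_normalize(x, target_peak=0.9):
          """Scale a signal so its maximum absolute sample equals target_peak."""
          return x * (target_peak / np.max(np.abs(x)))

      def rms_normalize(x, target_rms=0.1):
          """Scale a signal so its root-mean-square energy equals target_rms."""
          return x * (target_rms / np.sqrt(np.mean(x ** 2)))

      fs = 44100                                   # sampling rate, Hz
      t = np.arange(int(0.050 * fs)) / fs          # a 50-ms segment
      segment = 0.3 * np.sin(2 * np.pi * 440 * t)  # arbitrary 440-Hz test tone
      print(np.max(np.abs(peak_normalize(segment))))        # ~0.9
      print(np.sqrt(np.mean(rms_normalize(segment) ** 2)))  # ~0.1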

  20. Increased visual task difficulty enhances attentional capture by both visual and auditory distractor stimuli.

    Science.gov (United States)

    Sugimoto, Fumie; Katayama, Jun'ichi

    2017-06-01

    Previous studies using a three-stimulus oddball task have shown that the amplitude of the P3a elicited by distractor stimuli increases when perceptual discrimination between standard and target stimuli becomes difficult; that is, attentional capture by the distractor stimuli is enhanced as task difficulty increases. So far, this increase of P3a has been reported when standard, target, and distractor stimuli were presented within one sensory modality (i.e., visual or auditory). In the present study, we investigated whether the increase of P3a can also be observed when the distractor stimuli are presented in a different modality from the standard and target stimuli. Twelve participants performed a three-stimulus oddball task in which they were required to discriminate between visual standard and target stimuli. As the distractor, either another visual stimulus or an auditory stimulus was presented in separate blocks. Visual distractor stimuli elicited a P3a whose amplitude increased when visual standard/target discrimination was difficult, replicating previous findings. Auditory distractor stimuli also elicited a P3a and, importantly, its amplitude likewise increased when visual standard/target discrimination was difficult. This result means that attentional capture by distractor stimuli can be enhanced even when the distractors are presented in a different modality from the standard and target stimuli. Possible mechanisms and implications are discussed in terms of the relative saliency of distractor stimuli, influences of temporal/spatial attention, and the load involved in the task.

  1. Auditory stimuli mimicking ambient sounds drive temporal "delta-brushes" in premature infants.

    Directory of Open Access Journals (Sweden)

    Mathilde Chipaux

    In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW, the topography of responses in quiet sleep differed between "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW, auditory evoked delta-brushes progressively disappeared and were replaced by a low-amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex.
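
    Stimulus-locked averaging, one of the analysis steps named above, extracts a fixed window around each stimulus marker and averages across trials so that activity not time-locked to the stimulus tends to cancel. A bare-bones sketch on synthetic single-channel data (real pipelines add filtering and artifact rejection; the sampling rate and window here are assumptions):

      import numpy as np

      fs = 256                                     # sampling rate, Hz (assumed)
      eeg = np.random.randn(60 * fs)               # 60 s of synthetic EEG
      stim = np.arange(2 * fs, 58 * fs, 3 * fs)    # stimulus onsets every 3 s

      pre, post = int(0.2 * fs), int(0.8 * fs)     # -200 ms to +800 ms window
      epochs = np.stack([eeg[s - pre:s + post] for s in stim])
      epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
      evoked = epochs.mean(axis=0)                 # the stimulus-locked average
      print(evoked.shape)                          # one trace spanning the window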

  2. Natural stimuli improve auditory BCIs with respect to ergonomics and performance

    Science.gov (United States)

    Höhne, Johannes; Krenzlin, Konrad; Dähne, Sven; Tangermann, Michael

    2012-08-01

    Moving from well-controlled, brisk artificial stimuli to natural and less-controlled stimuli seems counter-intuitive for event-related potential (ERP) studies. As natural stimuli typically contain a richer internal structure, they might introduce higher levels of variance and jitter in the ERP responses. Both characteristics are unfavorable for good single-trial classification of ERPs in the context of a multi-class brain-computer interface (BCI) system, where the class-discriminant information between target stimuli and non-target stimuli must be maximized. For application in an auditory BCI system, however, the transition from simple artificial tones to natural syllables can be useful despite the variance introduced. In the presented study, healthy users (N = 9) participated in an offline auditory nine-class BCI experiment with artificial and natural stimuli. It is shown that the use of syllables as natural stimuli not only improves the users' ergonomic ratings but also increases classification performance. Moreover, natural stimuli obtain a better balance in multi-class decisions, such that the number of systematic confusions between the nine classes is reduced. Hopefully, our findings may contribute to making auditory BCI paradigms more user-friendly and applicable for patients.

  3. Gait variability is altered in older adults when listening to auditory stimuli with differing temporal structures.

    Science.gov (United States)

    Kaipust, Jeffrey P; McGrath, Denise; Mukherjee, Mukul; Stergiou, Nicholas

    2013-08-01

    Gait variability in the context of a deterministic dynamical system may be quantified using nonlinear time series analyses that characterize the complexity of the system. Pathological gait exhibits altered gait variability: it can be either too periodic and predictable, or too random and disordered, as is the case with aging. While gait therapies often focus on restoration of linear measures such as gait speed or stride length, we propose that the goal of gait therapy should be to restore optimal gait variability, which exhibits chaotic fluctuations and reflects a balance between predictability and complexity. In this context, our purpose was to investigate how listening to different auditory stimuli affects gait variability. Twenty-seven young and 27 elderly subjects walked on a treadmill for 5 min while listening to white noise, a chaotic rhythm, a metronome, and with no auditory stimulus. Stride length, step width, and stride intervals were calculated for all conditions. Detrended Fluctuation Analysis was then performed on these time series. A quadratic trend analysis determined that an idealized inverted-U shape described the relationship between gait variability and the structure of the auditory stimuli for the elderly group, but not for the young group. This proof-of-concept study shows that the gait of older adults may be manipulated using auditory stimuli. Future work will investigate which structures of auditory stimuli lead to improvements in functional status in older adults.
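
    Detrended Fluctuation Analysis, the method applied to the stride-interval series here, summarizes temporal structure with a scaling exponent alpha: roughly 0.5 for uncorrelated noise, near 1 for the persistent, fractal-like fluctuations of healthy gait. A compact sketch of the algorithm (simplified window handling; not the study's implementation):

      import numpy as np

      def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
          """Estimate the DFA scaling exponent alpha from F(n) ~ n**alpha."""
          y = np.cumsum(x - np.mean(x))             # integrated profile
          fluct = []
          for n in scales:
              f2 = []
              for w in range(len(y) // n):          # non-overlapping windows
                  seg = y[w * n:(w + 1) * n]
                  t = np.arange(n)
                  trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear fit
                  f2.append(np.mean((seg - trend) ** 2))
              fluct.append(np.sqrt(np.mean(f2)))
          alpha, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
          return alpha

      # White noise should give alpha near 0.5:
      print(dfa_alpha(np.random.default_rng(1).standard_normal(1000)))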

  4. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    Directory of Open Access Journals (Sweden)

    Mikkel Wallentin

    2016-01-01

    Klinefelter syndrome (47,XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and a low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response of KS participants relative to a group of controls during basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network, with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect, as it is not seen in the visual system.

  5. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo Romei

    2011-09-01

    There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of the flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
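
    The sensitivity index d' reported above separates perceptual sensitivity from response bias; it is computed from hit and false-alarm rates via the inverse of the normal cumulative distribution. A minimal sketch with a standard small-sample correction (the trial counts are invented for illustration; this is not the study's data):

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = Z(hit rate) - Z(false-alarm rate), with a log-linear
          correction keeping both rates strictly between 0 and 1."""
          h = (hits + 0.5) / (hits + misses + 1.0)
          fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(h) - norm.ppf(fa)

      # Invented counts for a congruent vs. an incongruent condition:
      print(d_prime(42, 8, 12, 38))   # higher sensitivity
      print(d_prime(30, 20, 22, 28))  # lower sensitivity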

  6. Suppressed visual looming stimuli are not integrated with auditory looming signals: Evidence from continuous flash suppression.

    Science.gov (United States)

    Moors, Pieter; Huygelier, Hanne; Wagemans, Johan; de-Wit, Lee; van Ee, Raymond

    2015-01-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  7. Long-latency auditory evoked potentials with verbal and nonverbal stimuli

    Directory of Open Access Journals (Sweden)

    Sheila Jacques Oppitz

    2015-12-01

    INTRODUCTION: Long-latency auditory evoked potentials represent the cortical activity related to attention, memory, and auditory discrimination skills. Acoustic signal processing occurs differently for verbal and nonverbal stimuli, influencing the latency and amplitude patterns. OBJECTIVE: To describe the latencies of the cortical potentials P1, N1, P2, N2, and P3, as well as the P3 amplitude, with different speech stimuli and tone bursts, and to classify them as present or absent. METHODS: A total of 30 subjects with normal hearing, aged 18-32 years and matched by gender, were assessed. Nonverbal stimuli (tone bursts: 1000 Hz, frequent; 4000 Hz, rare) and verbal stimuli (/ba/, frequent; /ga/, /da/, and /di/, rare) were used. RESULTS: For the N2 component, the lowest latency found was 217.45 ms, for the tone burst, and the highest was 256.5 ms, for the BA/DI stimulus. For the P3 component, the shortest latency, 298.7 ms, occurred with the tone-burst stimuli and the highest, 340 ms, with the BA/GA stimuli. For the P3 amplitude, there was no statistically significant difference among the different stimuli. For the latencies of components P1, N1, P2, N2, and P3, there were no other statistical differences, regardless of the stimuli used. CONCLUSION: There was a difference in the latency of potentials N2 and P3 among the stimuli employed, but no difference was observed for the P3 amplitude.

  8. Sensory Symptoms and Processing of Nonverbal Auditory and Visual Stimuli in Children with Autism Spectrum Disorder

    Science.gov (United States)

    Stewart, Claire R.; Sanchez, Sandra S.; Grenesko, Emily L.; Brown, Christine M.; Chen, Colleen P.; Keehn, Brandon; Velasquez, Francisco; Lincoln, Alan J.; Müller, Ralph-Axel

    2016-01-01

    Atypical sensory responses are common in autism spectrum disorder (ASD). While evidence suggests impaired auditory-visual integration for verbal information, findings for nonverbal stimuli are inconsistent. We tested for sensory symptoms in children with ASD (using the Adolescent/Adult Sensory Profile) and examined unisensory and bisensory…

  9. Data Collection and Analysis Techniques for Evaluating the Perceptual Qualities of Auditory Stimuli

    Energy Technology Data Exchange (ETDEWEB)

    Bonebright, T.L.; Caudell, T.P.; Goldsmith, T.E.; Miner, N.E.

    1998-11-17

    This paper describes a general methodological framework for evaluating the perceptual properties of auditory stimuli. The framework provides analysis techniques that can ensure the effective use of sound for a variety of applications including virtual reality and data sonification systems. Specifically, we discuss data collection techniques for the perceptual qualities of single auditory stimuli including identification tasks, context-based ratings, and attribute ratings. In addition, we present methods for comparing auditory stimuli, such as discrimination tasks, similarity ratings, and sorting tasks. Finally, we discuss statistical techniques that focus on the perceptual relations among stimuli, such as Multidimensional Scaling (MDS) and Pathfinder Analysis. These methods are presented as a starting point for an organized and systematic approach for non-experts in perceptual experimental methods, rather than as a complete manual for performing the statistical techniques and data collection methods. It is our hope that this paper will help foster further interdisciplinary collaboration among perceptual researchers, designers, engineers, and others in the development of effective auditory displays.
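
    Of the statistical techniques listed above, Multidimensional Scaling takes a matrix of pairwise dissimilarity judgments and embeds the stimuli in a low-dimensional space whose distances approximate those judgments. A brief scikit-learn sketch (the five-stimulus dissimilarity matrix is invented; this is a generic illustration, not the paper's procedure):

      import numpy as np
      from sklearn.manifold import MDS

      # Invented mean dissimilarity ratings (0 = identical, 1 = maximally
      # different) among five auditory stimuli, averaged over listeners:
      d = np.array([
          [0.0, 0.2, 0.7, 0.8, 0.9],
          [0.2, 0.0, 0.6, 0.7, 0.8],
          [0.7, 0.6, 0.0, 0.3, 0.4],
          [0.8, 0.7, 0.3, 0.0, 0.2],
          [0.9, 0.8, 0.4, 0.2, 0.0],
      ])

      mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
      coords = mds.fit_transform(d)   # 2-D perceptual map of the five stimuli
      print(coords.round(2))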

  10. Association of Concurrent fNIRS and EEG Signatures in Response to Auditory and Visual Stimuli.

    Science.gov (United States)

    Chen, Ling-Chia; Sandmann, Pascale; Thorne, Jeremy D; Herrmann, Christoph S; Debener, Stefan

    2015-09-01

    Functional near-infrared spectroscopy (fNIRS) has been proven reliable for investigation of low-level visual processing in both infants and adults. Similar investigation of fundamental auditory processes with fNIRS, however, remains only partially complete. Here we employed a systematic three-level validation approach to investigate whether fNIRS could capture fundamental aspects of bottom-up acoustic processing. We performed a simultaneous fNIRS-EEG experiment with visual and auditory stimulation in 24 participants, which allowed the relationship between changes in neural activity and hemoglobin concentrations to be studied. In the first level, the fNIRS results showed a clear distinction between visual and auditory sensory modalities. Specifically, the results demonstrated area specificity, that is, maximal fNIRS responses in visual and auditory areas for the visual and auditory stimuli respectively, and stimulus selectivity, whereby the visual and auditory areas responded mainly toward their respective stimuli. In the second level, a stimulus-dependent modulation of the fNIRS signal was observed in the visual area, as well as a loudness modulation in the auditory area. Finally in the last level, we observed significant correlations between simultaneously-recorded visual evoked potentials and deoxygenated hemoglobin (DeoxyHb) concentration, and between late auditory evoked potentials and oxygenated hemoglobin (OxyHb) concentration. In sum, these results suggest good sensitivity of fNIRS to low-level sensory processing in both the visual and the auditory domain, and provide further evidence of the neurovascular coupling between hemoglobin concentration changes and non-invasive brain electrical activity.

  11. Is silence golden? Effects of auditory stimuli and their absence on adult hippocampal neurogenesis.

    Science.gov (United States)

    Kirste, Imke; Nicola, Zeina; Kronenberg, Golo; Walker, Tara L; Liu, Robert C; Kempermann, Gerd

    2015-03-01

    We have previously hypothesized that the reason why physical activity increases precursor cell proliferation in adult neurogenesis is that movement serves as a non-specific signal to evoke the alertness required to meet cognitive demands, thereby generating a pool of immature neurons that are potentially recruitable by subsequent cognitive stimuli. Along these lines, we here tested whether auditory stimuli might exert a similar non-specific effect on adult neurogenesis in mice. We used the standard noise level in the animal facility as baseline and compared this condition to white noise, pup calls, and silence. In addition, as a patterned auditory stimulus without ethological relevance to mice, we used piano music by Mozart (KV 448). All stimuli were transposed to the frequency range of C57BL/6 mice, and hearing was objectified with acoustic evoked potentials. We found that except for white noise, all stimuli, including silence, increased precursor cell proliferation (assessed 24 h after labeling with bromodeoxyuridine, BrdU). This could be explained by significant increases in BrdU-labeled Sox2-positive cells (type-1/2a). But after 7 days, only silence remained associated with increased numbers of BrdU-labeled cells. Compared to controls at this stage, exposure to silence had generated significantly increased numbers of BrdU/NeuN-labeled neurons. Our results indicate that the unnatural absence of auditory input, as well as spectrotemporally rich albeit ethologically irrelevant stimuli, activates precursor cells (in the case of silence also leading to greater numbers of newborn immature neurons), whereas ambient and unstructured background auditory stimuli do not.

  12. The effect of semantic congruence for visual-auditory bimodal stimuli.

    Science.gov (United States)

    Xingwei An; Yong Cao; Jinwen Wei; Shuang Liu; Xuejun Jiao; Dong Ming

    2017-07-01

    Current neuropsychological research commonly holds that the brain reacts faster and more accurately to visual-auditory bimodal stimuli than to single-modality stimuli, yet visual-auditory bimodal stimuli (VABS) do not show a corresponding superiority in BCI systems. This paper investigates whether semantically congruent stimuli achieve better performance than semantically incongruent stimuli in a Brain Computer Interface (BCI) system. Two VABS-based paradigms (semantically congruent and incongruent) were conducted in this study, and 10 healthy subjects participated in the experiment in order to compare them. The results indicated higher event-related potential (ERP) amplitudes in the semantically incongruent paradigm for both target and non-target stimuli. Nevertheless, we did not observe a significant difference in classification accuracy between the congruent and incongruent conditions. Most participants preferred the semantically congruent condition because it demanded less workload. These findings demonstrate that semantic congruency has a positive effect on behavioral measures (less workload) and an insignificant effect on system efficiency.

  13. Nonword repetition in adults who stutter: The effects of stimuli stress and auditory-orthographic cues.

    Directory of Open Access Journals (Sweden)

    Geoffrey A Coalson

    Adults who stutter (AWS) are less accurate in their immediate repetition of novel phonological sequences compared to adults who do not stutter (AWNS). The present study examined whether manipulation of the following two aspects of traditional nonword repetition tasks unmasks distinct weaknesses in phonological working memory in AWS: (1) presentation of stimuli with less-frequent stress patterns, and (2) removal of auditory-orthographic cues immediately prior to response. Fifty-two participants (26 AWS, 26 AWNS) produced 12 bisyllabic nonwords in the presence of corresponding auditory-orthographic cues (i.e., immediate repetition task) and in the absence of auditory-orthographic cues (i.e., short-term recall task). Half of each cohort (13 AWS, 13 AWNS) were exposed to the stimuli with high-frequency trochaic stress, and half (13 AWS, 13 AWNS) were exposed to identical stimuli with lower-frequency iambic stress. No differences in immediate repetition accuracy for trochaic or iambic nonwords were observed for either group. However, AWS were less accurate when recalling iambic nonwords than trochaic nonwords in the absence of auditory-orthographic cues. Manipulation of two factors which may minimize phonological demand during standard nonword repetition tasks increased the number of errors in AWS compared to AWNS. These findings suggest greater vulnerability of phonological working memory in AWS, even when producing nonwords as short as two syllables.

  14. Hierarchical organization of speech perception in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Colin Humphries

    2014-12-01

    Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.

  15. Auditory Evoked Potentials with Different Speech Stimuli: a Comparison and Standardization of Values

    Directory of Open Access Journals (Sweden)

    Didoné, Dayane Domeneghini

    2016-02-01

    Introduction: Long-latency auditory evoked potentials (LLAEPs) elicited with speech sounds have been the subject of research, as these stimuli are well suited to assessing individuals' detection and discrimination abilities. Objective: The objective of this study is to compare and describe the latency and amplitude values of cortical potentials for speech stimuli in adults with normal hearing. Methods: The sample population included 30 normal-hearing individuals aged between 18 and 32 years, without otological disease or auditory processing disorders. All participants underwent LLAEP testing using pairs of speech stimuli (/ba/ x /ga/, /ba/ x /da/, and /ba/ x /di/). The LLAEPs were recorded with binaural stimulation at an intensity of 75 dB SPL. In total, 300 stimuli (~60 rare and 240 frequent) were used to obtain the LLAEPs, and individuals were instructed to count the rare stimuli. The latencies of the potentials P1, N1, P2, N2, and P300 were analyzed, as well as the P300 amplitude. Results: The mean age of the group was approximately 23 years. The averages of the cortical potentials varied according to the different speech stimuli: the N2 latency was greatest for /ba/ x /di/ and the P300 latency was greatest for /ba/ x /ga/. The overall average amplitude ranged from 5.35 to 7.35 µV across the different speech stimuli. Conclusion: It was possible to obtain latency and amplitude values for the different speech stimuli. Furthermore, the N2 component showed the highest latency with the /ba/ x /di/ stimulus, and the P300 with /ba/ x /ga/.

  16. Combined auditory and visual stimuli facilitate head saccades in the barn owl (Tyto alba).

    Science.gov (United States)

    Whitchurch, Elizabeth A; Takahashi, Terry T

    2006-08-01

    The barn owl naturally responds to an auditory or visual stimulus in its environment with a quick head turn toward the source. We measured these head saccades evoked by auditory, visual, and simultaneous, co-localized audiovisual stimuli to quantify multisensory interactions in the barn owl. Stimulus levels ranged from near to well above saccadic threshold. In accordance with previous human psychophysical findings, the owl's saccade reaction times (SRTs) and errors to unisensory stimuli were inversely related to stimulus strength. Auditory saccades characteristically had shorter reaction times but were less accurate than visual saccades. Audiovisual trials, over a large range of tested stimulus combinations, had auditory-like SRTs and visual-like errors, suggesting that barn owls are able to use both auditory and visual cues to produce saccades with the shortest possible SRT and greatest accuracy. These results support a model of sensory integration in which the faster modality initiates the saccade and the slower modality remains available to refine saccade trajectory.

  17. Increased Evoked Potentials to Arousing Auditory Stimuli during Sleep: Implication for the Understanding of Dream Recall.

    Science.gov (United States)

    Vallat, Raphael; Lajnef, Tarek; Eichenlaub, Jean-Baptiste; Berthomier, Christian; Jerbi, Karim; Morlet, Dominique; Ruby, Perrine M

    2017-01-01

    High dream recallers (HR) show larger brain reactivity to auditory stimuli during wakefulness and sleep as compared to low dream recallers (LR), and also more intra-sleep wakefulness (ISW), but no other modification of the sleep macrostructure. To further understand the possible causal link between brain responses, ISW and dream recall, we investigated the sleep microstructure of HR and LR, and tested whether the amplitude of auditory evoked potentials (AEPs) was predictive of arousing reactions during sleep. Participants (18 HR, 18 LR) were presented with sounds during a whole night of sleep in the lab and polysomnographic data were recorded. Sleep microstructure (arousals, rapid eye movements (REMs), muscle twitches (MTs), spindles, K-complexes) was assessed using visual, semi-automatic and automatic validated methods. AEPs to arousing (awakenings or arousals) and non-arousing stimuli were subsequently computed. No between-group difference in the microstructure of sleep was found. In N2 sleep, auditory arousing stimuli elicited a larger parieto-occipital positivity and an increased late frontal negativity as compared to non-arousing stimuli. As compared to LR, HR showed more arousing stimuli and more long awakenings, regardless of the sleep stage, but did not show more numerous or longer arousals. These results suggest that the amplitude of the brain response to a stimulus during sleep determines subsequent awakening, and that awakening duration (and not arousal) is the critical parameter for dream recall. Notably, our results led us to propose that the minimum duration of an awakening necessary for successful encoding of dreams into long-term memory is approximately 2 min.

  19. Influence of auditory and audiovisual stimuli on the right-left prevalence effect

    DEFF Research Database (Denmark)

    Vu, Kim-Phuong L; Minakata, Katsumi; Ngo, Mary Kim

    2014-01-01

    not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the influence of the pitch dimension...... was not in terms of influencing response selection on a trial-to-trial basis, but in terms of altering the salience of the task environment. Taken together, these findings indicate that in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension...

  20. Auditory Stimuli Coding by Postsynaptic Potential and Local Field Potential Features.

    Science.gov (United States)

    de Assis, Juliana M; Santos, Mikaelle O; de Assis, Francisco M

    2016-01-01

    The relation between physical stimuli and neurophysiological responses, such as action potentials (spikes) and Local Field Potentials (LFP), has recently been investigated experimentally in order to explain how neurons encode auditory information. However, none of these experiments presented analyses with postsynaptic potentials (PSPs). In the present study, we have estimated information values between auditory stimuli and amplitudes/latencies of PSPs and LFPs in anesthetized rats in vivo. To obtain these values, a new method of information estimation was used. This method produced more accurate estimates than those obtained by using the traditional binning method, a fact that was corroborated by simulated data. The traditional binning method could not attain such accuracy even when adjusted by quadratic extrapolation. We found that the information obtained from LFP amplitude variation was significantly greater than the information obtained from PSP amplitude variation. This confirms the fact that the LFP reflects the action of many PSPs. The results show that the auditory cortex codes more information about stimulus frequency in the slow oscillations of groups of neurons than in the slow oscillations of individual neurons.

  1. Auditory Stimuli Coding by Postsynaptic Potential and Local Field Potential Features.

    Directory of Open Access Journals (Sweden)

    Juliana M de Assis

    The relation between physical stimuli and neurophysiological responses, such as action potentials (spikes) and Local Field Potentials (LFP), has recently been investigated experimentally in order to explain how neurons encode auditory information. However, none of these experiments presented analyses with postsynaptic potentials (PSPs). In the present study, we have estimated information values between auditory stimuli and amplitudes/latencies of PSPs and LFPs in anesthetized rats in vivo. To obtain these values, a new method of information estimation was used. This method produced more accurate estimates than those obtained by using the traditional binning method, a fact that was corroborated by simulated data. The traditional binning method could not attain such accuracy even when adjusted by quadratic extrapolation. We found that the information obtained from LFP amplitude variation was significantly greater than the information obtained from PSP amplitude variation. This confirms the fact that the LFP reflects the action of many PSPs. The results show that the auditory cortex codes more information about stimulus frequency in the slow oscillations of groups of neurons than in the slow oscillations of individual neurons.
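
    The "traditional binning method" that both records benchmark against is the standard plug-in histogram estimator of mutual information. Below is a minimal sketch of that baseline on synthetic data; the function name and bin count are assumptions, and the paper's own, more accurate estimator is not reproduced here.

```python
import numpy as np

def binned_mi_bits(x, y, n_bins=8):
    """Plug-in (histogram) estimate of the mutual information I(X;Y)
    in bits. This simple estimator is biased upward for finite samples,
    which is why bias corrections such as quadratic extrapolation are
    applied to it in the literature."""
    counts, _, _ = np.histogram2d(x, y, bins=n_bins)
    p_xy = counts / counts.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of x
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of y
    nz = p_xy > 0                           # avoid log(0)
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])))

rng = np.random.default_rng(1)
stim = rng.normal(size=2000)                     # e.g. stimulus frequency
resp = stim + rng.normal(scale=0.5, size=2000)   # correlated "LFP amplitude"
print(round(binned_mi_bits(stim, resp), 2), 'bits')
```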

  2. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  3. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli.

    Science.gov (United States)

    Hill, N J; Schölkopf, B

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users-for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare 'oddball' stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
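
    As a hedged illustration of the classification idea, attended versus unattended stimuli contrasted via windowed ERP amplitudes, the sketch below trains a linear discriminant on synthetic epochs. It is not the authors' pipeline: the sampling rate, epoch length, feature windows, and use of scikit-learn's LDA are all assumptions.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
fs = 250                                   # sampling rate (Hz), assumed
n_trials, n_ch = 200, 8
t = np.arange(int(0.6 * fs)) / fs          # 600 ms epochs

# Synthetic epochs: attended stimuli carry a small P3-like deflection.
labels = rng.integers(0, 2, n_trials)      # 1 = attended stream
p3 = np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))
epochs = rng.normal(size=(n_trials, n_ch, t.size))
epochs[labels == 1] += 0.8 * p3

# Features: mean amplitude in consecutive 50 ms windows per channel.
win = int(0.05 * fs)
feats = epochs[:, :, : (t.size // win) * win]
feats = feats.reshape(n_trials, n_ch, -1, win).mean(-1).reshape(n_trials, -1)

clf = LinearDiscriminantAnalysis()
clf.fit(feats[:150], labels[:150])
print("held-out accuracy:", clf.score(feats[150:], labels[150:]))
```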

  4. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders

    2016-01-01

    relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS=49; Controls=49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often...... with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain’s motor network with no difference between groups...... as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased...

  5. Suppressed Visual Looming Stimuli are Not Integrated with Auditory Looming Signals: Evidence from Continuous Flash Suppression

    Directory of Open Access Journals (Sweden)

    Pieter Moors

    2015-02-01

    Previous studies using binocular rivalry have shown that signals in a modality other than the visual can bias dominance durations depending on their congruency with the rivaling stimuli. More recently, studies using continuous flash suppression (CFS) have reported that multisensory integration influences how long visual stimuli remain suppressed. In this study, using CFS, we examined whether the contrast thresholds for detecting visual looming stimuli are influenced by a congruent auditory stimulus. In Experiment 1, we show that a looming visual stimulus can result in lower detection thresholds compared to a static concentric grating, but that auditory tone pips congruent with the looming stimulus did not lower suppression thresholds any further. In Experiments 2, 3, and 4, we again observed no advantage for congruent multisensory stimuli. These results add to our understanding of the conditions under which multisensory integration is possible, and suggest that certain forms of multisensory integration are not evident when the visual stimulus is suppressed from awareness using CFS.

  6. Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention

    Directory of Open Access Journals (Sweden)

    Hsin-I eLiao

    2016-02-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and the novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on the to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR is independent of attention.

  7. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.

    Science.gov (United States)

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and the novelty P3 response in electro-encephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual oddballs and the auditory oddballs were always presented asynchronously to prevent residual attention on the to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent factor of the PDR is independent of attention.
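
    A minimal sketch of how a PDR like the one reported might be quantified, baseline-correcting single-trial pupil traces and averaging them, is shown below on synthetic data; the sampling rate, baseline window, and response shape are assumptions, not the authors' parameters.

```python
import numpy as np

def pupil_dilation_response(trials, fs=60.0, baseline_s=0.5):
    """Baseline-correct single-trial pupil traces (trials x samples,
    stimulus onset right after the baseline window) and average them.
    Returns the mean dilation time course relative to baseline."""
    n_base = int(baseline_s * fs)
    baseline = trials[:, :n_base].mean(axis=1, keepdims=True)
    return (trials - baseline).mean(axis=0)

rng = np.random.default_rng(2)
fs, dur = 60.0, 5.0                     # 60 Hz eye tracker, 5 s trials
t = np.arange(int(dur * fs)) / fs
# Synthetic trials: slow dilation starting at the 0.5 s stimulus onset.
pdr = np.where(t > 0.5, (t - 0.5) * np.exp(-(t - 0.5) / 1.0), 0.0)
trials = 4.0 + 0.3 * pdr + rng.normal(scale=0.02, size=(40, t.size))
print(pupil_dilation_response(trials, fs).max())
```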

  8. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and enhance BCI performance, we designed a visual-auditory speller. We investigated the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than the unimodal paradigms, without sacrificing spelling performance. In addition, shorter latencies, lower amplitudes, and a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and provide inspiration for future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into truly multisensory stimulus paradigms. The novel approaches for combining two sensory modalities designed here are valuable for the development of ERP-based BCI paradigms.

  9. Audiovisual Integration Delayed by Stimulus Onset Asynchrony Between Auditory and Visual Stimuli in Older Adults.

    Science.gov (United States)

    Ren, Yanna; Yang, Weiping; Nakahashi, Kohei; Takahashi, Satoshi; Wu, Jinglong

    2017-02-01

    Although neuronal studies have shown that audiovisual integration is regulated by temporal factors, there is still little knowledge about the impact of temporal factors on audiovisual integration in older adults. To clarify how stimulus onset asynchrony (SOA) between auditory and visual stimuli modulates age-related audiovisual integration, 20 younger adults (21-24 years) and 20 older adults (61-80 years) were instructed to perform an auditory or visual stimulus discrimination experiment. The results showed that in younger adults, audiovisual integration was altered from an enhancement (AV, A ± 50 V) to a depression (A ± 150 V). In older adults, the alterative pattern was similar to that for younger adults with the expansion of SOA; however, older adults showed significantly delayed onset for the time-window-of-integration and peak latency in all conditions, which further demonstrated that audiovisual integration was delayed more severely with the expansion of SOA, especially in the peak latency for V-preceded-A conditions in older adults. Our study suggests that audiovisual facilitative integration occurs only within a certain SOA range (e.g., -50 to 50 ms) in both younger and older adults. Moreover, our results confirm that the response of older adults was slowed and provide empirical evidence that integration ability is much more sensitive to the temporal alignment of audiovisual stimuli in older adults.

  10. Affective Stimuli for an Auditory P300 Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Akinari Onishi

    2017-09-01

    Gaze-independent brain-computer interfaces (BCIs) are a potential communication tool for persons with paralysis. This study applies affective auditory stimuli to investigate their effects using a P300 BCI. Fifteen able-bodied participants operated the P300 BCI, with positive and negative affective sounds (PA: a meowing cat sound; NA: a screaming cat sound). Permuted stimuli of the positive and negative affective sounds (permuted-PA, permuted-NA) were also used for comparison. Electroencephalography data were collected, and offline classification accuracies were compared. We used a visual analog scale (VAS) to measure positive and negative affective feelings in the participants. The mean classification accuracies were 84.7% for PA and 67.3% for permuted-PA, while the VAS scores were 58.5 for PA and −12.1 for permuted-PA. The positive affective stimulus showed significantly higher accuracy and VAS scores than the negative affective stimulus. In contrast, the mean classification accuracies were 77.3% for NA and 76.0% for permuted-NA, while the VAS scores were −50.0 for NA and −39.2 for permuted-NA, which are not significantly different. We determined that a positive affective stimulus with accompanying positive affective feelings significantly improved BCI accuracy. Additionally, an ALS patient achieved 90% online classification accuracy. These results suggest that affective stimuli may be useful for preparing a practical auditory BCI system for patients with disabilities.

  11. Spatial and Temporal High Processing of Visual and Auditory Stimuli in Cervical Dystonia.

    Science.gov (United States)

    Chillemi, Gaetana; Calamuneri, Alessandro; Morgante, Francesca; Terranova, Carmen; Rizzo, Vincenzo; Girlanda, Paolo; Ghilardi, Maria Felice; Quartarone, Angelo

    2017-01-01

    Investigation of spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on the perception of visual and auditory stimuli in the time and space domains. Previous psychophysiological studies have investigated the temporal and spatial characteristics of the neural processing of sensory stimuli (mainly somatosensory and visual), whereas such processing at a higher cognitive level has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction; however, other cortical and subcortical areas, including the cerebellum, may also be involved. We tested 21 subjects with CD and 22 age-matched healthy controls with 4 recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided into three groups according to head movement pattern (lateral: Laterocollis; rotation: Torticollis) as well as the presence of tremor (Tremor). We found significant alterations of spatial processing in the Laterocollis subgroup compared with controls, whereas impairment of temporal processing was observed in the Torticollis subgroup compared with controls. Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern types might lead to different dysfunctions at the cognitive level within the dystonic population.

  12. Learning of arbitrary association between visual and auditory novel stimuli in adults: the "bond effect" of haptic exploration.

    Directory of Open Access Journals (Sweden)

    Benjamin Fredembach

    BACKGROUND: It is well known that human beings are able to associate stimuli (novel or not) perceived in their environment. For example, this ability is used by children in reading acquisition, when arbitrary associations between visual and auditory stimuli must be learned. Studies tend to consider it an "implicit" process triggered by the learning of letter/sound correspondences. The study described in this paper examined whether the addition of visuo-haptic exploration would help adults to learn the arbitrary associations between visual and auditory novel stimuli more effectively. METHODOLOGY/PRINCIPAL FINDINGS: Adults were asked to learn 15 new arbitrary associations between visual stimuli and their corresponding sounds using two learning methods, which differed according to the perceptual modalities involved in the exploration of the visual stimuli. Adults used their visual modality in the "classic" learning method and both their visual and haptic modalities in the "multisensory" one. After both learning methods, participants showed a similar above-chance ability to recognize the visual and auditory stimuli and the audio-visual associations. However, the ability to recognize the visual-auditory associations was better after the multisensory method than after the classic one. CONCLUSION/SIGNIFICANCE: This study revealed that adults learned the arbitrary associations between visual and auditory novel stimuli more efficiently when the visual stimuli were explored with both vision and touch. The results are discussed from the perspective of how they relate to the functional differences of the manual haptic modality and the hypothesis of a "haptic bond" between visual and auditory stimuli.

  13. Auditory evoked potentials to speech and nonspeech stimuli are associated with verbal skills in preschoolers

    Directory of Open Access Journals (Sweden)

    Soila Kuuluvainen

    2016-06-01

    Children's obligatory auditory event-related potentials (ERPs) to speech and nonspeech sounds have been shown to associate with reading performance in children at risk of or with dyslexia and in their controls. However, very little is known about the cognitive processes these responses reflect. To investigate this question, we recorded ERPs to semisynthetic syllables and their acoustically matched nonspeech counterparts in 63 typically developed preschoolers, and assessed their verbal skills with an extensive set of neurocognitive tests. P1 and N2 amplitudes were larger for nonspeech than speech stimuli, whereas the opposite was true for N4. Furthermore, left-lateralized P1s were associated with better phonological and prereading skills, and larger P1s to nonspeech than speech stimuli with poorer verbal reasoning performance. Moreover, left-lateralized N2s, and equal-sized N4s to both speech and nonspeech stimuli, were associated with slower naming. In contrast, children with equal-sized N2 amplitudes at left and right scalp locations, and larger N4s for speech than nonspeech stimuli, performed fastest. We discuss the possibility that children's ERPs reflect not only the neural encoding of sounds, but also sound quality processing, memory-trace construction, and lexical access. The results also corroborate previous findings that speech and nonspeech sounds are processed by at least partially distinct neural substrates.

  14. Updating fearful memories with extinction training during reconsolidation: a human study using auditory aversive stimuli.

    Directory of Open Access Journals (Sweden)

    Javiera P Oyarzún

    Learning to fear danger in the environment is essential to survival, but dysregulation of the fear system is at the core of many anxiety disorders. As a consequence, great interest has emerged in developing strategies for suppressing fear memories in maladaptive cases. Recent research has focused on the process of reconsolidation, during which memories become labile after being retrieved. In a behavioral manipulation, Schiller et al. (2010) reported that extinction training administered during memory reconsolidation could erase fear responses. The implications of this study are crucial for the possible treatment of anxiety disorders without the administration of drugs. However, attempts by other groups to replicate this effect have so far been unsuccessful. We set out to reproduce the findings of Schiller et al. (2010) in a different fear conditioning paradigm based on auditory aversive stimuli instead of electric shock. Following a within-subject design, participants were conditioned to two different sounds, and skin conductance response (SCR) was recorded as a measure of fear. Our results demonstrated that only the conditioned stimulus that was reminded 10 minutes before extinction training did not reinstate a fear response after a reminder trial consisting of the presentation of the unconditioned stimuli. For the first time, we replicated the behavioral manipulation of Schiller et al. (2010) and extended it to an auditory fear conditioning paradigm.

  15. Deletion of RAGE causes hyperactivity and increased sensitivity to auditory stimuli in mice.

    Directory of Open Access Journals (Sweden)

    Seiichi Sakatani

    The receptor for advanced glycation end-products (RAGE) is a multi-ligand receptor that belongs to the immunoglobulin superfamily of cell surface receptors. In diabetes and Alzheimer's disease, pathological progression is accelerated by activation of RAGE. However, how RAGE influences gross behavioral activity patterns under basal conditions has not been addressed to date. In search of a functional role of RAGE in normal mice, a series of standard behavioral tests were performed on adult RAGE knockout (KO) mice. We observed a solid increase in home cage activity in RAGE KO mice. In addition, auditory startle response assessment revealed a higher sensitivity to auditory signals and increased prepulse inhibition in KO mice. There were no significant differences between KO mice and wild types in behavioral tests of spatial memory and anxiety, as tested by the Morris water maze, classical fear conditioning, and the elevated plus maze. Our results raise the possibility that systemic therapeutic treatments designed to occlude RAGE activation may have adverse effects on general activity levels or sensitivity to auditory stimuli.

  16. Bottlenose dolphin (Tursiops truncatus) auditory brainstem responses to frequency-modulated "chirp" stimuli.

    Science.gov (United States)

    Finneran, James J; Mulsow, Jason; Jones, Ryan; Houser, Dorian S; Burkard, Robert F

    2017-08-01

    Previous studies have demonstrated that increasing-frequency chirp stimuli (up-chirps) can enhance human auditory brainstem response (ABR) amplitudes by compensating for temporal dispersion occurring along the cochlear partition. In this study, ABRs were measured in two bottlenose dolphins (Tursiops truncatus) in response to spectrally white clicks, up-chirps, and decreasing-frequency chirps (down-chirps). Chirp durations varied from 125 to 2000 μs. For all stimuli, frequency bandwidth was constant (10-180 kHz) and peak-equivalent sound pressure levels (peSPLs) were 115, 125, and 135 dB re 1 μPa. Up-chirps with durations less than ∼1000 μs generally increased ABR peak amplitudes compared to clicks with the same peSPL or energy flux spectral density level, while down-chirps with durations from above ∼250 to 500 μs decreased ABR amplitudes relative to clicks. The findings generally mirror those from human studies and suggest that the use of chirp stimuli may be an effective way to enhance broadband ABR amplitudes in larger marine mammals.
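
    For illustration, the sketch below synthesizes simple up- and down-chirps over the 10-180 kHz band mentioned in the abstract. It uses a linear frequency sweep, whereas cochlear-dispersion-compensating chirps such as those in the study derive their frequency-versus-time trajectory from a cochlear delay model; the sampling rate and durations are assumptions.

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Constant-amplitude sweep from f0 to f1 Hz over `duration` s.
    The phase is the running integral of the instantaneous frequency;
    swapping f0 and f1 turns an up-chirp into a down-chirp."""
    t = np.arange(int(duration * fs)) / fs
    inst_freq = f0 + (f1 - f0) * t / duration
    phase = 2.0 * np.pi * np.cumsum(inst_freq) / fs
    return np.sin(phase)

fs = 500_000                                   # sampling rate for a 10-180 kHz band
up = linear_chirp(10e3, 180e3, 500e-6, fs)     # 500 us up-chirp
down = linear_chirp(180e3, 10e3, 500e-6, fs)   # matching down-chirp
print(up.size, 'samples per stimulus')
```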

  17. Neonate Auditory Brainstem Responses to CE-Chirp and CE-Chirp Octave Band Stimuli I: Versus Click and Tone Burst Stimuli.

    Science.gov (United States)

    Cobb, Kensi M; Stuart, Andrew

    The purpose of the study was to generate normative auditory brainstem response (ABR) wave component peak latency and amplitude values for neonates with air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli (i.e., 500, 1000, 2000, and 4000 Hz). A second objective was to compare neonate ABRs to CE-Chirp stimuli with ABR responses to traditional click and tone burst stimuli with the same stimulus parameters. Participants were 168 healthy neonates. ABRs were obtained to air- and bone-conducted CE-Chirp and click stimuli and air-conducted CE-Chirp octave band and tone burst stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps and clicks. The effect of stimulus level was also examined with bone-conducted CE-Chirps and clicks and air-conducted CE-Chirp octave band stimuli. In general, ABR wave V amplitudes to air- and bone-conducted CE-Chirp stimuli were significantly larger (p < 0.05) than those evoked to traditional click and tone burst stimuli. Systematic statistically significant (p < 0.05) wave V latency differences existed between the air- and bone-conducted CE-Chirp and CE-Chirp octave band stimuli relative to traditional click and tone burst stimuli. ABRs to air- and bone-conducted CE-Chirps and CE-Chirp octave band stimuli may be valuable in the assessment of newborn infants. However, the prognostic value of such stimuli needs to be validated.

  18. Multisensory Training can Promote or Impede Visual Perceptual Learning of Speech Stimuli: Visual-Tactile versus Visual-Auditory Training

    Directory of Open Access Journals (Sweden)

    Silvio P Eberhardt

    2014-10-01

    Full Text Available In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that Aaudiovisual training with speech stimuli can promote auditory-only perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded auditory-only (AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning in participants whose training scores were similar. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1 Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2 Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory.

  19. On the synthesis of multiple frequency tone burst stimuli for efficient high frequency auditory brainstem response.

    Science.gov (United States)

    Ellingson, Roger M; Dille, Marilyn L; Leek, Marjorie R; Fausti, Stephen A

    2008-01-01

    The development and digital waveform synthesis of a multiple-frequency tone-burst (MFTB) stimulus is presented. The stimulus is designed to improve the efficiency of monitoring high-frequency auditory brainstem response (ABR) hearing thresholds. The pure-tone-based, fractional-octave-bandwidth MFTB supports frequency-selective ABR audiometry with a bandwidth that falls between those of the conventional click and single-frequency tone-burst stimuli. The MFTB is being used to identify high-frequency hearing threshold changes due to ototoxic medication, which generally begin at the highest audible frequencies and progress downward; the stimulus could also be useful in general limited-bandwidth testing applications. Included is a Mathcad implementation and analysis of our MFTB synthesis technique, together with sample performance measurements of the MFTB stimulus configuration used in a clinical research ABR system.
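
    The paper's synthesis is implemented in Mathcad; the fragment below is only a generic Python illustration of the idea, summing gated tone bursts spaced within a fractional-octave band, with all parameter values (center frequency, number of tones, duration, window) chosen arbitrarily.

```python
import numpy as np

def mftb(center_khz, n_tones=4, bw_oct=0.5, dur=0.004, fs=44100):
    """Sum of cosine tone bursts spaced within a fractional-octave band
    around `center_khz`, gated by a Hann window (illustrative values)."""
    t = np.arange(int(dur * fs)) / fs
    freqs = center_khz * 1e3 * 2 ** np.linspace(-bw_oct / 2, bw_oct / 2, n_tones)
    burst = sum(np.cos(2 * np.pi * f * t) for f in freqs)
    return burst * np.hanning(t.size) / n_tones   # normalize and gate

stim = mftb(center_khz=10)
print(stim.shape, abs(stim).max())
```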

  20. Effectiveness of Earmuffs and Noise-cancelling Headphones for Coping with Hyper-reactivity to Auditory Stimuli in Children with Autism Spectrum Disorder: A Preliminary Study

    Directory of Open Access Journals (Sweden)

    Nobuhiko Ikuta

    2016-12-01

    Conclusion: This study demonstrated the effectiveness of standard earmuffs and NC headphones in helping children with ASD to cope with problem behaviours related to hyper-reactivity to auditory stimuli; children with ASD could therefore use earmuffs to help deal with unpleasant auditory sensory stimuli.

  1. Spike-train variability of auditory neurons in vivo: dynamic responses follow predictions from constant stimuli.

    Science.gov (United States)

    Schaette, Roland; Gollisch, Tim; Herz, Andreas V M

    2005-06-01

    Reliable accounts of the variability observed in neural spike trains are a prerequisite for the proper interpretation of neural dynamics and coding principles. Models that accurately describe neural variability over a wide range of stimulation and response patterns are therefore highly desirable, especially if they can explain this variability in terms of basic neural observables and parameters such as firing rate and refractory period. In this work, we analyze the response variability recorded in vivo from locust auditory receptor neurons under acoustic stimulation. In agreement with results from other systems, our data suggest that neural refractoriness has a strong influence on spike-train variability. We therefore explore a stochastic model of spike generation that includes refractoriness through a recovery function. Because our experimental data are consistent with a renewal process, the recovery function can be derived from a single interspike-interval histogram obtained under constant stimulation. The resulting description yields quantitatively accurate predictions of the response variability over the whole range of firing rates for constant-intensity as well as amplitude-modulated sound stimuli. Model parameters obtained from constant stimulation can be used to predict the variability in response to dynamic stimuli. These results demonstrate that key ingredients of the stochastic response dynamics of a sensory neuron are faithfully captured by a simple stochastic model framework.
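
    A minimal sketch of this model class, spike generation whose hazard is the free firing rate scaled by a recovery function of the time since the last spike, is given below. The exponential recovery function and all parameters are stand-ins; the paper derives its recovery function from the measured interspike-interval histogram.

```python
import numpy as np

def renewal_spike_train(rate, dt, tau_rec=0.002, seed=0):
    """Simulate spike times (s) for a time-varying rate (Hz per bin).
    The instantaneous hazard is rate * w(t_since_spike), with the
    recovery function w = 1 - exp(-t/tau_rec) as a simple stand-in."""
    rng = np.random.default_rng(seed)
    spikes, t_since = [], np.inf          # start fully recovered
    for i, r in enumerate(rate):
        w = 1.0 - np.exp(-t_since / tau_rec)
        if rng.random() < r * w * dt:
            spikes.append(i * dt)
            t_since = 0.0                 # reset recovery after a spike
        else:
            t_since += dt
    return np.array(spikes)

dt = 1e-4
# Amplitude-modulated drive, loosely mimicking a dynamic sound stimulus.
rate = 100 * (1 + 0.8 * np.sin(2 * np.pi * 20 * np.arange(0, 1, dt)))
print(renewal_spike_train(rate, dt).size, 'spikes in 1 s')
```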

  2. Hierarchical self-assembly of a fluorescence emission-enhanced organogelator and its multiple stimuli-responsive behaviors.

    Science.gov (United States)

    Ren, Yuan-Yuan; Xu, Zheng; Li, Guoqiang; Huang, Junhai; Fan, Xiaotian; Xu, Lin

    2017-01-03

    A discrete hexagonal metallacycle 1 decorated with tetraphenylethylene, amide groups and long hydrophobic alkyl chains was constructed via [3 + 3] coordination-driven self-assembly, from which the fluorescence emission-enhanced organogelator with multiple stimuli-responsiveness was successfully prepared via hierarchical self-assembly.

  3. Task-switching, inhibition and the processing of unattended auditory stimuli in music trained and non-trained adolescents and young adults

    OpenAIRE

    Mannermaa, Kristiina

    2017-01-01

    Previous research has linked music training to enhanced processing of unattended auditory stimuli as indexed by such auditory event-related potential (ERP) responses as mismatch negativity (MMN) and P3a. Music training has also been linked with enhanced cognitive abilities more generally, and executive functions have been proposed to mediate this link. The current study concentrates on the processing of unattended auditory stimuli and how this relates to two aspects of executive functions: ta...

  4. High-density EEG characterization of brain responses to auditory rhythmic stimuli during wakefulness and NREM sleep.

    Science.gov (United States)

    Lustenberger, Caroline; Patel, Yogi A; Alagapan, Sankaraleengam; Page, Jessica M; Price, Betsy; Boyle, Michael R; Fröhlich, Flavio

    2017-12-06

    Auditory rhythmic sensory stimulation modulates brain oscillations by increasing phase-locking to the temporal structure of the stimuli and by increasing the power of specific frequency bands, resulting in Auditory Steady State Responses (ASSR). The ASSR is altered in different diseases of the central nervous system such as schizophrenia. However, in order to use the ASSR as a biological marker for disease states, it needs to be understood how different vigilance states and underlying brain activity affect the ASSR. Here, we compared the effects of auditory rhythmic stimuli on EEG brain activity during wake and NREM sleep, investigated the influence of the presence of dominant sleep rhythms on the ASSR, and delineated the topographical distribution of these modulations. Participants (14 healthy males, 20-33 years) completed on the same day a 60 min nap session and two 30 min wakefulness sessions (before and after the nap). During these sessions, amplitude-modulated (AM) white noise auditory stimuli at different frequencies were applied. High-density EEG was continuously recorded and time-frequency analyses were performed to assess the ASSR during wakefulness and NREM periods. Our analysis revealed that, depending on the electrode location, the stimulation frequency applied and the window/frequencies analysed, the ASSR was significantly modulated by sleep pressure (before vs. after sleep), vigilance state (wake vs. NREM sleep), and the presence of slow wave activity and sleep spindles. Furthermore, AM stimuli increased spindle activity during NREM sleep but not during wakefulness. Thus, (1) electrode location, sleep history, vigilance state and ongoing brain activity need to be carefully considered when investigating the ASSR, and (2) auditory rhythmic stimuli during sleep might represent a powerful tool to boost sleep spindles. Copyright © 2017 Elsevier Inc. All rights reserved.
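
    One common way to quantify phase-locking of the ASSR is inter-trial phase coherence at the modulation frequency. The sketch below computes it on synthetic epochs; it is a generic illustration, not the authors' time-frequency pipeline, and the sampling rate, trial count and noise level are assumptions.

```python
import numpy as np

def itc_at_freq(epochs, fs, freq):
    """Inter-trial phase coherence at one frequency: project each epoch
    (trials x samples) onto a complex exponential at `freq` and measure
    phase consistency across trials (1 = perfect locking, ~0 = none)."""
    t = np.arange(epochs.shape[1]) / fs
    coeff = epochs @ np.exp(-2j * np.pi * freq * t)  # one complex value/trial
    return np.abs(np.mean(coeff / np.abs(coeff)))

rng = np.random.default_rng(3)
fs, f_am, n_trials = 500, 40.0, 60
t = np.arange(fs) / fs                               # 1 s epochs
epochs = np.sin(2 * np.pi * f_am * t) + rng.normal(scale=2.0, size=(n_trials, fs))
print('ITC at 40 Hz:', round(itc_at_freq(epochs, fs, f_am), 2))
print('ITC at 33 Hz:', round(itc_at_freq(epochs, fs, 33.0), 2))
```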

  5. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  6. Encoding of virtual acoustic space stimuli by neurons in ferret primary auditory cortex.

    Science.gov (United States)

    Mrsic-Flogel, Thomas D; King, Andrew J; Schnupp, Jan W H

    2005-06-01

    Recent studies from our laboratory have indicated that the spatial response fields (SRFs) of neurons in the ferret primary auditory cortex (A1) with best frequencies ≥4 kHz may arise from a largely linear processing of binaural level and spectral localization cues. Here we extend this analysis to investigate how well the linear model can predict the SRFs of neurons with different binaural response properties and the manner in which SRFs change with increases in sound level. We also consider whether temporal features of the response (e.g., response latency) vary with sound direction and whether such variations can be explained by linear processing. In keeping with previous studies, we show that A1 SRFs, which we measured with individualized virtual acoustic space stimuli, expand and shift in direction with increasing sound level. We found that these changes are, in most cases, in good agreement with predictions from a linear threshold model. However, changes in spatial tuning with increasing sound level were generally less well predicted for neurons whose binaural frequency-time receptive field (FTRF) exhibited strong excitatory inputs from both ears than for those in which the binaural FTRF revealed either a predominantly inhibitory effect or no clear contribution from the ipsilateral ear. Finally, we found (in agreement with other authors) that many A1 neurons exhibit systematic response latency shifts as a function of sound-source direction, although these temporal details could usually not be predicted from the neuron's binaural FTRF.

  7. Test-Retest of Long Latency Auditory Evoked Potentials (P300) with Pure Tone and Speech Stimuli.

    Science.gov (United States)

    Perez, Ana Paula; Ziliotto, Karin; Pereira, Liliane Desgualdo

    2017-04-01

    Introduction Long latency auditory evoked potentials, especially the P300, have been used for the clinical evaluation of mental processing. Many factors can interfere with auditory evoked potential (P300) results, producing large intra- and inter-subject variations. Objective The objective of the study was to assess the reliability of P3 components (latency and amplitude) over 4-6 weeks and to identify the auditory stimulus with the best test-retest agreement. Methods Ten normal-hearing women participated in the study. Only subjects without auditory processing problems were included. To determine the P3 components, we elicited the long latency auditory evoked potential (P300) with pure tone and speech stimuli, and retested after 4-6 weeks using the same parameters. We identified P300 latency and amplitude by waveform subtraction. Results We found lower coefficients of variation for latency than for amplitude, with less variability when the speech stimulus was used. There was no significant correlation in latency measures between pure tone and speech stimuli, or between sessions. There was a significant intrasubject correlation between measures of latency and amplitude. Conclusion These findings show that amplitude responses are more robust for the speech stimulus than for its pure tone counterpart. The P300 showed stable latency and amplitude measures on test-retest. Reliability was higher for amplitude than for latency, with better agreement when the pure tone stimulus was used. However, further research with speech stimuli is needed to clarify how these stimuli are processed by the nervous system.
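
    For readers unfamiliar with the statistics mentioned, the fragment below computes a coefficient of variation and a simple test-retest correlation on hypothetical latency values; the numbers are invented for illustration and are unrelated to the study's data.

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (%): dispersion relative to the mean."""
    values = np.asarray(values, float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical P300 latencies (ms) for 10 subjects, test and retest.
rng = np.random.default_rng(4)
test = rng.normal(320, 25, 10)
retest = test + rng.normal(0, 10, 10)      # correlated retest session
print('CV test (%):', round(cv_percent(test), 1))
print('r(test, retest):', round(np.corrcoef(test, retest)[0, 1], 2))
```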

  8. Processing Temporal Modulations in Binaural and Monaural Auditory Stimuli by Neurons in the Inferior Colliculus and Auditory Cortex

    OpenAIRE

    Fitzpatrick, Douglas C.; Roberts, Jason M.; Kuwada, Shigeyuki; Kim, Duck O.; Filipovic, Blagoje

    2009-01-01

    Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating inter...

  9. Analysis of the influence of memory content of auditory stimuli on the memory content of EEG signal

    OpenAIRE

    Namazi, Hamidreza; Khosrowabadi, Reza; Hussaini, Jamal; Habibi, Shaghayegh; Farid, Ali Akhavan; Vladimir V. Kulish

    2016-01-01

    One of the major challenges in brain research is to relate the structural features of an auditory stimulus to the structural features of the Electroencephalogram (EEG) signal. Memory content is an important feature of the EEG signal and, accordingly, of the brain. On the other hand, memory content can also be considered for the stimulus itself. Despite all the work done on analyzing the effect of stimuli on the human EEG and brain memory, no work has discussed the stimulus memory or the relationship that m...

  10. Neonate Auditory Brainstem Responses to CE-Chirp and CE-Chirp Octave Band Stimuli II: Versus Adult Auditory Brainstem Responses.

    Science.gov (United States)

    Cobb, Kensi M; Stuart, Andrew

    The purpose of the study was to examine the differences in auditory brainstem response (ABR) latency and amplitude indices to CE-Chirp stimuli in neonates versus young adults as a function of stimulus level, rate, polarity, frequency and gender. Participants were 168 healthy neonates and 20 normal-hearing young adults. ABRs were obtained to air- and bone-conducted CE-Chirps and air-conducted CE-Chirp octave band stimuli. The effects of stimulus level, rate, and polarity were examined with air-conducted CE-Chirps. The effect of stimulus level was also examined with bone-conducted CE-Chirps and CE-Chirp octave band stimuli. The effect of gender was examined across all stimulus manipulations. In general, ABR wave V amplitudes were significantly larger (p < 0.05). Significant differences in ABR latencies and amplitudes exist between newborns and young adults using CE-Chirp stimuli. These differences are consistent with differences to traditional click and tone burst stimuli and reflect maturational differences as a function of age. These findings continue to emphasize the importance of interpreting ABR results using age-based normative data.

  11. Intact Spectral but Abnormal Temporal Processing of Auditory Stimuli in Autism

    Science.gov (United States)

    Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning autism and 23 matched controls…

  12. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H.

    2015-01-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form, at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. PMID:26157026

  13. Hierarchical Organization of Auditory and Motor Representations in Speech Perception: Evidence from Searchlight Similarity Analysis.

    Science.gov (United States)

    Evans, Samuel; Davis, Matthew H

    2015-12-01

    How humans extract the identity of speech sounds from highly variable acoustic signals remains unclear. Here, we use searchlight representational similarity analysis (RSA) to localize and characterize neural representations of syllables at different levels of the hierarchically organized temporo-frontal pathways for speech perception. We asked participants to listen to spoken syllables that differed considerably in their surface acoustic form by changing speaker and degrading surface acoustics using noise-vocoding and sine wave synthesis while we recorded neural responses with functional magnetic resonance imaging. We found evidence for a graded hierarchy of abstraction across the brain. At the peak of the hierarchy, neural representations in somatomotor cortex encoded syllable identity but not surface acoustic form, at the base of the hierarchy, primary auditory cortex showed the reverse. In contrast, bilateral temporal cortex exhibited an intermediate response, encoding both syllable identity and the surface acoustic form of speech. Regions of somatomotor cortex associated with encoding syllable identity in perception were also engaged when producing the same syllables in a separate session. These findings are consistent with a hierarchical account of how variable acoustic signals are transformed into abstract representations of the identity of speech sounds. © The Author 2015. Published by Oxford University Press.
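
    The core searchlight computation, correlating a neural representational dissimilarity matrix (RDM) with a model RDM, can be sketched as follows on synthetic patterns; the condition structure (4 syllables x 3 speakers), pattern sizes, and distance/correlation choices are illustrative assumptions rather than the authors' exact analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_score(patterns, model_rdm):
    """Spearman correlation between a neural RDM (correlation distance
    between condition patterns) and a model RDM, both in condensed form,
    as computed within each searchlight sphere."""
    neural_rdm = pdist(patterns, metric='correlation')
    rho, _ = spearmanr(neural_rdm, model_rdm)
    return rho

rng = np.random.default_rng(5)
n_vox = 50
syllable_id = np.repeat(np.arange(4), 3)        # 4 syllables x 3 "speakers"
model = (pdist(syllable_id[:, None], metric='cityblock') > 0).astype(float)
patterns = rng.normal(size=(12, n_vox))         # voxel pattern per condition
patterns += 1.5 * rng.normal(size=(4, n_vox))[syllable_id]  # identity signal
print('syllable-identity RSA score:', round(rsa_score(patterns, model), 2))
```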

  14. Processing temporal modulations in binaural and monaural auditory stimuli by neurons in the inferior colliculus and auditory cortex.

    Science.gov (United States)

    Fitzpatrick, Douglas C; Roberts, Jason M; Kuwada, Shigeyuki; Kim, Duck O; Filipovic, Blagoje

    2009-12-01

    Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating interaural phase difference, while recording from neurons in the unanesthetized rabbit. We found that the cutoff frequency for neural synchronization to the binaural beat frequency (BBF) decreased between the IC and auditory cortex, and that this decrease was associated with an increase in the group delay. These features indicate that there is an increased temporal integration window in the cortex compared to the IC, complementing that seen with monaural signals. Comparable measurements of responses to amplitude modulation showed that the monaural and binaural temporal integration windows at the cortical level were quantitatively as well as qualitatively similar, suggesting that intrinsic membrane properties and afferent synapses to the cortical neurons govern the dynamic processing. The upper limits of synchronization to the BBF and the band-pass tuning characteristics of cortical neurons are a close match to human psychophysics.
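
    A binaural beat stimulus is straightforward to construct: presenting slightly different frequencies to the two ears makes the interaural phase difference cycle at the difference frequency (the BBF). A minimal sketch, with carrier and beat frequencies chosen arbitrarily:

```python
import numpy as np

def binaural_beat(carrier=500.0, bbf=4.0, dur=2.0, fs=44100):
    """Stereo stimulus: left ear at `carrier` Hz, right ear at
    `carrier + bbf` Hz, so the interaural phase difference sweeps
    through a full cycle `bbf` times per second."""
    t = np.arange(int(dur * fs)) / fs
    left = np.sin(2 * np.pi * carrier * t)
    right = np.sin(2 * np.pi * (carrier + bbf) * t)
    return np.stack([left, right], axis=1)   # samples x 2 channels

stim = binaural_beat()
print(stim.shape)   # write out with e.g. soundfile.write('beat.wav', stim, 44100)
```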

  15. BOLD responses to tactile stimuli in visual and auditory cortex depend on the frequency content of stimulation.

    Science.gov (United States)

    Nordmark, Per F; Pruszynski, J Andrew; Johansson, Roland S

    2012-10-01

    Although some brain areas preferentially process information from a particular sensory modality, these areas can also respond to other modalities. Here we used fMRI to show that such responsiveness to tactile stimuli depends on the temporal frequency of stimulation. Participants performed a tactile threshold-tracking task where the tip of either their left or right middle finger was stimulated at 3, 20, or 100 Hz. Whole-brain analysis revealed an effect of stimulus frequency in two regions: the auditory cortex and the visual cortex. The BOLD response in the auditory cortex was stronger during stimulation at hearable frequencies (20 and 100 Hz) whereas the response in the visual cortex was suppressed at infrasonic frequencies (3 Hz). Regardless of which hand was stimulated, the frequency-dependent effects were lateralized to the left auditory cortex and the right visual cortex. Furthermore, the frequency-dependent effects in both areas were abolished when the participants performed a visual task while receiving identical tactile stimulation as in the tactile threshold-tracking task. We interpret these findings in the context of the metamodal theory of brain function, which posits that brain areas contribute to sensory processing by performing specific computations regardless of input modality.

  16. Activation of right parietal cortex during memory retrieval of nonlinguistic auditory stimuli.

    Science.gov (United States)

    Klostermann, Ellen C; Loui, Psyche; Shimamura, Arthur P

    2009-09-01

    In neuroimaging studies, the left ventral posterior parietal cortex (PPC) is particularly active during memory retrieval. However, most studies have used verbal or verbalizable stimuli. We investigated neural activations associated with the retrieval of short, agrammatical music stimuli (Blackwood, 2004), which have been largely associated with right hemisphere processing. At study, participants listened to music stimuli and rated them on pleasantness. At test, participants made old/new recognition judgments with high/low confidence ratings. Right, but not left, ventral PPC activity was observed during the retrieval of these music stimuli. Thus, rather than indicating a special status of left PPC in retrieval, both right and left ventral PPC participate in memory retrieval, depending on the type of information that is to be remembered.

  17. Multihandicapped Children's Preferences for Pure Tones and Speech Stimuli as a Method of Assessing Auditory Capabilities.

    Science.gov (United States)

    Silva, Dennis A.; And Others

    1978-01-01

    Residual hearing capabilities of nine severely and profoundly retarded deaf-blind children (7-13 years old) were determined with an operant procedure that allowed the children to respond by making a selection between two responses, one which resulted in the presentation of auditory reinforcement or one which resulted in no reinforcement.…

  18. Diminished N1 auditory evoked potentials to oddball stimuli in misophonia patients

    Directory of Open Access Journals (Sweden)

    Arjan eSchröder

    2014-04-01

    Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study we investigated whether a dysfunction in the brain's early auditory processing system could be present in misophonia. We screened 20 patients meeting the diagnostic criteria for misophonia and 14 matched healthy controls without misophonia, and investigated potential deficits in auditory processing of misophonia patients using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a stream of repeated 1000 Hz standard tones in which oddball tones of 250 Hz and 4000 Hz were randomly embedded. We examined the P1, N1 and P2 components locked to the onset of the tones. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean peak amplitude than in the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones, and there were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones suggests an underlying neurobiological deficit in misophonia patients. This reduction might reflect a basic impairment in auditory processing in misophonia.
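
    A sketch of such an oddball sequence: 1000 Hz standards with 250 Hz and 4000 Hz deviants embedded at random positions. The deviant proportion and sequence length below are assumptions for illustration, not values reported in the paper:

        import numpy as np

        rng = np.random.default_rng(0)
        n_trials = 400
        n_deviants = 40                            # assumed ~10% deviants
        tones = np.full(n_trials, 1000.0)          # standard tones (Hz)
        odd_idx = rng.choice(n_trials, size=n_deviants, replace=False)
        tones[odd_idx[:n_deviants // 2]] = 250.0   # low oddballs
        tones[odd_idx[n_deviants // 2:]] = 4000.0  # high oddballs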

  19. Auditory brainstem responses for click and CE-chirp stimuli in individuals with and without occupational noise exposure

    Directory of Open Access Journals (Sweden)

    Zeena Venkatacheluvaiah Pushpalatha

    2016-01-01

    Introduction: Encoding of CE-chirp and click stimuli in the auditory system was studied using auditory brainstem responses (ABRs) among individuals with and without noise exposure. Materials and Methods: The study consisted of two groups. Group 1 (experimental group) consisted of 20 individuals (40 ears) exposed to occupational noise with hearing thresholds within 25 dB HL. They were further divided into three subgroups based on duration of noise exposure (0–5 years of exposure-T1, 5–10 years of exposure-T2, and >10 years of exposure-T3). Group 2 (control group) consisted of 20 individuals (40 ears). Absolute latency and amplitude of waves I, III, and V were compared between the two groups for both click and CE-chirp stimuli. T1, T2, and T3 groups were compared on the same parameters to assess the effect of noise exposure duration on CE-chirp and click ABR. Result: In click ABR, while both parameters for wave III were significantly poorer in the experimental group, wave V showed a significant decline in amplitude only. There was no significant difference in any parameter for wave I. In CE-chirp ABR, the latencies of all three waves were significantly prolonged in the experimental group, whereas amplitude was significantly decreased only for wave V. Discussion: Compared to click-evoked ABR, CE-chirp ABR was more sensitive on latency parameters in individuals with occupational noise exposure. Early pathological changes at the brainstem level can be monitored more effectively with the CE-chirp stimulus than with the click stimulus. Conclusion: This study indicates that ABRs obtained with CE-chirp stimuli serve as an effective tool to identify early pathological changes due to occupational noise exposure when compared to click-evoked ABR.

  20. Auditory brainstem responses for click and CE-chirp stimuli in individuals with and without occupational noise exposure.

    Science.gov (United States)

    Pushpalatha, Zeena Venkatacheluvaiah; Konadath, Sreeraj

    2016-01-01

    Encoding of CE-chirp and click stimuli in the auditory system was studied using auditory brainstem responses (ABRs) among individuals with and without noise exposure. The study consisted of two groups. Group 1 (experimental group) consisted of 20 individuals (40 ears) exposed to occupational noise with hearing thresholds within 25 dB HL. They were further divided into three subgroups based on duration of noise exposure (0-5 years of exposure-T1, 5-10 years of exposure-T2, and >10 years of exposure-T3). Group 2 (control group) consisted of 20 individuals (40 ears). Absolute latency and amplitude of waves I, III, and V were compared between the two groups for both click and CE-chirp stimuli. T1, T2, and T3 groups were compared on the same parameters to assess the effect of noise exposure duration on CE-chirp and click ABR. In click ABR, while both parameters for wave III were significantly poorer in the experimental group, wave V showed a significant decline in amplitude only. There was no significant difference in any parameter for wave I. In CE-chirp ABR, the latencies of all three waves were significantly prolonged in the experimental group, whereas amplitude was significantly decreased only for wave V. Compared to click-evoked ABR, CE-chirp ABR was more sensitive on latency parameters in individuals with occupational noise exposure. Early pathological changes at the brainstem level can be monitored more effectively with the CE-chirp stimulus than with the click stimulus. This study indicates that ABRs obtained with CE-chirp stimuli serve as an effective tool to identify early pathological changes due to occupational noise exposure when compared to click-evoked ABR.
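
    The CE-chirp presents low frequencies before high frequencies to compensate for the cochlear traveling-wave delay, so the cochlear partition responds more synchronously than it does to a click. The sketch below uses a generic logarithmic upward sweep from scipy.signal.chirp as a stand-in; the true CE-chirp follows a specific cochlear delay model that is not reproduced here:

        import numpy as np
        from scipy.signal import chirp

        fs = 48000
        dur = 0.010                                # 10 ms sweep, illustrative
        t = np.arange(int(fs * dur)) / fs
        # generic rising sweep (low frequencies first), standing in for the CE-chirp
        sweep = chirp(t, f0=100.0, t1=dur, f1=10000.0, method='logarithmic')
        # an idealized click: a single-sample impulse in a short buffer
        click = np.zeros(int(fs * 0.002))
        click[0] = 1.0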

  1. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    Directory of Open Access Journals (Sweden)

    Susan eDenham

    2014-02-01

    The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the 'ABA-' auditory streaming paradigm, we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which, by competing for dominance, give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in
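
    In the ABA- streaming paradigm, a triplet of a low tone (A), a high tone (B), and A again is followed by a silent slot and repeated for minutes. A sketch with assumed frequencies and timing (the study's exact parameters are not given in this abstract):

        import numpy as np

        fs = 44100
        slot = 0.125                               # duration of one tone slot (s); illustrative
        fA, fB = 440.0, 554.0                      # the A-B frequency separation drives streaming
        t = np.arange(int(fs * slot)) / fs
        A = np.sin(2 * np.pi * fA * t)
        B = np.sin(2 * np.pi * fB * t)
        rest = np.zeros_like(A)
        triplet = np.concatenate([A, B, A, rest])  # one 'ABA-' cycle
        sequence = np.tile(triplet, 120)           # a long repetitive sequence (~60 s here)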

  2. Responses of mink to auditory stimuli: Prerequisites for applying the ‘cognitive bias’ approach

    DEFF Research Database (Denmark)

    Svendsen, Pernille Maj; Malmkvist, Jens; Halekoh, Ulrich

    2012-01-01

    The aim of the study was to determine and validate prerequisites for applying a cognitive (judgement) bias approach to assessing welfare in farmed mink (Neovison vison). We investigated discrimination ability and associative learning ability using auditory cues. The mink (n = 15 females) were...... farmed mink in a judgement bias approach would thus appear to be feasible. However several specific issues are to be considered in order to successfully adapt a cognitive bias approach to mink, and these are discussed....

  3. The effects of two different auditory stimuli on functional arm movement in persons with Parkinson's disease: a dual-task paradigm.

    Science.gov (United States)

    Ma, Hui-Ing; Hwang, Wen-Juh; Lin, Keh-Chung

    2009-03-01

    OBJECTIVE: To examine, in a dual-task paradigm, the effect of auditory stimuli on people with Parkinson's disease. DESIGN: A counterbalanced repeated-measures design. SETTING: A motor control laboratory in a university setting. PARTICIPANTS: Twenty individuals with Parkinson's disease. EXPERIMENTAL CONDITIONS: Each participant did two experiments (marching music experiment and weather forecast experiment). In each experiment, the participant performed an upper extremity functional task as the primary task and listened to an auditory stimulus (marching music or weather forecast) as the concurrent task. Each experiment had three conditions: listening to the auditory stimulus, ignoring the auditory stimulus and no auditory stimulus. MAIN OUTCOME MEASURES: Kinematic variables of arm movement, including movement time, peak velocity, deceleration time and number of movement units. RESULTS: We found that performances of the participants were similar across the three conditions for the marching music experiment, but were significantly different for the weather forecast experiment. The comparison of condition effects between the two experiments indicated that the effect of the weather forecast was (marginally) significantly greater than that of the marching music. CONCLUSIONS: The results suggest that the type of auditory stimulus is important to the degree of interference with upper extremity performance in people with Parkinson's disease. Auditory stimuli that require semantic processing (e.g. a weather forecast) may distract attention from the primary task, and thus cause a decline in performance.

  4. Temporal order perception of auditory stimuli is selectively modified by tonal and non-tonal language environments.

    Science.gov (United States)

    Bao, Yan; Szymaszek, Aneta; Wang, Xiaoying; Oron, Anna; Pöppel, Ernst; Szelag, Elzbieta

    2013-12-01

    The close relationship between temporal perception and speech processing is well established. The present study focused on the specific question whether the speech environment could influence temporal order perception in subjects whose language backgrounds are distinctively different, i.e., Chinese (tonal language) vs. Polish (non-tonal language). Temporal order thresholds were measured for both monaurally presented clicks and binaurally presented tone pairs. Whereas the click experiment showed similar order thresholds for the two language groups, the experiment with tone pairs resulted in different observations: while Chinese demonstrated better performance in discriminating the temporal order of two "close frequency" tone pairs (600 Hz and 1200 Hz), Polish subjects showed a reversed pattern, i.e., better performance for "distant frequency" tone pairs (400 Hz and 3000 Hz). These results indicate on the one hand a common temporal mechanism for perceiving the order of two monaurally presented stimuli, and on the other hand neuronal plasticity for perceiving the order of frequency-related auditory stimuli. We conclude that the auditory brain is modified with respect to temporal processing by long-term exposure to a tonal or a non-tonal language. As a consequence of such an exposure different cognitive modes of operation (analytic vs. holistic) are selected: the analytic mode is adopted for "distant frequency" tone pairs in Chinese and for "close frequency" tone pairs in Polish subjects, whereas the holistic mode is selected for "close frequency" tone pairs in Chinese and for "distant frequency" tone pairs in Polish subjects, reflecting a double dissociation of function. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  5. Effects of aging on inner ear morphology in dogs in relation to brainstem responses to toneburst auditory stimuli.

    Science.gov (United States)

    Ter Haar, G; de Groot, J C M J; Venker-van Haagen, A J; van Sluijs, F J; Smoorenburg, G F

    2009-01-01

    Age-related hearing loss (ARHL) is the most common form of hearing loss in humans and is increasingly recognized in dogs. Cochlear lesions in dogs with ARHL are similar to those in humans, and the severity of the histological changes is reflected in tone audiograms. Ten geriatric dogs (mean age: 12.7 years) were studied, with three 9-month-old dogs serving as controls for histological analysis, in an observational study. Auditory thresholds were determined by recording brainstem responses (BERA) to toneburst auditory stimuli (1, 2, 4, 8, 12, 16, 24, and 32 kHz). After euthanasia and perfusion fixation, the temporal bones were harvested and processed for histological examination of the cochleas. The numbers of outer hair cells (OHCs) and inner hair cells (IHCs) were counted, and the spiral ganglion cell (SGC) packing density and stria vascularis cross-sectional area (SVCA) were determined. A combination of cochlear lesions was found in all geriatric dogs. There were significant reductions (P < .001) in OHC counts (42%, 95% confidence interval [CI]: 24-64%), IHC counts (21%, 95% CI: 62-90%) and SGC packing densities (323, 95% CI: 216-290) in the basal turn, and SVCA was smaller in all turns. The greatest reduction in auditory sensitivity was at 8-32 kHz. ARHL in this specific population of geriatric dogs was comparable histologically to the mixed type of ARHL in humans. The predominance of histological changes in the basal cochlear turn was consistent with the large threshold shifts observed in the middle- to high-frequency region.

  6. Do dolphins rehearse show-stimuli when at rest? Delayed matching of auditory memory

    Directory of Open Access Journals (Sweden)

    Dorothee eKremers

    2011-12-01

    The mechanisms underlying vocal mimicry in animals remain an open question. Delphinidae are able to copy sounds from their environment that are not produced by conspecifics. Usually, these mimicries occur associated with the context in which they were learned. No reports address the question of separation between auditory memory formation and spontaneous vocal copying although the sensory and motor phases of vocal learning are separated in a variety of songbirds. Here we show that captive bottlenose dolphins produce, during their nighttime resting periods, non-dolphin sounds that they heard during performance shows. Generally, in the middle of the night, these animals produced vocal copies of whale sounds that had been broadcast during daily public shows. As their life history was fully known, we know that these captive dolphins had never had the opportunity to hear whale sounds before then. Moreover, recordings made before the whale sounds started being broadcast revealed that they had never emitted such sounds before. This is to our knowledge the first evidence for a separation between formation of auditory memories and the process of learning to produce calls that match these memories in a marine mammal. One hypothesis is that dolphins may rehearse some special events heard during the daytime and that they then express vocally what could be conceived as a more global memory. These results open the way for broader views on how animals might rehearse life events while resting or maybe dreaming.

  7. Arousal-related P3a to novel auditory stimuli is abolished by a moderately low alcohol dose.

    Science.gov (United States)

    Marinkovic, K; Halgren, E; Maltzman, I

    2001-01-01

    Concurrent measures of event-related potentials (ERPs) and skin conductance responses were obtained in an auditory oddball task consisting of rare target, rare non-signal unique novel and frequent standard tones. Twelve right-handed male social drinkers participated in all four cells of the balanced placebo design in which effects of beverage and instructions as to the beverage content (expectancy) were independently manipulated. The beverage contained either juice only, or vodka mixed with juice in the ratio that successfully disguised the taste of alcohol and raised average peak blood-alcohol level to 0.045% (45 mg/dl). ERPs were sensitive to adverse effects of mild inebriation, whereas behavioural measures were not affected. Alcohol ingestion reliably increased N2 amplitude and reduced the late positive complex (LPC). A large, fronto-central P3a (280 ms latency) was recorded to novel sounds in the placebo condition, but only on the trials that also evoked electrodermal-orienting responses. Both novel and target stimuli evoked a posterior P3b (340 ms), which was independent of orienting. Alcohol selectively attenuated the P3a to novel sounds on trials with autonomic arousal. This evidence confirms the previously suggested distinction between the subcomponents of the LPC: P3a may be a central index of orienting to novel, task-irrelevant but potentially significant stimuli and is an important component of the arousal system. P3b does not have a clear relationship with arousal and may embody voluntary cognitive processing of rare task-related stimuli. Overall, these results indicate that alcohol affects multiple brain systems concerned with arousal, attentional processes and cognitive-autonomic integration.

  8. Pulse and entrainment to non-isochronous auditory stimuli: the case of north Indian alap.

    Directory of Open Access Journals (Sweden)

    Udo Will

    Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one's internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects' internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting 'social' effects of temporally regular music.

  9. Pulse and entrainment to non-isochronous auditory stimuli: the case of north Indian alap.

    Science.gov (United States)

    Will, Udo; Clayton, Martin; Wertheim, Ira; Leante, Laura; Berg, Eric

    2015-01-01

    Pulse is often understood as a feature of a (quasi-) isochronous event sequence that is picked up by an entrained subject. However, entrainment does not only occur between quasi-periodic rhythms. This paper demonstrates the expression of pulse by subjects listening to non-periodic musical stimuli and investigates the processes behind this behaviour. The stimuli are extracts from the introductory sections of North Indian (Hindustani) classical music performances (alap, jor and jhala). The first of three experiments demonstrates regular motor responses to both irregular alap and more regular jor sections: responses to alap appear related to individual spontaneous tempi, while for jor they relate to the stimulus event rate. A second experiment investigated whether subjects respond to average periodicities of the alap section, and whether their responses show phase alignment to the musical events. In the third experiment we investigated responses to a broader sample of performances, testing their relationship to spontaneous tempo, and the effect of prior experience with this music. Our results suggest an entrainment model in which pulse is understood as the experience of one's internal periodicity: it is not necessarily linked to temporally regular, structured sensory input streams; it can arise spontaneously through the performance of repetitive motor actions, or on exposure to event sequences with rather irregular temporal structures. Greater regularity in the external event sequence leads to entrainment between motor responses and stimulus sequence, modifying subjects' internal periodicities in such a way that they are either identical or harmonically related to each other. This can be considered as the basis for shared (rhythmic) experience and may be an important process supporting 'social' effects of temporally regular music.

  10. Learning to Associate Auditory and Visual Stimuli: Behavioral and Neural Mechanisms

    Science.gov (United States)

    Altieri, Nicholas; Stevenson, Ryan; Wallace, Mark T.; Wenger, Michael J.

    2014-01-01

    The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from the event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time (RT) and hazard function measure known as capacity (e.g., Townsend and Ashby, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy and, more significantly, an increase in capacity. The aim of this study was to relate capacity measures of multisensory learning to neural measures, namely mean Global Field Power (GFP). We observed a covariation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain. PMID:24276220
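
    Capacity measures of this kind compare the integrated hazard of the redundant (audio-visual) RT distribution with the sum of the single-modality integrated hazards, commonly C(t) = H_AV(t) / (H_A(t) + H_V(t)) with H(t) = -ln S(t), where S is the survivor function. A sketch with simulated RTs; the estimator and all parameter values are illustrative, not the authors' code:

        import numpy as np

        def integrated_hazard(rts, grid):
            """H(t) = -ln S(t) from the empirical survivor function S."""
            rts = np.sort(np.asarray(rts))
            surv = 1.0 - np.searchsorted(rts, grid, side='right') / rts.size
            return -np.log(np.clip(surv, 1e-6, 1.0))   # clip avoids log(0) in the tail

        def capacity(rt_av, rt_a, rt_v, grid):
            return integrated_hazard(rt_av, grid) / (
                integrated_hazard(rt_a, grid) + integrated_hazard(rt_v, grid))

        rng = np.random.default_rng(1)
        grid = np.linspace(0.2, 1.0, 50)               # time points (s)
        C = capacity(rng.gamma(4.0, 0.08, 500),        # simulated audio-visual RTs
                     rng.gamma(5.0, 0.08, 500),        # simulated auditory RTs
                     rng.gamma(5.0, 0.08, 500), grid)  # simulated visual RTs
        # C(t) > 1 indicates gains beyond an independent parallel-processing baseline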

  11. The oscillatory activities and its synchronization in auditory-visual integration as revealed by event-related potentials to bimodal stimuli

    Science.gov (United States)

    Guo, Jia; Xu, Peng; Yao, Li; Shu, Hua; Zhao, Xiaojie

    2012-03-01

    The neural mechanism of auditory-visual speech integration is a central topic in the study of multimodal perception. Articulatory gestures convey speech information that helps detect and disambiguate the auditory speech signal. Oscillations and their synchronization, important characteristics of the EEG, are increasingly applied in cognition research. This study analyzed EEG data acquired with unimodal and bimodal stimuli using time-frequency and phase-synchrony approaches, and investigated the oscillatory activities and synchrony modes underlying the evoked potentials during auditory-visual integration, in order to reveal the neural integration mechanisms behind these modes. Beta activity and differences in its synchronization were related to the gesture-evoked N1-P2, which occurs at an early stage of coding speech from articulatory action. Alpha oscillations and their synchronization, related to the auditory N1-P2, may be mainly responsible for auditory speech processing driven by anticipation from gesture to sound features. Changing visual gestures enhanced the interaction of auditory brain regions. These results help explain the changes in power and connectivity of event-evoked oscillatory activity that accompany ERPs during auditory-visual speech integration.
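
    Phase synchrony between two channels is commonly quantified by the phase-locking value, PLV = |mean(exp(i(phi1 - phi2)))|, with instantaneous phases taken from the Hilbert transform of band-passed signals. A generic sketch with assumed filter settings, not the study's pipeline:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def plv(x, y, fs, band):
            """Phase-locking value between two signals within a frequency band."""
            b, a = butter(4, np.asarray(band) / (fs / 2.0), btype='band')
            phase_x = np.angle(hilbert(filtfilt(b, a, x)))
            phase_y = np.angle(hilbert(filtfilt(b, a, y)))
            return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

        fs = 250
        t = np.arange(4 * fs) / fs
        rng = np.random.default_rng(2)
        x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
        y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
        print(plv(x, y, fs, (8.0, 13.0)))  # near 1 for these phase-locked alpha-band signals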

  12. Effects of auditory stimuli on electrical activity in the brain during cycle ergometry.

    Science.gov (United States)

    Bigliassi, Marcelo; Karageorghis, Costas I; Wright, Michael J; Orgs, Guido; Nowicky, Alexander V

    2017-08-01

    The present study sought to further understanding of the brain mechanisms that underlie the effects of music on perceptual, affective, and visceral responses during whole-body modes of exercise. Eighteen participants were administered light-to-moderate intensity bouts of cycle ergometer exercise. Each exercise bout was of 12-min duration (warm-up [3 min], exercise [6 min], and warm-down [3 min]). Portable techniques were used to monitor the electrical activity in the brain, heart, and muscle during the administration of three conditions: music, audiobook, and control. Conditions were randomized and counterbalanced to prevent any influence of systematic order on the dependent variables. Oscillatory potentials at the Cz electrode site were used to further understanding of time-frequency changes influenced by voluntary control of movements. Spectral coherence between Cz and frontal, frontal-central, central, central-parietal, and parietal electrode sites was also calculated. Perceptual and affective measures were taken at five timepoints during the exercise bout. Results indicated that music reallocated participants' attentional focus toward auditory pathways and reduced perceived exertion. The music also inhibited alpha resynchronization at the Cz electrode site and reduced the spectral coherence values at Cz-C4 and Cz-Fz. The reduced focal awareness induced by music led to a more autonomous control of cycle movements performed at light-to-moderate intensities. Processing of interoceptive sensory cues appears to upmodulate fatigue-related sensations and increase connectivity in the frontal and central regions of the brain, and is associated with neural resynchronization to sustain the imposed exercise intensity. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.
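
    Spectral coherence between electrode sites, Cxy(f) = |Pxy(f)|^2 / (Pxx(f) Pyy(f)), can be estimated with scipy.signal.coherence. A sketch on synthetic data; the channel names and parameters here are illustrative:

        import numpy as np
        from scipy.signal import coherence

        fs = 500
        t = np.arange(10 * fs) / fs
        rng = np.random.default_rng(3)
        shared = np.sin(2 * np.pi * 12 * t)               # component common to both sites
        cz = shared + rng.standard_normal(t.size)         # hypothetical Cz channel
        c4 = 0.8 * shared + rng.standard_normal(t.size)   # hypothetical C4 channel
        f, Cxy = coherence(cz, c4, fs=fs, nperseg=fs)     # Welch-based magnitude-squared coherence
        # Cxy peaks near 12 Hz, where the two channels share a component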

  13. [Changes of EEG power spectrum in response to the emotional auditory stimuli in patients in acute and recovery stages of TBI (traumatic brain injury)].

    Science.gov (United States)

    Portnova, G V; Gladun, K V; Sharova, E A; Ivanitskiĭ, A M

    2013-01-01

    We investigated the variability of responses to emotionally significant auditory stimulation in different groups of TBI (traumatic brain injury) patients in the acute state or in recovery. The patient sample consisted of three groups: patients in coma or a vegetative state, and patients with severe or moderate TBI in the recovery period. Subjects were stimulated with auditory stimuli containing important physiological sounds (coughing, vomiting), emotional sounds (laughing, crying), nature sounds (bird song, barking), unpleasant household sounds (nails scratching glass), natural sounds (sea, rain, fire) and neutral sounds (white noise). Background electroencephalographic activity was recorded for at least 7 minutes. EEG was recorded with the portable device "Entsefalan". Significant differences in the power of rhythmic activity registered during the presentation of the different stimulus types were analyzed using Matlab and Statistica 6.0. Results showed that the EEG response to emotional stimuli differed depending on the level of consciousness, the stimulus type, and the severity of TBI. The most pronounced changes in EEG spectral power in patients with TBI were found for unpleasant auditory stimulation. Responsiveness to pleasant stimulation emerged at later stages of recovery from coma than responsiveness to unpleasant stimulation. Alpha activity is reduced in patients with TBI: alpha rhythm depression was most evident in the control group, weaker in the group after moderate TBI, and weaker still in the group after severe TBI. Patients in coma or a vegetative state showed no response in rhythmic power in the alpha frequency range.
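
    Rhythm power in a band such as alpha is typically obtained by integrating a Welch power spectral density over the band limits. A generic sketch, not the study's Matlab pipeline; the band edges and window length are assumptions:

        import numpy as np
        from scipy.signal import welch

        def band_power(x, fs, band):
            """Approximate power of x in [band[0], band[1]] Hz from the Welch PSD."""
            f, psd = welch(x, fs=fs, nperseg=2 * fs)
            mask = (f >= band[0]) & (f <= band[1])
            return psd[mask].sum() * (f[1] - f[0])   # rectangle-rule integration

        fs = 250
        rng = np.random.default_rng(4)
        t = np.arange(60 * fs) / fs
        eeg = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)  # synthetic alpha + noise
        print(band_power(eeg, fs, (8.0, 13.0)))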

  14. The effects of neck flexion on cerebral potentials evoked by visual, auditory and somatosensory stimuli and focal brain blood flow in related sensory cortices.

    Science.gov (United States)

    Fujiwara, Katsuo; Kunita, Kenji; Kiyota, Naoe; Mammadova, Aida; Irei, Mariko

    2012-12-03

    A flexed neck posture leads to non-specific activation of the brain. Sensory evoked cerebral potentials and focal brain blood flow have been used to evaluate the activation of the sensory cortex. We investigated the effects of a flexed neck posture on the cerebral potentials evoked by visual, auditory and somatosensory stimuli and on focal brain blood flow in the related sensory cortices. Twelve healthy young adults received right visual hemi-field, binaural auditory and left median nerve stimuli while sitting with the neck in a resting and in a flexed (20° flexion) position. Sensory evoked potentials were recorded from the right occipital region, from Cz in accordance with the international 10-20 system, and from 2 cm posterior to C4, during visual, auditory and somatosensory stimulation, respectively. The oxyhemoglobin concentration was measured in the respective sensory cortex using near-infrared spectroscopy. Latencies of the late component of all sensory evoked potentials shortened significantly, and the amplitude of the auditory evoked potentials increased, when the neck was in the flexed position. Oxyhemoglobin concentrations in the left and right visual cortices were higher during visual stimulation in the flexed neck position; the left visual cortex is responsible for receiving the right visual hemi-field information. In addition, oxyhemoglobin concentrations in the bilateral auditory cortex during auditory stimulation, and in the right somatosensory cortex during somatosensory stimulation, were higher in the flexed neck position. Visual, auditory and somatosensory pathways were thus activated by neck flexion. The sensory cortices were selectively activated, reflecting the modalities of sensory projection to the cerebral cortex and inter-hemispheric connections.

  15. Behavioral determination of stimulus pair discrimination of auditory acoustic and electrical stimuli using a classical conditioning and heart-rate approach.

    Science.gov (United States)

    Morgan, Simeon J; Paolini, Antonio G

    2012-06-06

    Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight into the effectiveness of the implant, testing chronically implanted, awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques, such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning, have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and measurement reliability across repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus and discrimination between two stimuli. Heart rate is used as a measure of the fear response, which reduces or eliminates the requirement for time-consuming video coding of freezing behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and to test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the Cochlear
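
    Heart rate can be extracted from the ECG by detecting R-peaks and converting inter-beat intervals to beats per minute. A simple sketch using scipy.signal.find_peaks; the amplitude threshold and refractory period below are assumptions, not the authors' algorithm:

        import numpy as np
        from scipy.signal import find_peaks

        def instantaneous_heart_rate(ecg, fs):
            """Beats per minute from successive R-R intervals."""
            # amplitude threshold at the 98th percentile plus a 250 ms refractory period
            peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 98),
                                  distance=int(0.25 * fs))
            rr = np.diff(peaks) / fs   # inter-beat intervals (s)
            return 60.0 / rr           # instantaneous heart rate (bpm) per beat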

  16. Effects of aging on brainstem responses to toneburst auditory stimuli: a cross-sectional and longitudinal study in dogs.

    Science.gov (United States)

    Ter Haar, G; Venker-van Haagen, A J; van den Brom, W E; van Sluijs, F J; Smoorenburg, G F

    2008-01-01

    It is assumed that the hearing of dogs becomes impaired with advancing age, but little is known about the prevalence and electrophysiologic characteristics of presbycusis in this species. As in humans, hearing in dogs becomes impaired with aging across the entire frequency range, but primarily in the high-frequency area. This change can be assessed quantitatively by brainstem-evoked response audiometry (BERA). Three groups of 10 mixed-breed dogs with similar body weights but different mean ages were used. At the start of the study, the mean age was 1.9 years (range, 0.9-3.4) in group I, 5.7 years (3.5-7) in group II, and 12.7 years (11-14) in group III. In a cross-sectional study, the BERA audiograms obtained with toneburst stimuli were compared among the 3 groups. In a longitudinal study, changes in auditory thresholds of group II dogs were followed for 7 years. Thresholds were significantly higher in group III than in groups I and II at all frequencies tested, and higher in group II than in group I at 4 kHz. The audiograms in group II indicated a progressive increase in thresholds associated with aging starting around 8-10 years of age and most pronounced in the middle- to high-frequency region (8-32 kHz). Age-related hearing loss in these dogs started around 8-10 years of age and encompassed the entire frequency range, but started and progressed most rapidly in the middle- to high-frequency area. Its progression can be followed by BERA with frequency-specific stimulation.

  17. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we do not yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.
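
    Visual-to-auditory sonifications of the kind described typically map image rows to frequencies and columns to time, with pixel brightness setting amplitude. A minimal sketch of this generic mapping, not the specific algorithm used in the study:

        import numpy as np

        def sonify(image, fs=22050, col_dur=0.05, f_lo=200.0, f_hi=4000.0):
            """Rows -> log-spaced frequencies (top row highest), columns -> time."""
            rows, cols = image.shape
            freqs = np.geomspace(f_hi, f_lo, rows)
            t = np.arange(int(fs * col_dur)) / fs
            out = [sum(image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
                       for r in range(rows))
                   for c in range(cols)]
            return np.concatenate(out)

        img = np.zeros((8, 8))
        np.fill_diagonal(img, 1.0)   # a diagonal line sonifies as a falling pitch sweep
        audio = sonify(img)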

  18. Multiple auditory steady-state response thresholds to bone-conduction stimuli in young infants with normal hearing.

    Science.gov (United States)

    Small, Susan A; Stapells, David R

    2006-06-01

    Multiple auditory steady-state responses (ASSRs) probably will be incorporated into the diagnostic test battery for estimating hearing thresholds in young infants in the near future. Limiting this, however, is the fact that there are no published bone-conduction ASSR threshold data for infants with normal or impaired hearing. The objective of this study was to investigate bone-conduction ASSR thresholds in infants from a Neonatal Intensive Care Unit (NICU) and in young infants with normal hearing and to compare these with adult ASSR thresholds. ASSR thresholds to multiple bone-conduction stimuli (carrier frequencies: 500 to 4000 Hz; 77 to 101-Hz modulation rates; amplitude/frequency modulated; single-polarity stimulus) were obtained in two infant groups [N = 29 preterm (32 to 43 wk PCA), tested in NICU; N = 14 postterm (0 to 8 mo), tested in sound booth]. All infants had passed a hearing screening test. ASSR thresholds, amplitudes, and phase delays for preterm and postterm infants were compared with previously collected adult data. Mean (+/-1 SD) ASSR thresholds were 16 (11), 16 (10), 37 (10), and 33 (13) dB HL for the preterm infants and 14 (13), 2 (7), 26 (6), and 22 (8) dB HL for the postterm infants at 500, 1000, 2000, and 4000 Hz, respectively. Both infant groups had significantly better thresholds for 500 and 1000 Hz compared with 2000 and 4000 Hz, in contrast to adults who have similar thresholds across frequency (22, 26, 18, and 18 dB HL). When 500- and 1000-Hz thresholds were pooled, pre- and postterm infants had better low-frequency thresholds than adults. When 2000- and 4000-Hz thresholds were pooled, pre- and postterm infants had poorer thresholds than adults. ASSR amplitudes were significantly larger for low frequencies compared with high frequencies for both infant groups, in contrast to adults, who show little difference across frequency. ASSR phase delays were later for lower frequencies compared with higher frequencies for infants and adults
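
    ASSR stimuli of this kind are carriers modulated in amplitude (and here also frequency) at rates in the 77-101 Hz range. A sketch of one mixed-modulation stimulus; the modulation depths and rate are illustrative, not the study's calibrated values:

        import numpy as np

        fs = 44100
        t = np.arange(int(fs * 1.0)) / fs
        fc, fm = 1000.0, 85.0              # carrier frequency and modulation rate (Hz)
        am_depth, fm_depth = 1.0, 0.25     # 100% AM, 25% FM; illustrative depths
        envelope = 1.0 + am_depth * np.sin(2 * np.pi * fm * t)
        inst_freq = fc * (1.0 + fm_depth * np.sin(2 * np.pi * fm * t))
        phase = 2 * np.pi * np.cumsum(inst_freq) / fs   # integrate instantaneous frequency
        stim = 0.5 * envelope * np.sin(phase)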

  19. Directionality of auditory nerve fiber responses to pure tone stimuli in the grassfrog, Rana temporaria. II. Spike timing

    DEFF Research Database (Denmark)

    Jørgensen, M B; Christensen-Dalsgaard, J

    1997-01-01

    We studied the directionality of spike timing in the responses of single auditory nerve fibers of the grass frog, Rana temporaria, to tone burst stimulation. Both the latency of the first spike after stimulus onset and the preferred firing phase during the stimulus were studied. In addition, the ...

  20. Neural representation of the acoustic biotope. A comparison of the response of auditory neurons to tonal and natural stimuli in the cat.

    Science.gov (United States)

    Smolders, J W; Aertsen, A M; Johannesma, P I

    1979-11-01

    Cats were stimulated with tones and with natural sounds selected from the normal acoustic environment of the animal. Neural activity evoked by the natural sounds and tones was recorded in the cochlear nucleus and in the medial geniculate body. The set of biological sounds proved to be effective in influencing the neural activity of single cells at both levels of the auditory system. At the level of the cochlear nucleus, the response of a neuron evoked by a natural sound stimulus could be understood reasonably well on the basis of the structure of the spectrograms of the natural sounds and the unit's responses to tones. At the level of the medial geniculate body, analysis with tones did not provide sufficient information to explain the responses to natural sounds. At this level the use of an ensemble of natural sound stimuli allows the investigation of neural properties which are not seen by analysis with simple artificial stimuli. Guidelines for the construction of an ensemble of complex natural sound stimuli, based on the ecology and ethology of the animal under investigation, are discussed. This stimulus ensemble is defined as the Acoustic Biotope.

  1. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment.

    Science.gov (United States)

    Bathelt, Joe; Dale, Naomi; de Haan, Michelle

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment

    Directory of Open Access Journals (Sweden)

    Joe Bathelt

    2017-10-01

    Communication with visual signals, like facial expression, is important in early social development, but the question whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenital visual disorders of the anterior visual system (globe of the eye, retina, anterior optic nerve). Early-latency event-related potential responses showed no difference between the VI and control group, suggesting similar initial auditory processing. However, the mean amplitude over central and right frontal channels between 280 and 320 ms was reduced in response to own-name stimuli, but not control stimuli, in children with VI, suggesting differences in social processing. Children with VI also showed an increased rate of autistic-related behaviours, pragmatic language deficits, as well as peer relationship and emotional problems on standard parent questionnaires. These findings suggest that vision may be necessary for the typical development of social processing across modalities.

  3. The influence of speech stimuli contrast in cortical auditory evoked potentials

    Directory of Open Access Journals (Sweden)

    Kátia de Freitas Alvarenga

    2013-06-01

    Studies of auditory evoked potentials elicited by speech stimuli in normal-hearing individuals are important for understanding how stimulus complexity influences the characteristics of the auditory cognitive potential generated. OBJECTIVE: To characterize the cortical auditory evoked potential and the P3 auditory cognitive potential elicited by vocalic and consonantal contrast stimuli in normally hearing individuals. METHOD: 31 individuals with no auditory, neurologic, or language alterations, aged 7 to 30 years, participated in this study. The cortical auditory evoked potentials and the P3 auditory cognitive potential were recorded at the Fz and Cz active channels using the consonantal (/ba/-/da/) and vocalic (/i/-/a/) speech contrasts. DESIGN: Prospective cross-sectional cohort study. RESULTS: There was a difference between the speech contrast used and the latencies of the N2 (p = 0.00) and P3 (p = 0.00) components, as well as between the active channel considered (Fz/Cz) and the latency and amplitude values of P3. These differences did not occur for the exogenous components N1 and P2. CONCLUSION: The contrast of the speech stimulus, vocalic or consonantal, must be considered in the analysis of the cortical auditory evoked potential, N2 component, and the P3 auditory cognitive potential.

  4. An auditory multiclass brain-computer interface with natural stimuli: usability evaluation with healthy participants and a motor impaired end user

    Directory of Open Access Journals (Sweden)

    Nadine eSimon

    2015-01-01

    Brain-computer interfaces (BCIs) can serve as muscle-independent communication aids. Persons who are unable to control their eye muscles (e.g., in the completely locked-in state) or who have severe visual impairments for other reasons need BCI systems that do not rely on the visual modality. For this reason, BCIs that employ auditory stimuli were suggested. In this study, a multiclass BCI spelling system was implemented that uses animal voices with directional cues to code rows and columns of a letter matrix. To reveal possible training effects with the system, 11 healthy participants performed spelling tasks on two consecutive days. In a second step, the system was tested by a participant with amyotrophic lateral sclerosis (ALS) in two sessions. In the first session, healthy participants spelled with an average accuracy of 76% (3.29 bits/min) that increased to 90% (4.23 bits/min) on the second day. Spelling accuracy by the participant with ALS was 20% in the first and 47% in the second session. The results indicate a strong training effect for both the healthy participants and the participant with ALS. While healthy participants reached high accuracies in the first and second sessions, accuracies for the participant with ALS were not sufficient for satisfactory communication in either session. More training sessions might be needed to improve spelling accuracies. The study demonstrated the feasibility of the auditory BCI with healthy users and stresses the importance of training with auditory multiclass BCIs, especially for potential end-users of BCIs with disease.

  5. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth eHendrich

    2012-08-01

    Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  6. Knockdown of the dyslexia-associated gene Kiaa0319 impairs temporal responses to speech stimuli in rat primary auditory cortex.

    Science.gov (United States)

    Centanni, T M; Booker, A B; Sloan, A M; Chen, F; Maher, B J; Carraway, R S; Khodaparast, N; Rennaker, R; LoTurco, J J; Kilgard, M P

    2014-07-01

    One in 15 school-age children has dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discrimination ability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  7. Effects of background noise on inter-trial phase coherence and auditory N1-P2 responses to speech stimuli.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2015-10-01

    This study investigated the effects of a speech-babble background noise on inter-trial phase coherence (ITPC, also referred to as the phase-locking value, PLV) and auditory event-related responses (AERP) to speech sounds. Specifically, we analyzed EEG data from 11 normal-hearing subjects to examine whether ITPC can predict noise-induced variations in the obligatory N1-P2 complex response. N1-P2 amplitude and latency data were obtained for the /bu/ syllable in quiet and noise listening conditions. ITPC data in the delta, theta, and alpha frequency bands were calculated for the N1-P2 responses in the two passive listening conditions. Consistent with previous studies, background noise produced significant amplitude reduction and latency increase in N1 and P2, which were accompanied by significant ITPC decreases in all three frequency bands. Correlation analyses further revealed that variations in ITPC were able to predict the amplitude and latency variations in N1-P2. The results suggest that trial-by-trial analysis of cortical neural synchrony is a valuable tool in understanding the modulatory effects of background noise on AERP measures. Copyright © 2015 Elsevier B.V. All rights reserved.
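
    ITPC at each time point is the magnitude of the mean unit phase vector across trials, ITPC(t) = |(1/N) * sum_n exp(i*phi_n(t))|. A sketch computing it in one band from a trials-by-samples array; band-pass filtering plus the Hilbert transform stands in for whatever time-frequency decomposition the authors used:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def itpc(trials, fs, band):
            """trials: (n_trials, n_samples) array. Returns ITPC over time in `band`."""
            b, a = butter(4, np.asarray(band) / (fs / 2.0), btype='band')
            phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
            # 1 = perfect phase locking across trials, ~0 = random phases
            return np.abs(np.mean(np.exp(1j * phases), axis=0))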

  8. Quantitative Electromyographic Analysis of Reaction Time to External Auditory Stimuli in Drug-Naïve Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Do-Young Kwon

    2014-01-01

    Evaluation of motor symptoms in Parkinson's disease (PD) is still based on clinical rating scales applied by clinicians. Reaction time (RT) is the time interval between a specific stimulus and the start of the muscle response. The aim of this study was to identify the characteristics of RT responses in PD patients using electromyography (EMG) and to elucidate the relationship between RT and clinical features of PD. The EMG activity of 31 PD patients was recorded during isometric muscle contraction. RT was defined as the time latency between an auditory beep and the responsive EMG activity. PD patients demonstrated significant delays in both initiation and termination of muscle contraction compared with controls. Cardinal motor symptoms of PD were closely correlated with RT. RT was longer on the more-affected side and in more-advanced PD stages. Frontal cognitive function, which is indicative of motor programming and movement regulation and perseveration, was also closely related to RT. In conclusion, a prolonged RT is a characteristic motor feature of PD, and it could be used as a sensitive tool for motor function assessment in PD patients. Further investigations are required to clarify the clinical impact of RT on the activities of daily living of patients with PD.
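
    EMG onset for a reaction-time measure is often taken as the first post-stimulus sample at which the rectified, smoothed EMG exceeds the baseline mean plus a few standard deviations. A sketch; the smoothing window and threshold multiplier are assumptions rather than the authors' settings:

        import numpy as np

        def emg_reaction_time(emg, fs, stim_idx, k=3.0, baseline_s=0.5):
            """Latency (s) from the stimulus sample to EMG onset, or None if no crossing."""
            rect = np.abs(emg - np.mean(emg))              # rectify around the mean
            win = int(0.02 * fs)                           # 20 ms moving average
            env = np.convolve(rect, np.ones(win) / win, mode='same')
            base = env[stim_idx - int(baseline_s * fs):stim_idx]
            threshold = base.mean() + k * base.std()
            crossings = np.nonzero(env[stim_idx:] > threshold)[0]
            return crossings[0] / fs if crossings.size else None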

  9. Functional MRI of auditory responses in the zebra finch forebrain reveals a hierarchical organisation based on signal strength but not selectivity.

    Directory of Open Access Journals (Sweden)

    Tiny Boumans

    BACKGROUND: Male songbirds learn their songs from an adult tutor when they are young. A network of brain nuclei known as the 'song system' is the likely neural substrate for sensorimotor learning and production of song, but the neural networks involved in processing the auditory feedback signals necessary for song learning and maintenance remain unknown. Determining which regions show preferential responsiveness to the bird's own song (BOS) is of great importance because neurons sensitive to self-generated vocalisations could mediate this auditory feedback process. Neurons in the song nuclei and in a secondary auditory area, the caudal medial mesopallium (CMM), show selective responses to the BOS. The aim of the present study is to investigate the emergence of BOS selectivity within the network of primary auditory sub-regions in the avian pallium. METHODS AND FINDINGS: Using blood oxygen level-dependent (BOLD) fMRI, we investigated neural responsiveness to natural and manipulated self-generated vocalisations and compared the selectivity for BOS and conspecific song in different sub-regions of the thalamo-recipient area Field L. Zebra finch males were exposed to conspecific song, BOS and to synthetic variations on BOS that differed in spectro-temporal and/or modulation phase structure. We found significant differences in the strength of BOLD responses between regions L2a, L2b and CMM, but no inter-stimuli differences within regions. In particular, we have shown that the overall signal strength to song and synthetic variations thereof was different within two sub-regions of Field L2: zone L2a was significantly more activated compared to the adjacent sub-region L2b. CONCLUSIONS: Based on our results we suggest that unlike nuclei in the song system, sub-regions in the primary auditory pallium do not show selectivity for the BOS, but appear to show different levels of activity with exposure to any sound according to their place in the auditory

  10. The sensory channel of presentation alters subjective ratings and autonomic responses toward disgusting stimuli-Blood pressure, heart rate and skin conductance in response to visual, auditory, haptic and olfactory presented disgusting stimuli

    National Research Council Canada - National Science Library

    Croy, Ilona; Laqua, Kerstin; Süß, Frank; Joraschky, Peter; Ziemssen, Tjalf; Hummel, Thomas

    2013-01-01

.... Therefore, disgust experience evoked by four different sensory channels was compared. A total of 119 participants received three different disgusting stimuli and one control stimulus, each presented through the visual, auditory, tactile, and olfactory channels...

  11. Quantitative study of plasticity in the auditory nuclei of chick under conditions of prenatal sound attenuation and overstimulation with species specific and music sound stimuli.

    Science.gov (United States)

    Wadhwa, S; Anand, P; Bhowmick, D

    1999-06-01

Morphological effects of prenatal sound attenuation and sound overstimulation by species-specific and music sounds on the brainstem auditory nuclei of the chick have been evaluated quantitatively. Changes in length, volume, neuron number, size of neuronal nuclei and glial numbers of the second- and third-order auditory nuclei, n. magnocellularis (NM) and n. laminaris (NL), were determined from thionine-stained serial sections of control and experimental groups on posthatch day 1 using stereological methods. A significant increase in the volume of both auditory nuclei, attributable to increases in the length of the nucleus, the number and size of neurons, the number of glia and the neuropil, was observed in response to both species-specific and music overstimulation given during the critical period of development. The enhanced development of the auditory nuclei in response to a prenatally enriched environment indicates a positive effect of activity on neurons, which may have clinical implications in addition to providing an explanation for the preference for auditory cues in postnatal life. A reduction in neuron number, with a small increase in the proportion of large cell nuclei as well as an increase in glial numbers, was seen in both NM and NL of the prenatally sound-attenuated chicks. The increase in size of some neuronal nuclei may be evidence of enhanced synthesis of proteins involved in cell death or an attempt at recovery. The dissociated response of neurons and glia under sound-attenuated and auditory-stimulated conditions suggests that they are independently regulated by activity-dependent signals, with glia also being under the influence of other signals for a role in the removal of dead-cell debris.
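The abstract reports neuron and glia numbers obtained "using stereological methods" without naming the estimator. One common design-based choice is the optical fractionator, shown below as a hypothetical sketch: the raw count from systematically sampled disectors is scaled by the inverses of the section, area and thickness sampling fractions.

```python
def fractionator_estimate(counted, ssf, asf, tsf):
    """Optical-fractionator estimate of total object number:
    N = counted / (section sampling fraction * area sampling fraction
                   * thickness sampling fraction).
    Illustrative only; the study's exact stereological design is not stated."""
    return counted / (ssf * asf * tsf)

# Hypothetical sampling scheme: every 5th section, 10% of each section's
# area, and 60% of the section thickness.
total_neurons = fractionator_estimate(counted=150, ssf=1/5, asf=0.10, tsf=0.60)
# -> 150 / (0.2 * 0.1 * 0.6) = 12500 neurons
```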

  12. Near-Independent Capacities and Highly Constrained Output Orders in the Simultaneous Free Recall of Auditory-Verbal and Visuo-Spatial Stimuli

    Science.gov (United States)

    Cortis Mack, Cathleen; Dent, Kevin; Ward, Geoff

    2018-01-01

    Three experiments examined the immediate free recall (IFR) of auditory-verbal and visuospatial materials from single-modality and dual-modality lists. In Experiment 1, we presented participants with between 1 and 16 spoken words, with between 1 and 16 visuospatial dot locations, or with between 1 and 16 words "and" dots with synchronized…

  13. AUDITORY REACTION TIME IN BASKETBALL PLAYERS AND HEALTHY CONTROLS

    OpenAIRE

    Ghuntla Tejas P.; Mehta Hemant B.; Gokhale Pradnya A.; Shah Chinmay J.

    2013-01-01

Reaction is a purposeful voluntary response to a stimulus, such as a visual or auditory stimulus. Auditory reaction time is the time required to respond to an auditory stimulus. Quickness of response is very important in games like basketball. This study was conducted to compare the auditory reaction times of basketball players and healthy controls. Auditory reaction time was measured with a reaction-time instrument in healthy controls and basketball players. Simple reaction time and choice reaction time...

  14. Event-related potential response to auditory social stimuli, parent-reported social communicative deficits and autism risk in school-aged children with congenital visual impairment

    OpenAIRE

    Joe Bathelt; Naomi Dale; Michelle de Haan

    2017-01-01

Communication with visual signals, like facial expression, is important in early social development, but the question of whether these signals are necessary for typical social development remains to be addressed. The potential impact on social development of being born with no or very low levels of vision is therefore of high theoretical and clinical interest. The current study investigated event-related potential responses to basic social stimuli in a rare group of school-aged children with congenit...

  15. Time-varying auditory gain control in response to double-pulse stimuli in harbour porpoises is not mediated by a stapedial reflex

    Directory of Open Access Journals (Sweden)

    Asger Emil Munch Schrøder

    2017-04-01

    Full Text Available Echolocating animals reduce their output level and hearing sensitivity with decreasing echo delays, presumably to stabilize the perceived echo intensity during target approaches. In bats, this variation in hearing sensitivity is formed by a call-induced stapedial reflex that tapers off over time after the call. Here, we test the hypothesis that a similar mechanism exists in toothed whales by subjecting a trained harbour porpoise to a series of double sound pulses varying in delay and frequency, while measuring the magnitudes of the evoked auditory brainstem responses (ABRs. We find that the recovery of the ABR to the second pulse is frequency dependent, and that a stapedial reflex therefore cannot account for the reduced hearing sensitivity at short pulse delays. We propose that toothed whale auditory time-varying gain control during echolocation is not enabled by the middle ear as in bats, but rather by frequency-dependent mechanisms such as forward masking and perhaps higher-order control of efferent feedback to the outer hair cells.
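The inference turns on how the ABR to the second pulse recovers as a function of inter-pulse delay, separately per frequency: a stapedial reflex would attenuate broadly across frequency, so frequency-dependent recovery argues against it. A minimal sketch of that recovery analysis, with made-up amplitudes in place of the study's data:

```python
import numpy as np

# Hypothetical ABR peak amplitudes (uV): abr[freq_kHz][delay_ms] = (first, second)
abr = {
    63:  {2: (1.10, 0.42), 10: (1.08, 0.71), 50: (1.12, 1.05)},
    125: {2: (0.95, 0.50), 10: (0.97, 0.80), 50: (0.94, 0.92)},
}

for f_khz, by_delay in abr.items():
    recovery = {d: round(second / first, 2)
                for d, (first, second) in sorted(by_delay.items())}
    print(f"{f_khz} kHz recovery ratios:", recovery)
# Recovery curves that differ across frequency cannot be produced by a
# single broadband middle-ear (stapedial) attenuation acting alone.
```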

  16. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

Given the relevance of possible hearing losses due to sound overloads and the short list of references on objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound-pressure overload stimuli, and to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. Complex random-tone maskers and white noise caused no fatigue to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical desensitisation changes caused by exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  17. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of an absent stimulus. Here we will discuss four definitions of hallucination: 1. perception of a stimulus without the presence of any corresponding object; 2. hallucination proper: false perceptions that are not falsifications of real perceptions but manifest as a new subject, occurring alongside and synchronously with a real perception; 3. an out-of-body perception that has no accordance with any real subject; 4. in a stricter sense, a perception in a conscious and awake state, in the absence of external stimuli, that has the qualities of real perception, in that it is vivid, substantial, and located in external objective space. We are going to discuss these in detail here.

  18. Auditory Display

    DEFF Research Database (Denmark)

volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...... auditory display creation; data handling for auditory display systems; applications of auditory display.

  19. Preschool-Age Children and Adults Flexibly Shift Their Preferences for Auditory versus Visual Modalities but Do Not Exhibit Auditory Dominance

    Science.gov (United States)

    Noles, Nicholaus S.; Gelman, Susan A.

    2012-01-01

    The goal of this study was to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study was motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for…

  20. Psychophysics of Complex Auditory and Speech Stimuli

    Science.gov (United States)

    1993-10-31

[Abstract illegible in the source: the OCR-garbled text mentions pitch matchers performing higher than predicted and a variant of the octave illusion; the remainder is figure-legend residue contrasting single-feature search (red/orange, valid/invalid cues) with conjunction search.]

  1. Psychophysics of complex auditory and speech stimuli

    Science.gov (United States)

    Pastore, Richard E.

    1993-10-01

A major focus of the primary project is the use of different procedures to provide converging evidence on the nature of perceptual spaces for speech categories. Completed research examined initial voiced consonants, with results providing strong evidence that different stimulus properties may cue a phoneme category in different vowel contexts. Thus, /b/ is cued by a rising second formant (F2) with the vowel /a/, requires both F2 and F3 to be rising with /i/, and is independent of the release burst for these vowels. Furthermore, cues for phonetic contrasts are not necessarily symmetric, and the strong dependence of prior speech research on classification procedures may have led to errors. Thus, the opposite (falling F2 and F3) transitions lead to somewhat ambiguous percepts (i.e., not /b/) which may be labeled consistently (as /d/ or /g/) but require a release burst to achieve high category quality and similarity to category exemplars. Ongoing research is examining cues in other vowel contexts and using procedures to evaluate the nature of the interaction between cues for categories of both speech and music.

  2. Hierarchical classification as relational framing.

    Science.gov (United States)

    Slattery, Brian; Stewart, Ian

    2014-01-01

    The purpose of this study was to model hierarchical classification as contextually controlled, generalized relational responding or relational framing. In Experiment 1, a training procedure involving nonarbitrarily related multidimensional stimuli was used to establish two arbitrary shapes as contextual cues for 'member of' and 'includes' relational responding, respectively. Subsequently those cues were used to establish a network of arbitrary stimuli in particular hierarchical relations with each other, and then test for derivation of further untrained hierarchical relations as well as for transformation of functions. Resultant patterns of relational framing showed properties of transitive class containment, asymmetrical class containment, and unilateral property induction, consistent with conceptions of hierarchical classification as described within the cognitive developmental literature. Experiment 2 extended the basic model by using "fuzzy category" stimuli and providing a better controlled test of transformation of functions. Limitations and future research directions are discussed. © Society for the Experimental Analysis of Behavior.

  3. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and which are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  4. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  5. The subjective duration of audiovisual looming and receding stimuli.

    Science.gov (United States)

    Grassi, Massimo; Pavan, Andrea

    2012-08-01

    Looming visual stimuli (log-increasing in proximal size over time) and auditory stimuli (of increasing sound intensity over time) have been shown to be perceived as longer than receding visual and auditory stimuli (i.e., looming stimuli reversed in time). Here, we investigated whether such asymmetry in subjective duration also occurs for audiovisual looming and receding stimuli, as well as for stationary stimuli (i.e., stimuli that do not change in size and/or intensity over time). Our results showed a great temporal asymmetry in audition but a null asymmetry in vision. In contrast, the asymmetry in audiovision was moderate, suggesting that multisensory percepts arise from the integration of unimodal percepts in a maximum-likelihood fashion.
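The closing claim, that the audiovisual percept arises from integrating unimodal percepts "in a maximum-likelihood fashion", corresponds to the standard reliability-weighted cue-combination model: each modality is weighted by its inverse variance. A minimal sketch with illustrative numbers (not the study's data):

```python
def mle_combine(estimates, variances):
    """Maximum-likelihood (inverse-variance-weighted) cue combination.
    Returns the combined estimate and its variance; the combined variance
    is never larger than that of the most reliable single cue."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

# Hypothetical duration estimates (s) for a looming stimulus, with audition
# more reliable than vision.
duration, variance = mle_combine(estimates=[1.25, 1.10], variances=[0.01, 0.04])
# -> weights 0.8 (auditory) and 0.2 (visual); the combined estimate sits
#    between the strong auditory asymmetry and the null visual one.
```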

  6. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  7. Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

The present study focuses on examining the hypothesis that an auditory temporal perception deficit is a basic cause of reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since auditory perception involves a number of…

  8. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
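Agreement between modality conditions was assessed with Bland-Altman plots. The quantities behind such a plot are the bias (mean paired difference) and the 95% limits of agreement; the sketch below computes them from made-up scores standing in for the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical memory scores for the same children in two modality conditions.
auditory = [12, 14, 11, 15, 13, 12, 14]
visual   = [10, 13, 10, 14, 11, 12, 12]
bias, limits = bland_altman(auditory, visual)
# Narrow limits around a small bias indicate good agreement; each child
# contributes one (mean, difference) point to the plot.
```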

  9. Auditory and visual spatial impression: Recent studies of three auditoria

    Science.gov (United States)

    Nguyen, Andy; Cabrera, Densil

    2004-10-01

Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.

  10. Differential responses of primary auditory cortex in autistic spectrum disorder with auditory hypersensitivity.

    Science.gov (United States)

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako

    2012-01-25

    The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. Autistic disorder with hypersensitivity showed significantly more delayed M50/M100 peak latencies than autistic disorder without hypersensitivity or the control. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates auditory hypersensitivity in autistic spectrum disorder as a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in it. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

  11. Auditory-Visual Speech Integration by Adults with and without Language-Learning Disabilities

    Science.gov (United States)

    Norrix, Linda W.; Plante, Elena; Vance, Rebecca

    2006-01-01

    Auditory and auditory-visual (AV) speech perception skills were examined in adults with and without language-learning disabilities (LLD). The AV stimuli consisted of congruent consonant-vowel syllables (auditory and visual syllables matched in terms of syllable being produced) and incongruent McGurk syllables (auditory syllable differed from…

  12. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  13. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

Full Text Available In this study, it is demonstrated that moving sounds affect the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightward auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that participants perceived ambiguous visual apparent motion stimuli (equally likely to be perceived as moving left- or rightward) as moving in the same direction as the auditory apparent motion more often than in the opposite direction. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  14. Looming biases in monkey auditory cortex.

    Science.gov (United States)

    Maier, Joost X; Ghazanfar, Asif A

    2007-04-11

    Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was not attributable to the absolute intensity of the sounds nor can it be attributed to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.

  15. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    OpenAIRE

    Lotfi, Yones; Moosavi, Abdollah; Abdollahi, Farzaneh Zamiri; BAKHSHI, Enayatollah; Sadjedi, Hamed

    2016-01-01

Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help differentiate desired speech from other simultaneous sound sources. Aim o...

  16. Comparison of auditory deficits associated with neglect and auditory cortex lesions.

    Science.gov (United States)

    Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia

    2012-04-01

    In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. Only when two or more stimuli are presented simultaneously to the left and right, contralesional extinction has been observed after unilateral lesions of the auditory cortex. Because auditory extinction is also considered a sign of neglect, clinical separation of auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory cortex lesions and neglect. Similar results were obtained for words lateralized by inter-aural time differences. Consistent extinction after auditory cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory cortex lesions. Lateralization of auditory-evoked magnetic fields in auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

  18. Preschool children and adults flexibly shift their preferences for auditory versus visual modalities, but do not exhibit auditory dominance

    Science.gov (United States)

    Noles, Nicholaus S.; Gelman, Susan A.

    2012-01-01

    The goal of the present study is to evaluate the claim that young children display preferences for auditory stimuli over visual stimuli. This study is motivated by concerns that the visual stimuli employed in prior studies were considerably more complex and less distinctive than the competing auditory stimuli, resulting in an illusory preference for auditory cues. Across three experiments, preschool children and adults were trained to use paired audio-visual cues to predict the location of a target. At test, the cues were switched so that auditory cues indicated one location and visual cues indicated the opposite location. In contrast to prior studies, preschool age children did not exhibit auditory dominance. Instead, children and adults flexibly shifted their preferences as a function of the degree of contrast within each modality (with high contrast leading to greater use). PMID:22513210

  19. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

    Kassuba, Tanja; Menz, Mareike M; R�der, Brigitte

    2013-01-01

    they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory...... and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG......, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent...

  20. Abnormal synchrony and effective connectivity in patients with schizophrenia and auditory hallucinations

    Directory of Open Access Journals (Sweden)

    Maria de la Iglesia-Vaya

    2014-01-01

    These data indicate that an anomalous process of neural connectivity exists when patients with AH process emotional auditory stimuli. Additionally, a central role is suggested for the cerebellum in processing emotional stimuli in patients with persistent AH.

  1. Effects of boar stimuli on the follicular phase and on oestrous behaviour in sows

    NARCIS (Netherlands)

    Langendijk, P.; Soede, N.M.; Kemp, B.

    2006-01-01

    This review describes the role of boar stimuli in receptive behaviour, and the influence of boar stimuli during the follicular phase. Receptive behaviour (standing response) in an oestrous sow is elicited by boar stimuli, which can be olfactory, auditory, tactile, or visual. The relative importance

  2. Hierarchical photocatalysts.

    Science.gov (United States)

    Li, Xin; Yu, Jiaguo; Jaroniec, Mietek

    2016-05-07

    As a green and sustainable technology, semiconductor-based heterogeneous photocatalysis has received much attention in the last few decades because it has potential to solve both energy and environmental problems. To achieve efficient photocatalysts, various hierarchical semiconductors have been designed and fabricated at the micro/nanometer scale in recent years. This review presents a critical appraisal of fabrication methods, growth mechanisms and applications of advanced hierarchical photocatalysts. Especially, the different synthesis strategies such as two-step templating, in situ template-sacrificial dissolution, self-templating method, in situ template-free assembly, chemically induced self-transformation and post-synthesis treatment are highlighted. Finally, some important applications including photocatalytic degradation of pollutants, photocatalytic H2 production and photocatalytic CO2 reduction are reviewed. A thorough assessment of the progress made in photocatalysis may open new opportunities in designing highly effective hierarchical photocatalysts for advanced applications ranging from thermal catalysis, separation and purification processes to solar cells.

  3. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  4. Increased Auditory Startle Reflex in Children with Functional Abdominal Pain

    NARCIS (Netherlands)

    Bakker, Mirte J.; Boer, Frits; Benninga, Marc A.; Koelman, Johannes H. T. M.; Tijssen, Marina A. J.

    Objective To test the hypothesis that children with abdominal pain-related functional gastrointestinal disorders have a general hypersensitivity for sensory stimuli. Study design Auditory startle reflexes were assessed in 20 children classified according to Rome III classifications of abdominal

  5. Auditory Stimulus Equivalence and Non-Arbitrary Relations

    National Research Council Canada - National Science Library

    Stewart, Ian; Lavelle, Niamh

    2013-01-01

    This study extended previous research on stimulus equivalence with all auditory stimuli by using a methodology more similar to conventional match-to-sample training and testing for three 3-member equivalence relations...

  6. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.
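Old/new recognition performance of this kind is commonly summarized with the signal-detection sensitivity measure d'; the paper's exact metric is not stated here, so the sketch below is a generic illustration with hypothetical counts:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity for an old/new recognition test,
    with a 0.5 count correction to avoid infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts illustrating auditory memory trailing visual memory.
auditory_sensitivity = d_prime(60, 40, 30, 70)   # ~ 0.77
visual_sensitivity = d_prime(90, 10, 10, 90)     # ~ 2.52
```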

  7. Low-frequency versus high-frequency synchronisation in chirp-evoked auditory brainstem responses

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Gøtsche-Rasmussen, Kristian

    2011-01-01

This study investigates the frequency-specific contribution of chirp stimuli to the auditory brainstem response (ABR). Frequency-rising chirps were designed to compensate for the cochlear traveling-wave delay, and lead to larger wave-V amplitudes than click stimuli as more auditory nerve fibres...
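The chirp's design principle, emitting low frequencies earlier so that all cochlear places respond in synchrony, can be sketched by giving the stimulus a group delay that mirrors a power-law model of cochlear traveling-wave delay. The delay constants and band edges below are illustrative placeholders, not the values used in this study:

```python
import numpy as np

fs, n = 48000, 4096
f = np.fft.rfftfreq(n, 1 / fs)
df = f[1] - f[0]

# Power-law cochlear delay model tau(f) = k * f**-d (constants assumed).
k, d = 0.092, 0.436
f_lo, f_hi = 100.0, 10000.0
tau = k * np.clip(f, f_lo, None) ** -d        # long delays at low frequencies

# Emit each frequency early by its cochlear delay: t(f) = tau(f_lo) - tau(f),
# so low frequencies lead and the instantaneous frequency rises.
t_emit = k * f_lo ** -d - tau

# Impose t(f) as the group delay: phase(f) = -2*pi * integral of t up to f.
phase = -2 * np.pi * np.cumsum(t_emit) * df
spectrum = np.where((f >= f_lo) & (f <= f_hi), np.exp(1j * phase), 0.0)
chirp = np.fft.irfft(spectrum, n)
chirp /= np.abs(chirp).max()                  # normalize for presentation
```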

  8. Comparison of click and CE-chirp® stimuli on Brainstem Auditory Evoked Potential recording

    Directory of Open Access Journals (Sweden)

    Gabriela Ribeiro Ivo Rodrigues

    2012-12-01

Full Text Available PURPOSE: To compare the latencies and amplitudes of wave V on the Brainstem Auditory Evoked Potential (BAEP) recording obtained with click and CE-chirp® stimuli, and the presence or absence of waves I, III and V at high intensities. METHODS: Cross-sectional study with 12 adults with audiometric thresholds <15 dBHL (24 ears) and a mean age of 27 years. The parameters used for the recording with both stimuli at intensities of 80, 60, 40 and 20 dBnHL were alternate polarity and a repetition rate of 27.1 Hz. RESULTS: The CE-chirp® latencies for wave V were longer than click latencies at low intensity levels (20 and 40 dBnHL). At high intensity levels (60 and 80 dBnHL), the opposite occurred. Larger wave V amplitudes were observed with CE-chirp® at all intensity levels, except at 80 dBnHL. CONCLUSION: The CE-chirp® showed shorter latencies than those observed with clicks at high intensity levels and larger amplitudes at all intensity levels, except at 80 dBnHL. Waves I and III tended to disappear with CE-chirp® stimulation.

  9. Hierarchical XP

    OpenAIRE

    Jacobi, Carsten; Rumpe, Bernhard

    2014-01-01

XP is a light-weight methodology suited particularly to small teams that develop software with only vague or rapidly changing requirements. The discipline of systems engineering knows it as the approach of incremental system change, also called "muddling through". In this paper, we introduce three well-known methods of reorganizing companies, namely the holistic approach, the incremental approach, and the hierarchical approach. We show similarities between software engineering methods ...

  10. Continuity of visual and auditory rhythms influences sensorimotor coordination.

    Directory of Open Access Journals (Sweden)

    Manuel Varlet

Full Text Available People often coordinate their movements with visual and auditory environmental rhythms. Previous research showed better performance when coordinating with auditory compared to visual stimuli, and with bimodal compared to unimodal stimuli. However, these results have been demonstrated with discrete rhythms, and it is possible that such effects depend on the continuity of the stimulus rhythms (i.e., whether they are discrete or continuous). The aim of the current study was to investigate the influence of the continuity of visual and auditory rhythms on sensorimotor coordination. We examined the dynamics of synchronized oscillations of a wrist pendulum with auditory and visual rhythms at different frequencies, which were either unimodal or bimodal and discrete or continuous. Specifically, the stimuli used were a light flash, a fading light, a short tone and a frequency-modulated tone. The results demonstrate that the continuity of the stimulus rhythms strongly influences visual and auditory motor coordination. Participants' movements led continuous stimuli and followed discrete stimuli. Asymmetries between the half-cycles of the movement in terms of duration and nonlinearity of the trajectory occurred with slower discrete rhythms. Furthermore, the results show that the differences in performance between the visual and auditory modalities depend on the continuity of the stimulus rhythms, as indicated by movements closer to the instructed coordination for the auditory modality when coordinating with discrete stimuli. The results also indicate that visual and auditory rhythms are integrated in order to better coordinate irrespective of their continuity, as indicated by less variable coordination closer to the instructed pattern. Generally, the findings have important implications for understanding how we coordinate our movements with visual and auditory environmental rhythms in everyday life.
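Coordination in such tasks is typically quantified by the relative phase between movement and stimulus and by its variability. A minimal sketch of that analysis via the Hilbert transform (the study's exact measures are not restated here, so treat this as a generic illustration):

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(movement, stimulus):
    """Instantaneous relative phase (rad) between two oscillatory signals,
    plus its circular mean and a 0-1 stability index (mean resultant length)."""
    phi = np.angle(hilbert(movement) * np.conj(hilbert(stimulus)))
    mean_vector = np.exp(1j * phi).mean()
    return phi, np.angle(mean_vector), np.abs(mean_vector)

# Hypothetical trial: a 1 Hz rhythm and a movement lagging it by 30 ms.
t = np.arange(0, 30, 0.01)
stimulus = np.sin(2 * np.pi * 1.0 * t)
movement = np.sin(2 * np.pi * 1.0 * (t - 0.03)) + 0.05 * np.random.randn(t.size)
phi, mean_phi, stability = relative_phase(movement, stimulus)
# mean_phi < 0 with high stability: the movement consistently follows the
# stimulus, as reported here for discrete rhythms.
```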

  11. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

Full Text Available While it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task suggest correct recognition of words in the absence of detection under a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage for speech in signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge can also improve auditory detection when listeners have to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  12. Exposure to Virtual Social Stimuli Modulates Subjective Pain Reports

    Directory of Open Access Journals (Sweden)

    Jacob M Vigil

    2014-01-01

    Full Text Available BACKGROUND: Contextual factors, including the gender of researchers, influence experimental and patient pain reports. It is currently not known how social stimuli influence pain percepts, nor which types of sensory modalities of communication, such as auditory, visual or olfactory cues associated with person perception and gender processing, produce these effects.

  13. Infants' Preferential Attention to Sung and Spoken Stimuli

    Science.gov (United States)

    Costa-Giomi, Eugenia; Ilari, Beatriz

    2014-01-01

    Caregivers and early childhood teachers all over the world use singing and speech to elicit and maintain infants' attention. Research comparing infants' preferential attention to music and speech is inconclusive regarding their responses to these two types of auditory stimuli, with one study showing a music bias and another one…

  14. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC. In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  15. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  16. Neural mechanisms of auditory categorization: from across brain areas to within local microcircuits

    Directory of Open Access Journals (Sweden)

    Joji eTsunada

    2014-06-01

    Full Text Available Categorization enables listeners to efficiently encode and respond to auditory stimuli. Behavioral evidence for auditory categorization has been well documented across a broad range of human and non-human animal species. Moreover, neural correlates of auditory categorization have been documented in a variety of different brain regions in the ventral auditory pathway, which is thought to underlie auditory-object processing and auditory perception. Here, we review and discuss how neural representations of auditory categories are transformed across different scales of neural organization in the ventral auditory pathway: from across different brain areas to within local microcircuits. We propose different neural transformations across different scales of neural organization in auditory categorization. Along the ascending auditory system in the ventral pathway, there is a progression in the encoding of categories from simple acoustic categories to categories for abstract information. On the other hand, in local microcircuits, different classes of neurons differentially compute categorical information.

  17. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies using speech stimuli have demonstrated abnormal responses at higher levels of the central auditory system in subjects with persistent developmental stuttering (PDS). Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have a specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normally fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.

  18. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
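Auditory apparent motion of the kind described is commonly rendered by sweeping the interaural time difference (ITD) and interaural level difference (ILD) of a stereo signal across the trial (the Doppler cue is omitted here). A minimal sketch with illustrative parameter values, not those of the study:

```python
import numpy as np

fs, dur = 44100, 1.0
t = np.arange(int(fs * dur)) / fs
carrier = np.sin(2 * np.pi * 500 * t)            # 500 Hz tone

# Sweep ITD from -0.6 ms to +0.6 ms and ILD from -6 dB to +6 dB to
# simulate rightward motion (positive values favour the right ear).
itd = np.linspace(-6e-4, 6e-4, t.size)
ild_db = np.linspace(-6.0, 6.0, t.size)

# Time-varying delay via interpolation: the lagging ear gets the delayed copy.
left = np.interp(t - np.maximum(itd, 0.0), t, carrier)
right = np.interp(t + np.minimum(itd, 0.0), t, carrier)

# Split the level difference symmetrically around the midline.
left *= 10.0 ** (-ild_db / 40.0)
right *= 10.0 ** (ild_db / 40.0)
stereo = np.column_stack([left, right])          # ready for presentation
```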

  19. Neurophysiological investigation of idiopathic acquired auditory-visual synesthesia.

    Science.gov (United States)

    Afra, Pegah; Anderson, Jeffrey; Funke, Michael; Johnson, Michael; Matsuo, Fumisuke; Constantino, Tawnya; Warner, Judith

    2012-01-01

We present a case of acquired auditory-visual synesthesia and its neurophysiological investigation in a healthy 42-year-old woman. She started experiencing persistent positive and intermittent negative visual phenomena at age 37 followed by auditory-visual synesthesia. Her neurophysiological investigation included video-EEG, fMRI, and MEG. Auditory stimuli (700 Hz, 50 ms duration, 0.5 s ISI) were presented binaurally at 60 dB above the hearing threshold in a dark room. The patient had bilateral symmetrical auditory-evoked neuromagnetic responses followed by an occipital-evoked field 16.3 ms later. The activation of occipital cortex following auditory stimuli may represent recruitment of existing cross-modal sensory pathways.

  20. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to the 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained over a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
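
    To make the stimulus class concrete, the sketch below builds "frozen" noise by generating one white-noise segment whose length fixes the repetition rate and tiling it for the full duration; the sampling rate, duration, and function names are illustrative assumptions rather than details from the study.

    ```python
    import numpy as np

    def frozen_noise(repetition_hz, duration_s, fs=44100, seed=0):
        """Periodic 'frozen' noise: a single white-noise segment of length
        1/repetition_hz seconds is generated once and repeated, so the
        waveform is random within a cycle but identical across cycles."""
        rng = np.random.default_rng(seed)
        segment = rng.standard_normal(int(round(fs / repetition_hz)))
        n_samples = int(duration_s * fs)
        n_reps = int(np.ceil(n_samples / len(segment)))
        return np.tile(segment, n_reps)[:n_samples]

    # One 2-s stimulus per repetition rate used in the study:
    stimuli = {rate: frozen_noise(rate, duration_s=2.0)
               for rate in (5, 10, 50, 200, 500)}
    ```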

  1. Emotion Recognition in Animated Compared to Human Stimuli in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura

    2015-01-01

    There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…

  2. Cross-modal preference acquisition: Evaluative conditioning of pictures by affective olfactory and auditory cues.

    NARCIS (Netherlands)

    van Reekum, C.M.; van den Berg, H.; Frijda, N.H.

    1999-01-01

    A cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were

  3. Selective integration of auditory-visual looming cues by humans.

    Science.gov (United States)

    Cappe, Céline; Thut, Gregor; Romei, Vincenzo; Murray, Micah M

    2009-03-01

    An object's motion relative to an observer can confer ethologically meaningful information. Approaching or looming stimuli can signal threats/collisions to be avoided or prey to be confronted, whereas receding stimuli can signal successful escape or failed pursuit. Using movement detection and subjective ratings, we investigated the multisensory integration of looming and receding auditory and visual information by humans. While prior research has demonstrated a perceptual bias for unisensory and more recently multisensory looming stimuli, none has investigated whether there is integration of looming signals between modalities. Our findings reveal selective integration of multisensory looming stimuli. Performance was significantly enhanced for looming stimuli over all other multisensory conditions. Contrasts with static multisensory conditions indicate that only multisensory looming stimuli resulted in facilitation beyond that induced by the sheer presence of auditory-visual stimuli. Controlling for variation in physical energy replicated the advantage for multisensory looming stimuli. Finally, only looming stimuli exhibited a negative linear relationship between enhancement indices for detection speed and for subjective ratings. Maximal detection speed was attained when motion perception was already robust under unisensory conditions. The preferential integration of multisensory looming stimuli highlights that complex ethologically salient stimuli likely require synergistic cooperation between existing principles of multisensory integration. A new conceptualization of the neurophysiologic mechanisms mediating real-world multisensory perceptions and action is therefore supported.

  4. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  5. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT: Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene.

  6. Cardiorespiratory interactions to external stimuli.

    Science.gov (United States)

    Bernardi, L; Porta, C; Spicuzza, L; Sleight, P

    2005-09-01

    Respiration is a powerful modulator of heart rate variability and of baro- or chemo-reflex sensitivity. This occurs via a mechanical effect of breathing that synchronizes all cardiovascular variables at the respiratory rhythm, particularly when breathing slows to a rate coincident with the Mayer waves in arterial pressure (approximately 6 cycles/min). Recitation of the rosary prayer (or of most mantras) induces a marked enhancement of these slow rhythms, whereas random verbalization or random breathing does not. This phenomenon in turn increases baroreflex sensitivity and reduces chemoreflex sensitivity, leading to increases in parasympathetic and reductions in sympathetic activity. The opposite can be seen during either verbalization or mental stress tests. Qualitatively similar effects can be obtained even by passive listening to more or less rhythmic auditory stimuli, such as music, and the speed of the rhythm (rather than the style) appears to be one of the main determinants of the cardiovascular and respiratory responses. These findings have clinical relevance. Appropriate modulation of breathing can improve or restore autonomic control of the cardiovascular and respiratory systems in relevant diseases such as hypertension and heart failure, and might therefore help improve exercise tolerance, quality of life, and ultimately, survival.

  7. Neural Processing of Emotional Musical and Nonmusical Stimuli in Depression.

    Science.gov (United States)

    Lepping, Rebecca J; Atchley, Ruth Ann; Chrysikou, Evangelia; Martin, Laura E; Clair, Alicia A; Ingram, Rick E; Simmons, W Kyle; Savage, Cary R

    2016-01-01

    Anterior cingulate cortex (ACC) and striatum are part of the emotional neural circuitry implicated in major depressive disorder (MDD). Music is often used for emotion regulation, and pleasurable music listening activates the dopaminergic system in the brain, including the ACC. The present study uses functional MRI (fMRI) and an emotional nonmusical and musical stimuli paradigm to examine how neural processing of emotionally provocative auditory stimuli is altered within the ACC and striatum in depression. Nineteen MDD and 20 never-depressed (ND) control participants listened to standardized positive and negative emotional musical and nonmusical stimuli during fMRI scanning and gave subjective ratings of valence and arousal following scanning. ND participants exhibited greater activation to positive versus negative stimuli in ventral ACC. When compared with ND participants, MDD participants showed a different pattern of activation in ACC. In the rostral part of the ACC, ND participants showed greater activation for positive information, while MDD participants showed greater activation to negative information. In dorsal ACC, the pattern of activation distinguished between the types of stimuli, with ND participants showing greater activation to music compared to nonmusical stimuli, while MDD participants showed greater activation to nonmusical stimuli, with the greatest response to negative nonmusical stimuli. No group differences were found in striatum. These results suggest that people with depression may process emotional auditory stimuli differently based on both the type of stimulation and the emotional content of that stimulation. This raises the possibility that music may be useful in retraining ACC function, potentially leading to more effective and targeted treatments.

  9. Dopaminergic medication alters auditory distractor processing in Parkinson's disease.

    Science.gov (United States)

    Georgiev, Dejan; Jahanshahi, Marjan; Dreo, Jurij; Čuš, Anja; Pirtošek, Zvezdan; Repovš, Grega

    2015-03-01

    Parkinson's disease (PD) patients show signs of cognitive impairment, such as executive dysfunction, working memory problems and attentional disturbances, even in the early stages of the disease. Though the motor symptoms of the disease are often successfully addressed by dopaminergic medication, it remains unclear how dopaminergic therapy affects cognitive function. The main objective of this study was to assess the effect of dopaminergic medication on visual and auditory attentional processing. Fourteen PD patients and 13 matched healthy controls performed a three-stimulus auditory and visual oddball task while their EEG was recorded. The patients performed the task twice, once on- and once off-medication. While the results showed no significant differences between PD patients and controls, they did reveal a significant increase in P3 amplitude on- vs. off-medication that was specific to the processing of auditory distractors and no other stimuli. These results indicate a significant effect of dopaminergic therapy on the processing of distracting auditory stimuli. Given the lack of between-group differences, the effect could reflect either 1) improved recruitment of attentional resources to auditory distractors; 2) reduced ability for cognitive inhibition of auditory distractors; 3) an increased response to distractor stimuli resulting in impaired cognitive performance; or 4) a hindered ability to discriminate between auditory distractors and targets. Further studies are needed to differentiate between these possibilities. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. fMRI of the auditory system: understanding the neural basis of auditory gestalt.

    Science.gov (United States)

    Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich

    2003-12-01

    Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system progressed at a considerably slower pace than that of other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between the fMRI technique and audition. A well-known difficulty arises from the high-intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli, both perceptually and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by auditory research. These novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and thus may permit filling the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated into the auditory gestalt is beginning to be understood.

  11. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person walking: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  12. Sleep Disrupts High-Level Speech Parsing Despite Significant Basic Auditory Processing.

    Science.gov (United States)

    Makov, Shiri; Sharon, Omer; Ding, Nai; Ben-Shachar, Michal; Nir, Yuval; Zion Golumbic, Elana

    2017-08-09

    The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep. SIGNIFICANCE STATEMENT: Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are similarly preserved.
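
    Because each linguistic level recurs at its own fixed rate in such a design, its neural signature can be read out as spectral power at that rate. The sketch below shows this readout for a single EEG channel; the 4/2/1/0.5 Hz rates (a typical frequency-tagging arrangement) and all names are illustrative assumptions, not the study's exact parameters.

    ```python
    import numpy as np

    def level_power(eeg, fs, rates=None):
        """Spectral power of one EEG channel at the presentation rate of
        each linguistic level. With isochronous 250-ms syllables,
        two-syllable words recur at 2 Hz, two-word phrases at 1 Hz, etc."""
        if rates is None:
            rates = {"syllable": 4.0, "word": 2.0, "phrase": 1.0, "sentence": 0.5}
        spectrum = np.abs(np.fft.rfft(eeg)) ** 2
        freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
        return {name: spectrum[np.argmin(np.abs(freqs - f))]
                for name, f in rates.items()}

    fs = 256
    eeg = np.random.randn(fs * 60)  # stand-in for one minute of recorded data
    print(level_power(eeg, fs))
    ```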

  13. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript Web experiments.

    Science.gov (United States)

    Reimers, Stian; Stewart, Neil

    2016-09-01

    Substantial recent research has examined the accuracy of presentation durations and response time measurements for visually presented stimuli in Web-based experiments, with a general conclusion that accuracy is acceptable for most kinds of experiments. However, many areas of behavioral research use auditory stimuli instead of, or in addition to, visual stimuli. Much less is known about auditory accuracy using standard Web-based testing procedures. We used a millisecond-accurate Black Box Toolkit to measure the actual durations of auditory stimuli and the synchronization of auditory and visual presentation onsets. We examined the distribution of timings for 100 presentations of auditory and visual stimuli across two computers with different specs, three commonly used browsers, and code written in either Adobe Flash or JavaScript. We also examined different coding options for attempting to synchronize the auditory and visual onsets. Overall, we found that auditory durations were very consistent, but that the lags between visual and auditory onsets varied substantially across browsers and computer systems.
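
    Given externally recorded traces of the two onsets (for instance a photodiode trace for the display and a microphone trace for the speaker, as a hardware timing kit captures), the audio-visual lag can be estimated with simple threshold-crossing onset detection, roughly as sketched below; the function names and the 10% threshold are illustrative assumptions.

    ```python
    import numpy as np

    def onset_sample(trace, threshold_frac=0.1):
        """Index of the first sample exceeding a fraction of the trace's
        peak, after removing the baseline offset estimated from the first
        samples."""
        trace = np.abs(trace - trace[:100].mean())
        return int(np.argmax(trace > threshold_frac * trace.max()))

    def av_lag_ms(mic, photodiode, fs):
        """Audio-minus-visual onset lag in milliseconds: positive values
        mean the sound started after the visual stimulus appeared."""
        return (onset_sample(mic) - onset_sample(photodiode)) * 1000.0 / fs
    ```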

  14. The division of attention and the human auditory evoked potential

    Science.gov (United States)

    Hink, R. F.; Van Voorhis, S. T.; Hillyard, S. A.; Smith, T. S.

    1977-01-01

    The sensitivity of the scalp-recorded, auditory evoked potential to selective attention was examined while subjects responded to stimuli presented to one ear (focused attention) and to both ears (divided attention). The amplitude of the N1 component was found to be largest to stimuli in the ear upon which attention was to be focused, smallest to stimuli in the ear to be ignored, and intermediate to stimuli in both ears when attention was divided. The results are interpreted as supporting a capacity model of attention.

  15. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. Copyright © 2013 Elsevier Inc. All rights reserved.

  16. Effects of Auditory Information on Self-Motion Perception during Simultaneous Presentation of Visual Shearing Motion

    Directory of Open Access Journals (Sweden)

    Shigehito eTanahashi

    2015-06-01

    Recent studies have found that self-motion perception induced by simultaneous presentation of visual and auditory motion is facilitated when the directions of the visual and auditory motion stimuli are identical. They did not, however, examine the possible contribution of auditory motion information to determining the direction of self-motion perception. To examine this, a visual stimulus projected on a hemisphere screen and an auditory stimulus presented through headphones were presented separately or simultaneously, depending on the experimental condition. The participant continuously indicated the direction and strength of self-motion during the 130-s experimental trial. When the visual stimulus with a horizontal shearing rotation and the auditory stimulus with a horizontal one-directional rotation were presented simultaneously, the duration and strength of self-motion perceived in the opposite direction of the auditory rotation stimulus were significantly longer and stronger than those perceived in the same direction as the auditory rotation stimulus. However, the auditory stimulus alone could not sufficiently induce self-motion perception, and when it did, its direction was not consistent within each experimental trial. We concluded that auditory motion information can determine the perceived direction of self-motion during simultaneous presentation of visual and auditory motion information, at least when the visual stimuli move in opposing directions (around the yaw axis). We speculate that the contribution of auditory information depends on the plausibility and information balance of the visual and auditory information.

  17. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  18. Hidden Hearing Loss and Computational Models of the Auditory Pathway: Predicting Speech Intelligibility Decline

    Science.gov (United States)

    2016-11-28

    Thomas F. Quatieri (Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, USA). A common complaint of listeners with normal... auditory nerve (AN) responses to speech stimuli under a variety of difficult listening conditions. The resulting cochlear neurogram, a spectrogram...

  19. Effects of Auditory and Visual Priming on the Identification of Spoken Words.

    Science.gov (United States)

    Shigeno, Sumi

    2017-04-01

    This study examined the effects of preceding contextual stimuli, either auditory or visual, on the identification of spoken target words. Fifty-one participants (29% males, 71% females; mean age = 24.5 years, SD = 8.5) were divided into three groups: no context, auditory context, and visual context. All target stimuli were spoken words masked with white noise. The relationships between the context and target stimuli were as follows: identical word, similar word, and unrelated word. Participants presented with context experienced a sequence of six context stimuli in the form of either spoken words or photographs. Auditory and visual context conditions produced similar results, but the auditory context aided word identification more than the visual context in the similar word relationship. We discuss these results in the light of top-down processing, motor theory, and the phonological system of language.

  20. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain the discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimated how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in the control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory results.
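
    The core measurement in a frequency-tagging design of this kind can be sketched as follows: averaging complex spectra across trials before taking the magnitude retains only activity phase-locked to the stimulus, whose amplitude is then read out at each stream's tag frequency. All names and values are illustrative assumptions, not the study's analysis code.

    ```python
    import numpy as np

    def phase_locked_amp(trials, fs, tag_hz):
        """Amplitude at tag_hz of the trial-averaged complex spectrum.
        Averaging spectra (not power) cancels non-phase-locked activity,
        as in steady-state response analyses.
        trials: array of shape (n_trials, n_samples)."""
        mean_spectrum = np.fft.rfft(trials, axis=1).mean(axis=0)
        freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
        return np.abs(mean_spectrum[np.argmin(np.abs(freqs - tag_hz))])

    # e.g., compare the two streams' tags (frequencies assumed):
    # amp_attended = phase_locked_amp(epochs, fs=600, tag_hz=35.0)
    # amp_ignored  = phase_locked_amp(epochs, fs=600, tag_hz=45.0)
    ```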

  1. An interactive model of auditory-motor speech perception.

    Science.gov (United States)

    Liebenthal, Einat; Möttönen, Riikka

    2017-12-18

    Mounting evidence indicates a role in the perceptual decoding of speech for the dorsal auditory stream connecting temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Auditory/visual distance estimation: accuracy and variability

    Directory of Open Access Journals (Sweden)

    Paul Wallace Anderson

    2014-10-01

    Past research has shown that auditory distance estimation improves when listeners are given the opportunity to see all possible sound sources, compared to no visual input. It has also been established that distance estimation is more accurate in vision than in audition. The present study investigates the degree to which auditory distance estimation is improved when matched with a congruent visual stimulus. Virtual sound sources based on binaural room impulse response (BRIR) measurements made at distances ranging from approximately 0.3 to 9.8 m in a concert hall were used as auditory stimuli. Visual stimuli were photographs taken from the listener's perspective at each distance in the impulse response measurement setup, presented on a large HDTV monitor. Listeners were asked to estimate the egocentric distance to the sound source in each of three conditions: auditory only (A), visual only (V), and congruent auditory/visual stimuli (A+V). Each condition was presented within its own block. Sixty-two listeners were tested in order to quantify the response variability inherent in auditory distance perception. Distance estimates from both the V and A+V conditions were found to be considerably more accurate and less variable than estimates from the A condition.
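
    The virtual-source technique described here amounts to convolving a dry (anechoic) recording with the left- and right-ear BRIRs measured at the desired distance; a minimal scipy sketch, with all variable names assumed for illustration:

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def virtualize(dry_sound, brir_left, brir_right):
        """Render a virtual source at the BRIR's measured distance by
        convolving a dry signal with the left/right binaural room impulse
        responses, yielding a stereo signal for headphone presentation."""
        left = fftconvolve(dry_sound, brir_left)
        right = fftconvolve(dry_sound, brir_right)
        return np.stack([left, right], axis=-1)

    # stereo = virtualize(speech_mono, brir_0p3m_left, brir_0p3m_right)
    ```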

  3. Responses of Neurons in the Marmoset Primary Auditory Cortex to Interaural Level Differences: Comparison of Pure Tones and Vocalizations.

    Directory of Open Access Journals (Sweden)

    Leo L Lui

    2015-04-01

    Interaural level differences (ILDs) are the dominant cue for localizing the sources of high-frequency sounds that differ in azimuth. Neurons in the primary auditory cortex (A1) respond differentially to ILDs of simple stimuli such as tones and noise bands, but the extent to which this applies to complex natural sounds, such as vocalizations, is not known. In sufentanil/N2O-anaesthetized marmosets, we compared the responses of 76 A1 neurons to three vocalizations (Ock, Tsik and Twitter) and pure tones at cells' characteristic frequency. Each stimulus was presented with ILDs ranging from 20 dB favouring the contralateral ear to 20 dB favouring the ipsilateral ear, to cover most of the frontal azimuthal space. The response to each stimulus was tested at three average binaural levels (ABLs). Most neurons were sensitive to the ILDs of vocalizations and pure tones. For all stimuli, the majority of cells had monotonic ILD sensitivity functions favouring the contralateral ear, but we also observed ILD sensitivity functions that peaked near the midline and functions favouring the ipsilateral ear. Representation of ILD in A1 was better for pure tones and the Ock vocalization than for the Tsik and Twitter calls; this was reflected by higher discrimination indices and greater modulation ranges. ILD sensitivity was heavily dependent on ABL: changes in ABL by ±20 dB SPL from the optimal level for ILD sensitivity led to significant decreases in ILD sensitivity for all stimuli, although ILD sensitivity to pure tones and Ock calls was most robust to such ABL changes. Our results demonstrate differences in ILD coding for pure tones and vocalizations, showing that ILD sensitivity in A1 to complex sounds cannot be simply extrapolated from that to pure tones. They also show that A1 neurons do not show a level-invariant representation of ILD, suggesting that such a representation of auditory space is likely to require population coding and further processing at subsequent stages.

  4. Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.

    Science.gov (United States)

    Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin

    2017-02-01

    The current study investigates cognitive processes, as reflected in late auditory-evoked potentials, as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates, were investigated across four weeks. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention and memory updating is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Neural Entrainment to Auditory Imagery of Rhythms

    Directory of Open Access Journals (Sweden)

    Haruki Okawa

    2017-10-01

    A method of reconstructing perceived or imagined music by analyzing brain activity has not yet been established. As a first step toward developing such a method, we aimed to reconstruct the imagery of rhythm, which is one element of music. It has been reported that a periodic electroencephalogram (EEG) response is elicited while a human imagines a binary or ternary meter on a musical beat. However, it is not clear whether or not brain activity synchronizes with a fully imagined beat and meter without auditory stimuli. To investigate neural entrainment to imagined rhythm during auditory imagery of beat and meter, we recorded EEG while nine participants (eight males and one female) imagined three types of rhythm without auditory stimuli but with visual timing, and then we analyzed the amplitude spectra of the EEG. We also recorded EEG while the participants only gazed at the visual timing, as a control condition to confirm the visual effect. Furthermore, we derived features of the EEG using canonical correlation analysis (CCA) and conducted an experiment to individually classify the three types of imagined rhythm from the EEG. The results showed that classification accuracies exceeded the chance level in all participants. These results suggest that auditory imagery of meter elicits a periodic EEG response that changes at the imagined beat and meter frequency, even in the fully imagined conditions. This study represents a first step toward the realization of a method for reconstructing imagined music from brain activity.
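
    The classification step can be sketched in the style of SSVEP-type CCA decoding: for each candidate rhythm, build sine/cosine references at its beat or meter frequency (and harmonics), then pick the rhythm whose references share the highest canonical correlation with the multichannel EEG. The candidate frequencies and all names below are illustrative assumptions; the study's actual CCA features may differ.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def reference_set(freq_hz, fs, n_samples, n_harmonics=2):
        """Sine/cosine references at a meter frequency and its harmonics."""
        t = np.arange(n_samples) / fs
        refs = [f(2 * np.pi * h * freq_hz * t)
                for h in range(1, n_harmonics + 1)
                for f in (np.sin, np.cos)]
        return np.column_stack(refs)

    def classify_rhythm(eeg, fs, candidate_hz):
        """Return the candidate frequency whose references correlate best
        with the EEG. eeg: array of shape (n_samples, n_channels)."""
        scores = []
        for f in candidate_hz:
            cca = CCA(n_components=1)
            x_c, y_c = cca.fit_transform(eeg, reference_set(f, fs, len(eeg)))
            scores.append(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])
        return candidate_hz[int(np.argmax(scores))]

    # best = classify_rhythm(eeg_trial, fs=512, candidate_hz=[2.0, 3.0, 4.0])
    ```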

  6. Auditory-motor coupling affects phonetic encoding.

    Science.gov (United States)

    Schmidt-Kassow, Maren; Thöne, Katharina; Kaiser, Jochen

    2017-11-27

    Recent studies have shown that moving in synchrony with auditory stimuli boosts attention allocation and verbal learning. Furthermore, rhythmic tones are processed more efficiently than temporally random tones ('timing effect'), and this effect is increased when participants actively synchronize their motor performance with the rhythm of the tones, resulting in auditory-motor synchronization. Here, we investigated whether this also applies to sequences of linguistic stimuli (syllables). We compared temporally irregular syllable sequences with two temporally regular conditions in which either the interval between syllable onsets (stimulus onset asynchrony, SOA) or the interval between the syllables' vowel onsets was kept constant. Entrainment to the stimulus presentation frequency (1 Hz) and event-related potentials were assessed in 24 adults who were instructed to detect pre-defined deviant syllables while they either pedaled or sat still on a stationary exercise bike. We found larger 1 Hz entrainment and P300 amplitudes for the SOA presentation during motor activity. Furthermore, the magnitude of the P300 component correlated with motor variability in the SOA condition and with 1 Hz entrainment, while in turn 1 Hz entrainment correlated with auditory-motor synchronization performance. These findings demonstrate that acute auditory-motor coupling facilitates phonetic encoding. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. [Application of simultaneous auditory evoked potentials and functional magnetic resonance recordings for examination of central auditory system--preliminary results].

    Science.gov (United States)

    Milner, Rafał; Rusiniak, Mateusz; Wolak, Tomasz; Piatkowska-Janko, Ewa; Naumczyk, Patrycja; Bogorodzki, Piotr; Senderski, Andrzej; Ganc, Małgorzata; Skarzyński, Henryk

    2011-01-01

    Processing of auditory information in the central nervous system is based on a series of rapidly occurring neural processes that cannot be separately monitored using fMRI registration alone. Simultaneous recording of auditory evoked potentials, characterized by good temporal resolution, and functional magnetic resonance imaging, with excellent spatial resolution, allows higher auditory functions to be studied with precision in both time and space. The aim was to implement the simultaneous AEP-fMRI recording method for the investigation of information processing at different levels of the central auditory system. Five healthy volunteers, aged 22-35 years, participated in the experiment. The study was performed using a high-field (3T) MR scanner from Siemens and the 64-channel electrophysiological system Neuroscan from Compumedics. Auditory evoked potentials generated by acoustic stimuli (standard and deviant tones) were registered using a modified odd-ball procedure. Functional magnetic resonance recordings were performed using a sparse acquisition paradigm. The electrophysiological recordings were analyzed by determining the scalp voltage distributions of the AEPs and modeling their intracerebral bioelectrical generators (dipoles). fMRI activations were determined on the basis of deviant-to-standard and standard-to-deviant functional contrasts. The results of the electrophysiological recordings were then integrated with the functional outcomes. The morphology, amplitude, latency and voltage distribution of auditory evoked potentials (P1, N1, P2) to standard stimuli presented during simultaneous AEP-fMRI registration were very similar to the responses obtained outside the scanner room. Significant fMRI activations to standard stimuli were found mainly in the auditory cortex. Activations in these regions corresponded with the N1-wave dipoles modeled from auditory potentials generated by standard tones. Auditory evoked potentials to deviant stimuli were recorded only outside the MRI scanner.

  8. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

    Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  9. Human auditory evoked potentials. II - Effects of attention

    Science.gov (United States)

    Picton, T. W.; Hillyard, S. A.

    1974-01-01

    Attention directed toward auditory stimuli, in order to detect an occasional fainter 'signal' stimulus, caused a substantial increase in the N1 (83 msec) and P2 (161 msec) components of the auditory evoked potential without any change in preceding components. This evidence shows that human auditory attention is not mediated by a peripheral gating mechanism. The evoked response to the detected signal stimulus also contained a large P3 (450 msec) wave that was topographically distinct from the preceding components. This late positive wave could also be recorded in response to a detected omitted stimulus in a regular train and therefore seemed to index a stimulus-independent perceptual decision process.
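
    The N1/P2 amplitude measurements in studies of this kind rest on stimulus-locked epoch averaging; a minimal sketch of that step, with all parameter values assumed for illustration:

    ```python
    import numpy as np

    def erp_average(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
        """Average stimulus-locked epochs to expose ERP components such as
        N1 (~100 ms) and P2 (~160-200 ms), after baseline-correcting each
        epoch with its prestimulus interval.
        eeg: 1-D channel; onsets: sample indices of stimulus onsets."""
        i0, i1 = int(tmin * fs), int(tmax * fs)
        epochs = np.array([eeg[s + i0:s + i1] for s in onsets
                           if s + i0 >= 0 and s + i1 <= len(eeg)])
        epochs -= epochs[:, :-i0].mean(axis=1, keepdims=True)  # prestimulus baseline
        return epochs.mean(axis=0)
    ```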

  11. Functional sex differences in human primary auditory cortex.

    Science.gov (United States)

    Ruytjens, Liesbet; Georgiadis, Janniko R; Holstege, Gert; Wit, Hero P; Albers, Frans W J; Willemsen, Antoon T M

    2007-12-01

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies.

  12. The role of modality : Auditory and visual distractors in Stroop interference

    NARCIS (Netherlands)

    Elliott, Emily M.; Morey, Candice C.; Morey, Richard D.; Eaves, Sharon D.; Shelton, Jill Talley; Lutfi-Proctor, Danielle A.

    2014-01-01

    As a commonly used measure of selective attention, it is important to understand the factors contributing to interference in the Stroop task. The current research examined distracting stimuli in the auditory and visual modalities to determine whether the use of auditory distractors would create

  13. Direct Contribution of Auditory Motion Information to Sound-Induced Visual Motion Perception

    Directory of Open Access Journals (Sweden)

    Souta Hidaka

    2011-10-01

    We have recently demonstrated that alternating left-right sound sources induce motion perception in static visual stimuli along the horizontal plane (SIVM: sound-induced visual motion perception; Hidaka et al., 2009). The aim of the current study was to elucidate whether auditory motion signals, rather than auditory positional signals, can directly contribute to the SIVM. We presented static visual flashes at retinal locations outside the fovea together with a lateral auditory motion provided by a virtual stereo noise source smoothly shifting in the horizontal plane. The flashes appeared to move in a situation where auditory positional information would have little influence on the perceived position of the visual stimuli; the spatiotemporal position of the flashes was in the middle of the auditory motion trajectory. Furthermore, the auditory motion altered visual motion perception in a global motion display; in this display, different localized motion signals of multiple visual stimuli were combined to produce a coherent visual motion perception, so that there was no clear one-to-one correspondence between the auditory stimuli and each visual stimulus. These findings suggest the existence of direct interactions between the auditory and visual modalities in motion processing and motion perception.

  14. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  16. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute-episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute-episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general-linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute-state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
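
    The driving stimuli here are simple to specify: brief pulses repeating at the steady-state rate. A minimal sketch (click width, amplitude, and sampling rate are illustrative assumptions):

    ```python
    import numpy as np

    def click_train(rate_hz, duration_s, fs=44100, click_ms=0.5):
        """Unit-amplitude rectangular clicks repeating at rate_hz, the
        stimulus class used to drive auditory steady-state responses."""
        n = int(duration_s * fs)
        train = np.zeros(n)
        click_len = max(1, int(click_ms / 1000 * fs))
        for onset in np.arange(0, n, fs / rate_hz).astype(int):
            train[onset:onset + click_len] = 1.0
        return train

    stimuli = {hz: click_train(hz, duration_s=1.0) for hz in (20, 30, 40, 80)}
    ```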

  18. Methodological challenges and solutions in auditory functional magnetic resonance imaging.

    Science.gov (United States)

    Peelle, Jonathan E

    2014-01-01

    Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
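
    The gain of ISSS over traditional sparse imaging can be made concrete by comparing how many volumes each scheme collects per stimulus trial. Below is a minimal scheduling sketch; all timing values are hypothetical, and the quiet steady-state pulses that ISSS plays during the silent gap are represented only by a comment.

        import numpy as np

        def sparse_schedule(n_trials, silent_s=8.0, acq_s=2.0):
            """Traditional sparse imaging: one volume after each silent
            gap, with stimuli presented while the scanner is quiet."""
            starts, t = [], 0.0
            for _ in range(n_trials):
                t += silent_s
                starts.append(t)
                t += acq_s
            return np.array(starts)

        def isss_schedule(n_trials, silent_s=8.0, acq_s=2.0, vols=4):
            """ISSS: several rapid volumes follow each silent period;
            quiet excitation pulses keep longitudinal magnetization at
            steady state during the gap, so the volumes remain usable."""
            starts, t = [], 0.0
            for _ in range(n_trials):
                t += silent_s
                starts.extend(t + v * acq_s for v in range(vols))
                t += vols * acq_s
            return np.array(starts)

        print(sparse_schedule(2))  # one response sample per trial
        print(isss_schedule(2))    # four samples per trial, same silence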

  19. Anatomical Pathways for Auditory Memory in Primates

    Directory of Open Access Journals (Sweden)

    Monica Munoz-Lopez

    2010-10-01

    Episodic memory, or the ability to store context-rich information about everyday events, depends on the hippocampal formation (entorhinal cortex, subiculum, presubiculum, parasubiculum, hippocampus proper, and dentate gyrus). A substantial number of behavioral lesion and anatomical studies have contributed to our understanding of how visual stimuli are retained in episodic memory. However, whether auditory memory is organized similarly is still unclear. One hypothesis is that, like the 'visual ventral stream', for which the connections of the inferior temporal gyrus with the perirhinal cortex are necessary for visual recognition in monkeys, direct connections between the auditory association areas of the superior temporal gyrus and the hippocampal formation and parahippocampal region (temporal pole, perirhinal, and posterior parahippocampal cortices) might also underlie recognition memory for sounds. Alternatively, the anatomical organization of memory could be different in audition. This alternative 'indirect stream' hypothesis posits that, unlike the visual association cortex, most of the auditory association cortex makes one or more synapses in intermediate, polymodal areas, where auditory information may be integrated with other sensory modalities, before reaching the medial temporal memory system. This review considers anatomical studies that can support either one or both hypotheses, focusing on anatomical studies of the primate brain that have reported not only direct auditory association connections with medial temporal areas but, importantly, also possible indirect pathways for auditory information to reach the medial temporal lobe memory system.

  20. Perceptual Sensitivity and Response to Strong Stimuli Are Related

    Directory of Open Access Journals (Sweden)

    Anna C. Bolders

    2017-09-01

    To shed new light on the long-standing debate about the (in)dependence of sensitivity to weak stimuli and overreactivity to strong stimuli, we examined the relation between these tendencies within the neurobehavioral framework of the Predictive and Reactive Control Systems (PARCS) theory (Tops et al., 2010, 2014). Whereas previous studies only considered overreactivity in terms of the individual tendency to experience unpleasant affect (punishment reactivity) resulting from strong sensory stimulation, we also took into account the individual tendency to experience pleasant affect (reward reactivity) resulting from strong sensory stimulation. According to PARCS theory, these temperamental tendencies overlap in terms of high reactivity toward stimulation, but oppose each other in terms of response orientation (approach or avoid). PARCS theory predicts that both types of reactivity to strong stimuli relate to sensitivity to weak stimuli, but that these relationships are suppressed due to the opposing relationship between reward and punishment reactivity. We measured punishment and reward reactivity to strong stimuli and sensitivity to weak stimuli using scales from the Adult Temperament Questionnaire (Evans and Rothbart, 2007). Sensitivity was also measured more objectively using the masked auditory threshold. We found that sensitivity to weak stimuli (both self-reported and objectively assessed) was positively associated with self-reported punishment and reward reactivity to strong stimuli, but only when these reactivity measures were controlled for each other, implicating a mutual suppression effect. These results are in line with PARCS theory and suggest that sensitivity to weak stimuli and overreactivity are dependent, but that this dependency is likely to be obscured if punishment and reward reactivity are not both taken into account.
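
    The mutual suppression effect reported here is easy to reproduce in a toy simulation: when two predictors share variance with a criterion but load in opposite directions on a second factor, their zero-order correlations with the criterion are attenuated and grow once each predictor is controlled for the other. A minimal sketch with arbitrary weights (not fitted to the study's data):

        import numpy as np

        rng = np.random.default_rng(1)
        n = 1000

        sensitivity = rng.normal(size=n)
        orientation = rng.normal(size=n)  # latent approach-vs-avoid factor

        # Both reactivities load positively on sensitivity but oppositely
        # on the latent response-orientation factor (illustrative weights)
        punishment = 0.4 * sensitivity - orientation + 0.5 * rng.normal(size=n)
        reward = 0.4 * sensitivity + orientation + 0.5 * rng.normal(size=n)

        def corr(a, b):
            return np.corrcoef(a, b)[0, 1]

        def residualize(y, x):
            """Remove the part of y that is linearly predictable from x."""
            return y - np.polyfit(x, y, 1)[0] * x

        print("zero-order r:", round(corr(sensitivity, punishment), 2))
        print("partial r   :", round(corr(sensitivity,
                                          residualize(punishment, reward)), 2))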

  1. Efficacy of auditory training in elderly subjects

    Directory of Open Access Journals (Sweden)

    Aline Albuquerque Morais

    2015-05-01

    Auditory training (AT) has been used for auditory rehabilitation in elderly individuals and is an effective tool for optimizing speech processing in this population. However, it is necessary to distinguish training-related improvements from placebo and test-retest effects. Thus, we investigated the efficacy of short-term auditory training (acoustically controlled auditory training, ACAT) in elderly subjects through behavioral measures and P300. Sixteen elderly individuals with APD received an initial evaluation (evaluation 1, E1) consisting of behavioral and electrophysiological tests (P300 evoked by tone bursts and speech sounds) to evaluate their auditory processing. The individuals were divided into two groups. The Active Control Group [ACG (n=8)] underwent placebo training. The Passive Control Group [PCG (n=8)] did not receive any intervention. After 12 weeks, the subjects were re-evaluated (evaluation 2, E2). Then, all of the subjects underwent ACAT. Following another 12 weeks (8 training sessions), they underwent the final evaluation (evaluation 3, E3). There was no significant difference between E1 and E2 in the behavioral tests [F(9,6)=0.6, p=0.92, Wilks' λ=0.65] or P300 [F(8,7)=2.11, p=0.17, Wilks' λ=0.29], ruling out placebo and test-retest effects. A significant improvement was observed between the pre- and post-ACAT conditions (E2 and E3) for all auditory skills according to the behavioral methods [F(4,27)=0.18, p=0.94, Wilks' λ=0.97]. However, the same result was not observed for P300 in any condition. There was no significant difference between P300 stimuli. The ACAT improved the behavioral performance of the elderly for all auditory skills and was an effective method for hearing rehabilitation.

  2. Hemispheric asymmetry in the auditory facilitation effect in dual-stream rapid serial visual presentation tasks.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Takeshima

    Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry. While the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for the target identification task, and in the left visual field for the target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, target detection or discrimination performance for the second target is generally lower than for the first target; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved target identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved second-target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was observed in perceptual processing, depending on hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

  3. Exposure to virtual social stimuli modulates subjective pain reports.

    Science.gov (United States)

    Vigil, Jacob M; Torres, Daniel; Wolff, Alexander; Hughes, Katy

    2014-01-01

    Contextual factors, including the gender of researchers, influence experimental and patient pain reports. It is currently not known how social stimuli influence pain percepts, nor which sensory modalities of communication, such as auditory, visual or olfactory cues associated with person perception and gender processing, produce these effects. The aim was to determine whether exposure to two forms of social stimuli (audio and visual) from a virtual male or female stranger modulates cold pressor task (CPT) pain reports. Participants with similar demographic characteristics conducted a CPT in solitude, without the physical presence of an experimenter or another person. During the CPT, participants were exposed to the voice and image of a virtual male or female stranger. The voices had analogous vocal prosody, provided no semantic information (spoken in a foreign language) and differed only in pitch; the images depicted a middle-aged male or female health care practitioner. Male participants, but not females, showed higher CPT pain intensity when they were exposed to the female stimuli compared with the male stimuli. Follow-up analyses showed that the association between the social stimuli and variability in pain sensitivity was not moderated by individual differences in subjective (e.g., self-image) or objective measurements of one's physical stature. The findings show that exposure to virtual, gender-based auditory and visual social stimuli influences exogenous pain sensitivity. Further research on how contextual factors, such as the vocal properties of health care examiners and exposure to background voices, may influence momentary pain perception is necessary for creating more standardized methods for measuring patient pain reports in clinical settings.

  4. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in the perceptual problems of dyslexics. A contested research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in literate adults and children; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared preschool children with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than that of children without risk for dyslexia in the temporal tasks. There were no differences between groups in the auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptual processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptual deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems, and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptual deficits are not the consequence of failing to learn to read; thus, these findings support the theory of a temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Auditory cortex involvement in emotional learning and memory.

    Science.gov (United States)

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  6. Auditory responsive naming versus visual confrontation naming in dementia.

    Science.gov (United States)

    Miller, Kimberly M; Finney, Glen R; Meador, Kimford J; Loring, David W

    2010-01-01

    Dysnomia is typically assessed during neuropsychological evaluation through visual confrontation naming. Responsive naming to description, however, has been shown to have a more distributed representation in both fMRI and cortical stimulation studies. While naming deficits are common in dementia, the relative sensitivity of visual confrontation versus auditory responsive naming has not been directly investigated. The current study compared visual confrontation naming and auditory responsive naming in a dementia sample of mixed etiologies to examine patterns of performance across these naming tasks. A total of 50 patients with dementia of various etiologies were administered visual confrontation naming and auditory responsive naming tasks using stimuli that were matched in overall word frequency. Patients performed significantly worse on auditory responsive naming than visual confrontation naming. Additionally, patients with mixed Alzheimer's disease/vascular dementia performed more poorly on auditory responsive naming than did patients with probable Alzheimer's disease, although no group differences were seen on the visual confrontation naming task. Auditory responsive naming correlated with a larger number of neuropsychological tests of executive function than did visual confrontation naming. Auditory responsive naming appears to be more sensitive to the effects of increased lesion burden than visual confrontation naming. We believe that this reflects the more widespread topographical distribution of auditory naming sites within the temporal lobe, but it may also reflect the contributions of working memory and cognitive flexibility to performance.

  7. The auditory startle response in post-traumatic stress disorder

    NARCIS (Netherlands)

    Siegelaar, S. E.; Olff, M.; Bour, L. J.; Veelo, D.; Zwinderman, A. H.; van Bruggen, G.; de Vries, G. J.; Raabe, S.; Cupido, C.; Koelman, J. H. T. M.; Tijssen, M. A. J.

    Post-traumatic stress disorder (PTSD) patients are considered to have excessive EMG responses in the orbicularis oculi (OO) muscle and excessive autonomic responses to startling stimuli. The aim of the present study was to gain more insight into the pattern of the generalized auditory startle reflex

  8. Auditory Space Perception in Left- and Right-Handers

    Science.gov (United States)

    Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg

    2010-01-01

    Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…

  9. Functional imaging of the central auditory system using PET

    NARCIS (Netherlands)

    Ruytjens, L.; Willemsen, A. T. M.; Van Dijk, P.; Wit, H. P.; Albers, F. W. J.

    2006-01-01

    In the last few decades functional neuroimaging tools have emerged to study the function of the human brain in vivo. These techniques have increased the knowledge of how the brain processes stimuli of different sensory modalities, including auditory processing. Positron emission tomography (PET) has

  10. Context, Contrast, and Tone of Voice in Auditory Sarcasm Perception

    Science.gov (United States)

    Voyer, Daniel; Thibodeau, Sophie-Hélène; Delong, Breanna J.

    2016-01-01

    Four experiments were conducted to investigate the interplay between context and tone of voice in the perception of sarcasm. These experiments emphasized the role of contrast effects in sarcasm perception exclusively by means of auditory stimuli whereas most past research has relied on written material. In all experiments, a positive or negative…

  11. Lateralization of auditory rhythm length in temporal lobe lesions

    NARCIS (Netherlands)

    Alpherts, W.C.J.; Vermeulen, J.; Franken, M.L.O.; Hendriks, M.P.H.; Veelen, C.W.M. van; Rijen, P.C. van

    2002-01-01

    In the visual modality, short rhythmic stimuli have been proven to be better processed (sequentially) by the left hemisphere, while longer rhythms appear to be better (holistically) processed by the right hemisphere. This study was set up to see if the same holds in the auditory modality. The rhythm

  12. Evaluating conditioning of related and unrelated stimuli using a compound test.

    Science.gov (United States)

    Rescorla, Robert A

    2008-05-01

    Three experiments used a compound test procedure to evaluate whether superior conditioning results from the pairing of stimuli that are related to each other. In each case, a stimulus compound was tested after its component conditioned stimuli (CSs) had been conditioned by the same unconditioned stimuli (USs) arranged such that either related or unrelated CSs and USs were paired. Experiment 1 explored auditory and gustatory stimuli conditioned by LiCl or shock, using rats. Experiments 2 and 3 used second-order conditioning in pigeons to pair stimuli that were similar by virtue either of qualitative features or of shared physical location. In each case, the compound test provided clear evidence that pairing related stimuli produces superior associative learning.

  13. Auditory and visual novelty processing in normally-developing Kenyan children

    Science.gov (United States)

    Kihara, Michael; Hogan, Alexandra M.; Newton, Charles R.; Garrashi, Harrun H.; Neville, Brian R.; de Haan, Michelle

    2010-01-01

    Objective: To describe the normative development of the electrophysiological response to auditory and visual novelty in children living in rural Kenya. Methods: We examined event-related potentials (ERPs) elicited by novel auditory and visual stimuli in 178 normally-developing children aged 4-12 years (86 boys, mean 6.7 years, SD 1.8 years, and 92 girls, mean 6.6 years, SD 1.5 years) who were living in rural Kenya. Results: The latency of early components (auditory P1 and visual N170) decreased with age, and their amplitudes also tended to decrease with age. The changes in longer-latency components (auditory N2, P3a and visual Nc, P3a) were more modality-specific; the N2 amplitude to novel stimuli decreased with age and the auditory P3a increased in both latency and amplitude with age. The Nc amplitude decreased with age while visual P3a amplitude tended to increase, though not linearly. Conclusions: The changes in the timing and magnitude of early-latency ERPs likely reflect brain maturational processes. The age-related changes to auditory stimuli generally occurred later than those to visual stimuli, suggesting that visual processing matures faster than auditory processing. Significance: ERPs may be used to assess children's cognitive development in rural areas of Africa. PMID:20080442

  14. A visual or tactile signal makes auditory speech detection more efficient by reducing uncertainty.

    Science.gov (United States)

    Tjan, Bosco S; Chao, Ewen; Bernstein, Lynne E

    2014-04-01

    Acoustic speech is easier to detect in noise when the talker can be seen. This finding could be explained by integration of multisensory inputs or refinement of auditory processing from visual guidance. In two experiments, we studied two-interval forced-choice detection of an auditory 'ba' in acoustic noise, paired with various visual and tactile stimuli that were identically presented in the two observation intervals. Detection thresholds were reduced under the multisensory conditions vs. the auditory-only condition, even though the visual and/or tactile stimuli alone could not inform the correct response. Results were analysed relative to an ideal observer for which intrinsic (internal) noise and efficiency were independent contributors to detection sensitivity. Across experiments, intrinsic noise was unaffected by the multisensory stimuli, arguing against the merging (integrating) of multisensory inputs into a unitary speech signal, but sampling efficiency was increased to varying degrees, supporting refinement of knowledge about the auditory stimulus. The steepness of the psychometric functions decreased with increasing sampling efficiency, suggesting that the 'task-irrelevant' visual and tactile stimuli reduced uncertainty about the acoustic signal. Visible speech was not superior for enhancing auditory speech detection. Our results reject multisensory neuronal integration and speech-specific neural processing as explanations for the enhanced auditory speech detection under noisy conditions. Instead, they support a more rudimentary form of multisensory interaction: the otherwise task-irrelevant sensory systems inform the auditory system about when to listen. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
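
    The analysis framework described above treats sensitivity as jointly limited by internal (intrinsic) noise added to the decision variable and by the efficiency with which stimulus information is sampled. A toy Monte-Carlo version of a two-interval forced-choice energy observer (all parameter values are hypothetical) shows how raising sampling efficiency alone, with intrinsic noise held fixed, improves detection:

        import numpy as np

        rng = np.random.default_rng(2)

        def pc_2ifc(signal_energy, efficiency, intrinsic_sd,
                    external_sd=1.0, n_trials=20000):
            """Percent correct for a toy 2IFC energy observer."""
            # Decision variable per interval: evidence plus external
            # (stimulus) noise plus intrinsic (internal) noise
            sig = (efficiency * signal_energy
                   + rng.normal(0, external_sd, n_trials)
                   + rng.normal(0, intrinsic_sd, n_trials))
            noi = (rng.normal(0, external_sd, n_trials)
                   + rng.normal(0, intrinsic_sd, n_trials))
            return np.mean(sig > noi)

        # A multisensory cue that reduces uncertainty acts like a gain
        # in efficiency, without any change in intrinsic noise:
        for eff in (0.5, 0.7, 0.9):
            print(f"efficiency {eff}: P(correct) = {pc_2ifc(1.0, eff, 0.5):.3f}")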

  15. Neural Correlates of Realistic and Unrealistic Auditory Space Perception

    Directory of Open Access Journals (Sweden)

    Akiko Callan

    2011-10-01

    Binaural recordings can simulate externalized auditory space perception over headphones. However, if the orientation of the recorder's head and the orientation of the listener's head are incongruent, the simulated auditory space is not realistic. For example, if a person lying flat on a bed listens to an environmental sound that was recorded by microphones inserted in the ears of a person who was in an upright position, the sound simulates an auditory space rotated 90 degrees from the real-world horizontal axis. Our question was whether brain activation patterns differ between the unrealistic auditory space (i.e., the orientation of the listener's head and the orientation of the recorder's head are incongruent) and the realistic auditory space (i.e., the orientations are congruent). River sounds that were binaurally recorded either in a supine position or in an upright body position served as auditory stimuli. During fMRI experiments, participants listened to the stimuli and pressed one of two buttons indicating the direction of the water flow (horizontal/vertical). Behavioral results indicated that participants could not differentiate between the congruent and the incongruent conditions. However, neuroimaging results showed that the congruent condition activated the planum temporale significantly more than the incongruent condition.

  16. Auditory abnormalities in autism: toward functional distinctions among findings.

    Science.gov (United States)

    Kellerman, Gabriella R; Fan, Jin; Gorman, Jack M

    2005-09-01

    Recently, findings on a wide range of auditory abnormalities among individuals with autism have been reported. To date, functional distinctions among these varied findings are poorly established. Such distinctions should be of interest to clinicians and researchers alike given their potential therapeutic and experimental applications. This review suggests three general trends among these findings as a starting point for future analyses. First, studies of auditory perception of linguistic and social auditory stimuli among individuals with autism generally have found impaired perception versus normal controls. Such findings may correlate with impaired language and communication skills and social isolation observed among individuals with autism. Second, studies of auditory perception of pitch and music among individuals with autism generally have found enhanced perception versus normal controls. These findings may correlate with the restrictive and highly focused behaviors observed among individuals with autism. Third, findings on the auditory perception of non-linguistic, non-musical stimuli among autism patients resist any generalized conclusions. Ultimately, as some researchers have already suggested, the distinction between impaired global processing and enhanced local processing may prove useful in making sense of apparently discordant findings on auditory abnormalities among individuals with autism.

  17. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduction in the ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified and that the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modality or from distinct ones. Some studies suggest that aging especially increases distraction within a modality, while others suggest the increase is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (on most trials the same tone, the standard sound; on rare and unpredictable trials a burst of white noise, the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to the slowing of this type of attentional shift or a reduction in the cognitive control required to re-orient attention toward the target's modality.

  18. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
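
    Phase tracking of this kind is commonly quantified as inter-trial phase coherence (ITPC): the length of the mean unit phase vector across trials at each frequency, which approaches 1 when a band is reliably reset by the stimulus. A minimal synthetic-data sketch follows; the sampling rate, trial count and frequencies are assumptions, and a real MEG analysis would use band-limited filtering of epoched data rather than a raw FFT.

        import numpy as np

        fs = 500                        # sampling rate (Hz), assumed
        t = np.arange(0.0, 1.0, 1 / fs)
        n_trials = 50
        rng = np.random.default_rng(3)

        # Synthetic trials: a 5-Hz (theta) component phase-locked to
        # stimulus onset, plus random background activity
        trials = np.array([np.cos(2 * np.pi * 5 * t)
                           + rng.normal(0, 1.0, t.size)
                           for _ in range(n_trials)])

        spectra = np.fft.rfft(trials, axis=1)
        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        phases = spectra / np.abs(spectra)   # unit phase vectors

        itpc = np.abs(phases.mean(axis=0))   # inter-trial phase coherence

        for f in (5, 40):                    # theta vs low-gamma bin
            print(f"{f} Hz ITPC: {itpc[np.argmin(np.abs(freqs - f))]:.2f}")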

  19. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.

  20. When a photograph can be heard: vision activates the auditory cortex within 110 ms.

    Science.gov (United States)

    Proverbio, Alice Mado; D'Aniello, Guido Edoardo; Adorni, Roberta; Zani, Alberto

    2011-01-01

    As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to activate the sensation of sounds typically associated with what we are viewing. Thus, you could almost hear the neigh of Rodolfo Valentino's horse, even though the film was mute. Evidence is provided that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were associated (or not) with a given sound. Sound stimuli were discriminated from non-sound stimuli as early as 110 ms. SwLORETA reconstructions showed common activation of ventral stream areas for both types of stimuli and of the associative temporal cortex, at the earliest stage, only for sound stimuli. The primary auditory cortex (BA41) was also activated by sound images after approximately 200 ms.

  1. Action video games improve reading abilities and visual-to-auditory attentional shifting in English-speaking children with dyslexia

    National Research Council Canada - National Science Library

    Sandro Franceschini; Piergiorgio Trevisan; Luca Ronconi; Sara Bertoni; Susan Colmar; Kit Double; Andrea Facoetti; Simone Gori

    2017-01-01

    .... In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two...

  2. Transcranial Random Noise Stimulation (tRNS) Shapes the Processing of Rapidly Changing Auditory Information

    Directory of Open Access Journals (Sweden)

    Katharina S. Rufener

    2017-06-01

    Neural oscillations in the gamma range are the dominant rhythmic activation pattern in the human auditory cortex. These gamma oscillations are functionally relevant for the processing of rapidly changing acoustic information in both speech and non-speech sounds. Accordingly, there is a tight link between the temporal resolution ability of the auditory system and inherent neural gamma oscillations. Transcranial random noise stimulation (tRNS) has been demonstrated to specifically increase gamma oscillations in the human auditory cortex. However, neither the physiological mechanisms of tRNS nor the behavioral consequences of this intervention are completely understood. In the present study we stimulated the human auditory cortex bilaterally with tRNS while EEG was continuously measured. Modulations in the participants' temporal and spectral resolution ability were investigated by means of a gap detection task and a pitch discrimination task. Compared to sham, auditory tRNS increased the detection rate for near-threshold stimuli in the temporal domain only, while no such effect was present for the discrimination of spectral features. Behavioral findings were paralleled by reduced peak latencies of the P50 and N1 components of the auditory event-related potential (ERP), indicating an impact on early sensory processing. The facilitating effect of tRNS was limited to the processing of near-threshold stimuli, while stimuli clearly below and above the individual perception threshold were not affected by tRNS. This non-linear relationship between the signal-to-noise level of the presented stimuli and the effect of stimulation further qualifies stochastic resonance (SR) as the underlying mechanism of tRNS on auditory processing. Our results demonstrate a tRNS-related improvement in acoustic perception of time-critical auditory information and, thus, provide further evidence that auditory tRNS can amplify the resonance frequency of the auditory system.
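
    Stochastic resonance, the mechanism invoked above, can be demonstrated with a hard-threshold detector: a signal whose peaks sit just below threshold is transmitted best at an intermediate noise level and is lost again when noise dominates. A toy illustration in arbitrary units (not a model of tRNS physiology):

        import numpy as np

        rng = np.random.default_rng(4)
        fs = 1000
        t = np.arange(0.0, 1.0, 1 / fs)
        signal = 0.8 * np.sin(2 * np.pi * 10 * t)  # peaks below threshold
        threshold = 1.0

        def detection_score(noise_sd, n_runs=200):
            """Mean correlation between the subthreshold signal and the
            output of a hard-threshold detector across realizations."""
            scores = []
            for _ in range(n_runs):
                out = (signal + rng.normal(0, noise_sd, t.size)) > threshold
                if 0 < out.sum() < out.size:   # skip zero-variance outputs
                    scores.append(np.corrcoef(signal, out.astype(float))[0, 1])
                else:
                    scores.append(0.0)
            return float(np.mean(scores))

        for sd in (0.05, 0.3, 2.0):
            print(f"noise sd {sd}: score {detection_score(sd):.2f}")
        # The score peaks at the intermediate noise level: too little
        # noise and the signal never crosses threshold, too much and
        # the crossings no longer follow the signal.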

  3. Reduced object related negativity response indicates impaired auditory scene analysis in adults with autistic spectrum disorder

    Directory of Open Access Journals (Sweden)

    Veema Lodhia

    2014-02-01

    Auditory scene analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an object related negativity (ORN) in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.

  4. Experience-dependent enhancement of pitch-specific responses in the auditory cortex is limited to acceleration rates in normal voice range.

    Science.gov (United States)

    Krishnan, A; Gandour, J T; Suresh, C H

    2015-09-10

    The aim of this study is to determine how pitch acceleration rates within and outside the normal pitch range may influence the latency and amplitude of cortical pitch-specific responses (CPR) as a function of language experience (Chinese, English). Responses were elicited from a set of four pitch stimuli chosen to represent a range of acceleration rates (two each inside and outside the normal voice range) imposed on the high rising Mandarin Tone 2. Pitch-relevant neural activity, as reflected in the latency and amplitude of scalp-recorded CPR components, varied depending on language experience and the pitch acceleration of dynamic, time-varying pitch contours. Peak latencies of CPR components were shorter in the Chinese than in the English group across stimuli. Chinese participants showed greater amplitude than English participants for CPR components at both frontocentral and temporal electrode sites in response to pitch contours with acceleration rates inside the normal voice pitch range, as compared to pitch contours with acceleration rates that exceed the normal range. As indexed by CPR amplitude at the temporal sites, a rightward asymmetry was observed for the Chinese group only. Only over the right temporal site was amplitude greater in the Chinese group relative to the English group. These findings may suggest that the neural mechanisms underlying the processing of pitch in the right auditory cortex reflect experience-dependent modulation of sensitivity to acceleration in just those rising pitch contours that fall within the bounds of one's native language. More broadly, the enhancement of native pitch stimuli and the stronger rightward asymmetry of CPR components in the Chinese group are consistent with the notion that long-term experience shapes adaptive, distributed hierarchical pitch processing in the auditory cortex, and reflect an interaction with higher order, extrasensory processes beyond the sensory memory trace. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  5. Cigarette smoking as a risk factor for auditory problems.

    Science.gov (United States)

    Paschoal, Carolina Pamplona; Azevedo, Marisa Frasson de

    2009-01-01

    Smoking is a public health concern, and its relation to auditory problems is still unclear. The aim was to study the effects of cigarette smoking on auditory thresholds, on otoacoustic emissions and on their inhibition by the medial olivocochlear efferent system. 144 adults of both genders, between 20 and 31 years of age, smokers and non-smokers, underwent conventional and high-frequency audiometry, transient-stimulus otoacoustic emissions and suppression effect investigation. Smokers presented worse auditory thresholds at 12,500 Hz in the right ear and 14,000 Hz in both ears. Regarding the otoacoustic emissions, the smokers group presented a lower response level at 1,000 Hz in both ears and 4,000 Hz in the left ear. Among smokers there were more cases of cochlear dysfunction and tinnitus. Our results suggest that cigarette smoking has an adverse effect on the auditory system.

  6. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues… …of training, which seemed to be specific to the stimuli containing resolved harmonics. Finally, a functional magnetic resonance imaging paradigm was used to examine the response of the auditory cortex to resolved and unresolved harmonics in musicians and non-musicians. The neural responses in musicians were enhanced relative to the non-musicians for both resolved and unresolved harmonics in the right auditory cortex, right frontal regions and inferior colliculus. However, the increase in neural activation in the right auditory cortex of musicians was predictive of the increased pitch…

  7. Hemispheric asymmetries for visual and auditory temporal processing: an evoked potential study.

    Science.gov (United States)

    Nicholls, Michael E R; Gora, John; Stough, Con K K

    2002-04-01

    Lateralization for temporal processing was investigated using evoked potentials to an auditory and a visual gap detection task in 12 dextral adults. The auditory stimuli consisted of 300-ms bursts of white noise, half of which contained an interruption lasting 4 or 6 ms. The visual stimuli consisted of 130-ms flashes of light, half of which contained a gap lasting 6 or 8 ms. The stimuli were presented bilaterally to both ears or both visual fields. Participants made a two-alternative forced-choice discrimination using a bimanual response. Manipulations of the task had no effect on the early evoked components. However, an effect was observed for a late positive component, which occurred approximately 300-400 ms following gap presentation. This component tended to be later and lower in amplitude for the more difficult stimulus conditions. An index of the capacity to discriminate gap from no-gap stimuli was gained by calculating the difference waveform between these conditions. The peak of the difference waveform was delayed for the short-gap stimuli relative to the long-gap stimuli, reflecting the decreased level of difficulty associated with the latter stimuli. Topographic maps of the difference waveforms revealed a prominence over the left hemisphere. The visual stimuli had an occipito-parietal focus whereas the auditory stimuli were parietally centered. These results confirm the importance of the left hemisphere for temporal processing and demonstrate that the effect is not the result of a hemispatial attentional bias or a peripheral sensory asymmetry.
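
    The difference-waveform index described above is simply a point-by-point subtraction of the two condition averages followed by peak measurement. A compact sketch with synthetic averaged ERPs (the latencies, amplitudes and sampling rate are invented for illustration):

        import numpy as np

        fs = 250                                  # sampling rate (Hz)
        t = np.arange(-0.1, 0.8, 1 / fs)          # epoch time (s)
        rng = np.random.default_rng(7)

        # Hypothetical condition averages (microvolts): a late positive
        # component that is larger and slightly later on gap trials
        base = rng.normal(0, 0.2, t.size)
        no_gap = base + 2.0 * np.exp(-((t - 0.30) ** 2) / 0.002)
        gap = base + 3.0 * np.exp(-((t - 0.35) ** 2) / 0.002)

        # The difference waveform isolates gap-related processing; its
        # peak latency indexes discrimination difficulty
        diff = gap - no_gap
        k = np.argmax(diff)
        print(f"difference-wave peak: {t[k] * 1000:.0f} ms, {diff[k]:.2f} uV")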

  8. BAER - brainstem auditory evoked response

    Science.gov (United States)

    Alternative names: auditory potentials; brainstem auditory evoked potentials; evoked response audiometry; auditory brainstem response; ABR; BAEP. Normal results vary. Results will depend on the person and the instruments used to perform the test.

  9. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  10. TypingSuite: Integrated Software for Presenting Stimuli, and Collecting and Analyzing Typing Data

    Science.gov (United States)

    Mazerolle, Erin L.; Marchand, Yannick

    2015-01-01

    Research into typing patterns has broad applications in both psycholinguistics and biometrics (i.e., improving security of computer access via each user's unique typing patterns). We present a new software package, TypingSuite, which can be used for presenting visual and auditory stimuli, collecting typing data, and summarizing and analyzing the…

  11. Catalysis with hierarchical zeolites

    DEFF Research Database (Denmark)

    Holm, Martin Spangsberg; Taarning, Esben; Egeblad, Kresten

    2011-01-01

    topic. Until now, the main reason for developing hierarchical zeolites has been to achieve heterogeneous catalysts with improved performance but this particular facet has not yet been reviewed in detail. Thus, the present paper summaries and categorizes the catalytic studies utilizing hierarchical...

  12. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition. © 2012 New York Academy of Sciences.
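
    The resonance the model relies on is usually captured by oscillators poised near a Hopf bifurcation, which respond selectively and nonlinearly to driving frequencies at and near their natural frequency. A single-oscillator sketch follows (forward-Euler integration; the parameters are illustrative, and this is not the authors' full network model):

        import numpy as np

        # Hopf-type oscillator: dz/dt = z*(alpha + i*2*pi*f0 + beta*|z|^2) + s(t)
        alpha, beta = -1.0, -1.0   # damping and amplitude saturation
        f0 = 4.0                   # natural frequency (Hz)
        fs = 1000
        t = np.arange(0.0, 4.0, 1 / fs)

        def steady_amplitude(f_drive, gain=0.5):
            s = gain * np.exp(2j * np.pi * f_drive * t)  # sinusoidal drive
            z = 0.0 + 0.0j
            amps = np.empty(t.size)
            for n in range(t.size):                      # forward Euler
                dz = z * (alpha + 2j * np.pi * f0 + beta * abs(z) ** 2) + s[n]
                z += dz / fs
                amps[n] = abs(z)
            return amps[2 * fs:].mean()                  # after transients

        for f in (2.0, 4.0, 8.0):
            print(f"drive {f} Hz -> amplitude {steady_amplitude(f):.3f}")
        # The response is largest when the drive matches the natural
        # frequency, the nonlinear resonance the theory builds on.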

  13. [Auditory processing in specific language disorder].

    Science.gov (United States)

    Idiazábal-Aletxa, M A; Saperas-Rodríguez, M

    2008-01-01

    Specific language impairment (SLI) is diagnosed when a child has difficulty in producing or understanding spoken language for no apparent reason. The diagnosis is made when language development is out of keeping with other aspects of development and possible explanatory causes have been excluded. In recent years, neuroscience has turned to the study of SLI. The ability to process two or more rapidly presented, successive auditory stimuli is believed to underlie successful language acquisition. It has been proposed that SLI is the consequence of low-level abnormalities in auditory perception. Moreover, children with SLI show a specific deficit in the automatic discrimination of syllables. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. There is much controversy about the role of such deficits in causing the language problems of these children, and it has been difficult to establish solid, replicable findings in this area because of the heterogeneity of the population and because insufficient attention has been paid to maturational aspects of auditory processing.

  14. Recency and suffix effects with immediate recall of olfactory stimuli.

    Science.gov (United States)

    Miles, C; Jenkins, R

    2000-05-01

    In contrast to our understanding of the immediate recall of auditory and visual material, little is known about the corresponding characteristics of short-term olfactory memory. The current study investigated the pattern of immediate serial recall and the associated suffix effect using olfactory stimuli. Subjects were trained initially to identify and correctly name nine different odours. Experiment 1 established an immediate correct recall span of approximately six items. In Experiment 2, participants serially recalled span-equivalent lists that were followed by a visual, auditory, or olfactory suffix. Primacy was evident in the recall curves for all three suffix conditions. Recency, in contrast, was evident in the auditory and visual suffix conditions only; there was a strong suffix effect in the olfactory suffix condition. Experiment 3 replicated this pattern of effects using seven-item lists, and demonstrated that the magnitude of the recency and suffix effects obtained in the olfactory modality can equate to that obtained in the auditory modality. It is concluded that the pattern of recency and suffix effects in the olfactory modality is reliable, and poses difficulties for theories that rely on the presence of a primary linguistic code, sound, or changing state as determinants of these effects in serial recall.

  15. A hardware model of the auditory periphery to transduce acoustic signals into neural activity

    Directory of Open Access Journals (Sweden)

    Takashi eTateno

    2013-11-01

    To improve the performance of cochlear implants, we have integrated a microdevice into a model of the auditory periphery with the goal of creating a microprocessor. We constructed an artificial peripheral auditory system using a hybrid model in which polyvinylidene difluoride was used as a piezoelectric sensor to convert mechanical stimuli into electric signals. To produce frequency selectivity, the slit on a stainless steel base plate was designed such that the local resonance frequency of the membrane over the slit reflected the transfer function. In the acoustic sensor, electric signals were generated based on the piezoelectric effect from local stress in the membrane. The electrodes on the resonating plate produced relatively large electric output signals. The signals were fed into a computer model that mimicked some functions of inner hair cells, inner hair cell-auditory nerve synapses, and auditory nerve fibers. In general, the responses of the model to pure-tone bursts and complex stimuli accurately represented the discharge rates of high-spontaneous-rate auditory nerve fibers across a range of frequencies greater than 1 kHz and middle to high sound pressure levels. Thus, the model provides a tool to understand information processing in the peripheral auditory system and a basic design for connecting artificial acoustic sensors to the peripheral auditory nervous system. Finally, we discuss the need for stimulus control with an appropriate model of the auditory periphery based on auditory brainstem responses that were electrically evoked by different temporal pulse patterns with the same pulse number.
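
    The inner-hair-cell and auditory-nerve stages mentioned above are often approximated phenomenologically: half-wave rectification and low-pass filtering of the mechanical input yield a receptor potential, which then drives stochastic spike generation. The sketch below shows that generic scheme; the cutoff, gain and spontaneous rate are assumed values, not those of the authors' model.

        import numpy as np

        fs = 20000
        t = np.arange(0.0, 0.05, 1 / fs)
        stimulus = np.sin(2 * np.pi * 1000 * t)   # 1-kHz tone burst

        # Inner-hair-cell stage: half-wave rectification plus a simple
        # one-pole low-pass filter (membrane smoothing)
        rectified = np.maximum(stimulus, 0.0)
        lp = np.zeros_like(rectified)
        a = np.exp(-2 * np.pi * 600 / fs)         # ~600-Hz cutoff, assumed
        for n in range(1, rectified.size):
            lp[n] = a * lp[n - 1] + (1 - a) * rectified[n]

        # Auditory-nerve stage: inhomogeneous Poisson spikes driven by
        # the receptor potential on top of a spontaneous rate
        spont, gain = 60.0, 400.0                 # spikes/s, assumed
        rate = spont + gain * lp
        rng = np.random.default_rng(5)
        spikes = rng.random(rate.size) < rate / fs

        print(f"mean firing rate: {spikes.sum() / t[-1]:.0f} spikes/s")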

  16. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific, songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  17. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  18. Stochastic undersampling steepens auditory threshold/duration functions: Implications for understanding auditory deafferentation and aging

    Directory of Open Access Journals (Sweden)

    Frederic eMarmel

    2015-05-01

    Full Text Available It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accounted for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies showing that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to varying degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited in the stimulus waveforms, and not on its effects on the overall stimulus energy. Stochastic undersampling impaired the detection of short sounds, whereas the detection of longer sounds (50 ms and above) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near) normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be 'slower than normal', as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time.
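
    The stochastic-undersampling manipulation lends itself to a short sketch: randomly delete samples of a broadband noise, then rescale so the degraded and intact stimuli carry the same overall energy. The sampling rate, duration, and deletion probability below are illustrative assumptions, not the study's parameters:

        # Sketch of stochastic undersampling of a broadband noise, with the
        # degraded and intact stimuli equated for overall energy, following
        # the manipulation described above.
        import numpy as np

        rng = np.random.default_rng(1)
        fs, dur = 44100, 0.010                     # a 10-ms broadband noise burst
        noise = rng.standard_normal(int(fs * dur))

        def undersample(x, p_keep, rng):
            """Keep each sample with probability p_keep (zero the rest),
            then rescale so overall energy matches the original."""
            y = x * (rng.random(x.size) < p_keep)
            return y * np.sqrt(np.sum(x**2) / np.sum(y**2))

        degraded = undersample(noise, p_keep=0.3, rng=rng)
        print(np.isclose(np.sum(noise**2), np.sum(degraded**2)))  # True: energy equated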

  19. Interactions between stimulus-specific adaptation and visual auditory integration in the forebrain of the barn owl.

    Science.gov (United States)

    Reches, Amit; Netser, Shai; Gutfreund, Yoram

    2010-05-19

    Neural adaptation and visual-auditory integration are two well-studied and common phenomena in the brain, yet little is known about the interaction between them. In the present study, we investigated a visual forebrain area in barn owls, the entopallium (E), which has been shown recently to encompass auditory responses as well. Responses of neurons to sequences of visual, auditory, and bimodal (visual and auditory together) events were analyzed. Sequences comprised two stimuli, one with a low probability of occurrence and the other with a high probability. Neurons in the E tended to respond more strongly to low probability visual stimuli than to high probability stimuli. Such a phenomenon is known as stimulus-specific adaptation (SSA) and is considered to be a neural correlate of change detection. Responses to the corresponding auditory sequences did not reveal an equivalent tendency. Interestingly, however, SSA to bimodal events was stronger than to visual events alone. This enhancement was apparent when the visual and auditory stimuli were presented from matching locations in space (congruent) but not when the bimodal stimuli were spatially incongruent. These findings suggest that the ongoing task of detecting unexpected events can benefit from the integration of visual and auditory information.

  20. Single-unit Analysis of Somatosensory Processing in Core Auditory Cortex of Hearing Ferrets

    Science.gov (United States)

    Meredith, M. Alex; Allman, Brian L.

    2014-01-01

    The recent findings in several species that primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (the anterior auditory field, AAF, and the primary auditory field, A1) for tactile responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded histologically verified neurons (n=311) tested with electronically controlled auditory, visual and tactile stimuli and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone, and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect observed, which occurred in all neuron types, was suppression of the response to a concurrent auditory cue. The presence of tactile effects in core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in auditory cortex are not exclusively visual, that somatosensation plays a significant role in the modulation of acoustic processing, and that crossmodal plasticity following deafness may unmask these existing non-auditory functions. PMID:25728185

  1. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...

  3. Evaluating the influence of the 'unity assumption' on the temporal perception of realistic audiovisual stimuli.

    Science.gov (United States)

    Vatakis, Argiro; Spence, Charles

    2008-01-01

    Vatakis, A. and Spence, C. (in press) [Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli. Perception & Psychophysics] recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.

  4. Parallel Temporal Dynamics in Hierarchical Cognitive Control

    Science.gov (United States)

    Ranti, Carolyn; Chatham, Christopher H.; Badre, David

    2015-01-01

    Cognitive control allows us to follow abstract rules in order to choose appropriate responses given our desired outcomes. Cognitive control is often conceptualized as a hierarchical decision process, wherein decisions made at higher, more abstract levels of control asymmetrically influence lower-level decisions. These influences could evolve sequentially across multiple levels of a hierarchical decision, consistent with much prior evidence for central bottlenecks and seriality in decision-making processes. However, here, we show that multiple levels of hierarchical cognitive control are processed primarily in parallel. Human participants selected responses to stimuli using a complex, multiply contingent (third order) rule structure. A response deadline procedure allowed assessment of the accuracy and timing of decisions made at each level of the hierarchy. In contrast to a serial decision process, error rates across levels of the decision mostly declined simultaneously and at identical rates, with only a slight tendency to complete the highest level decision first. Simulations with a biologically plausible neural network model demonstrate how such parallel processing could emerge from a previously developed hierarchically nested frontostriatal architecture. Our results support a parallel processing model of cognitive control, in which uncertainty on multiple levels of a decision is reduced simultaneously. PMID:26051820

  5. Electrophysiological correlates of predictive coding of auditory location in the perception of natural audiovisual events

    Directory of Open Access Journals (Sweden)

    Jeroen Stekelenburg

    2012-05-01

    Full Text Available In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.

  6. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations

    Directory of Open Access Journals (Sweden)

    Gabriella Musacchia

    2017-08-01

    Full Text Available Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4 to 7 months of age, confers a processing advantage compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4–6 Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33–37 Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping.
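
    Band-limited activity such as the Theta (4–6 Hz) and Gamma (33–37 Hz) responses reported here is commonly quantified by bandpass filtering followed by a Hilbert envelope. The sketch below shows that generic approach on a synthetic signal; it is not the study's analysis pipeline, and the sampling rate is an assumption:

        # Generic band-power computation (bandpass filter + Hilbert envelope),
        # one common way to quantify Theta (4-6 Hz) or Gamma (33-37 Hz) activity.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def band_power(x, fs, lo, hi, order=4):
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            envelope = np.abs(hilbert(filtfilt(b, a, x)))  # instantaneous amplitude
            return np.mean(envelope ** 2)                  # mean band-limited power

        fs = 250.0                                         # hypothetical sampling rate
        t = np.arange(0.0, 2.0, 1.0 / fs)
        eeg = np.sin(2 * np.pi * 5.0 * t) + 0.2 * np.random.default_rng(1).standard_normal(t.size)
        print(band_power(eeg, fs, 4.0, 6.0))               # large: the 5 Hz tone is in band
        print(band_power(eeg, fs, 33.0, 37.0))             # small: only noise in band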

  7. Magnitude judgments of loudness change for discrete, dynamic, and hybrid stimuli.

    Science.gov (United States)

    Pastore, Richard E; Flint, Jesse

    2011-04-01

    Recent investigations of loudness change within stimuli have identified differences as a function of direction of change and power range (e.g., Canévet, Acustica, 62, 2136-2142, 1986; Neuhoff, Nature, 395, 123-124, 1998), with claims of differences between dynamic and static stimuli. Experiment 1 provides the needed direct empirical evaluation of loudness change across static, dynamic, and hybrid stimuli. Consistent with recent findings for dynamic stimuli, quantitative and qualitative differences in pattern of loudness change were found as a function of power change direction. With identical patterns of loudness change, only quantitative differences were found across stimulus type. In Experiment 2, Points of Subjective loudness Equality (PSE) provided additional information about loudness judgments for the static and dynamic stimuli. Because the quantitative differences across stimulus type exceed the magnitude that could be expected based upon temporal integration by the auditory system, other factors need to be, and are, considered.

  8. Propofol disrupts functional interactions between sensory and high-order processing of auditory verbal memory.

    Science.gov (United States)

    Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G

    2012-10-01

    Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process by which low-order neural representations of sensory stimuli become integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For the connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.

  9. Cross-Modal Functional Reorganization of Visual and Auditory Cortex in Adult Cochlear Implant Users Identified with fNIRS

    Directory of Open Access Journals (Sweden)

    Ling-Chia Chen

    2016-01-01

    Full Text Available Cochlear implant (CI) users show higher auditory-evoked activations in visual cortex and higher visual-evoked activation in auditory cortex compared to normal hearing (NH) controls, reflecting functional reorganization of both visual and auditory modalities. Visual-evoked activation in auditory cortex is a maladaptive functional reorganization, whereas auditory-evoked activation in visual cortex is beneficial for speech recognition in CI users. We investigated their joint influence on CI users' speech recognition by testing 20 postlingually deafened CI users and 20 NH controls with functional near-infrared spectroscopy (fNIRS). Optodes were placed over occipital and temporal areas to measure visual and auditory responses when presenting visual checkerboard and auditory word stimuli. Higher cross-modal activations were confirmed in both auditory and visual cortex for CI users compared to NH controls, demonstrating that functional reorganization of both auditory and visual cortex can be identified with fNIRS. Additionally, the combined reorganization of auditory and visual cortex was found to be associated with speech recognition performance. Speech performance was good as long as the beneficial auditory-evoked activation in visual cortex was higher than the visual-evoked activation in the auditory cortex. These results indicate the importance of considering cross-modal activations in both visual and auditory cortex for potential clinical outcome estimation.

  10. Auditory and multisensory responses in the tectofugal pathway of the barn owl.

    Science.gov (United States)

    Reches, Amit; Gutfreund, Yoram

    2009-07-29

    A common visual pathway in all amniotes is the tectofugal pathway connecting the optic tectum with the forebrain. The tectofugal pathway has been suggested to be involved in tasks such as orienting and attention, tasks that may benefit from integrating information across senses. Nevertheless, previous research has characterized the tectofugal pathway as strictly visual. Here we recorded from two stations along the tectofugal pathway of the barn owl: the thalamic nucleus rotundus (nRt) and the forebrain entopallium (E). We report that neurons in E and nRt respond to auditory stimuli as well as to visual stimuli. Visual tuning to the horizontal position of the stimulus and auditory tuning to the corresponding spatial cue (interaural time difference) were generally broad, covering a large portion of the contralateral space. Responses to spatiotemporally coinciding multisensory stimuli were mostly enhanced above the responses to the single modality stimuli, whereas spatially misaligned stimuli were not. Results from inactivation experiments suggest that the auditory responses in E are of tectal origin. These findings support the notion that the tectofugal pathway is involved in multisensory processing. In addition, the findings suggest that the ascending auditory information to the forebrain is not as bottlenecked through the auditory thalamus as previously thought.

  11. Auditory Modulation of Somatosensory Spatial Judgments in Various Body Regions and Locations

    Directory of Open Access Journals (Sweden)

    Yukiomi Nozoe

    2011-10-01

    Full Text Available The spatial modulation effect has been reported in somatosensory spatial judgments when task-irrelevant auditory stimuli are presented from the opposite direction. Two experiments examined how the spatial modulation effect on somatosensory spatial judgments is altered across body regions and their spatial locations. In Experiment 1, air-puffs were presented randomly to either the left or right cheek, hand (palm versus back) or knee, while auditory stimuli were presented from just behind the ear on either the same or the opposite side. In Experiment 2, air-puffs were presented to the hands, which were either beside the cheeks or placed on the knees. The participants were instructed to make speeded discrimination responses regarding the side (left versus right) of the somatosensory targets using two foot pedals. In all conditions, reaction times increased significantly when the irrelevant stimuli were presented from the opposite side rather than from the same side. We found that the backs of the hands were more influenced by incongruent auditory stimuli than the cheeks, knees and palms, and that the hands were more influenced by incongruent auditory stimuli when placed beside the cheeks than on the knees. These results indicate that the auditory-somatosensory interaction differs across body regions and their spatial locations.

  12. Brain metabolism during hallucination-like auditory stimulation in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Guillermo Horga

    Full Text Available Auditory verbal hallucinations (AVH) in schizophrenia are typically characterized by rich emotional content. Despite the prominent role of emotion in regulating normal perception, the neural interface between emotion-processing regions such as the amygdala and the auditory regions involved in perception remains relatively unexplored in AVH. Here, we studied brain metabolism using FDG-PET in 9 remitted patients with schizophrenia who had previously reported severe AVH during an acute psychotic episode and 8 matched healthy controls. Participants were scanned twice: (1) at rest and (2) during the perception of aversive auditory stimuli mimicking the content of AVH. Compared to controls, remitted patients showed an exaggerated response to the AVH-like stimuli in limbic and paralimbic regions, including the left amygdala. Furthermore, patients displayed abnormally strong connections between the amygdala and auditory regions of the cortex and thalamus, along with abnormally weak connections between the amygdala and the medial prefrontal cortex. These results suggest that abnormal modulation of the auditory cortex by limbic-thalamic structures might be involved in the pathophysiology of AVH and may potentially account for the emotional features that characterize hallucinatory percepts in schizophrenia.

  13. The impact of maternal smoking on fast auditory brainstem responses.

    Science.gov (United States)

    Kable, Julie A; Coles, Claire D; Lynch, Mary Ellen; Carroll, Julie

    2009-01-01

    Deficits in auditory processing have been posited as one of the underlying neurodevelopmental consequences of maternal smoking during pregnancy that lead to later language and reading deficits. Fast auditory brainstem responses (ABRs) were used to assess differences in the sensory processing of auditory stimuli among infants with varying degrees of prenatal cigarette exposure. Maternal reports of cigarette consumption and blood samples were collected in the hospital to assess exposure levels, and participants were then seen at 6 months. To participate in the study, all infants had to pass the newborn hearing exam or a clinically administered ABR and have no known health problems. After controlling for participant age, maternal smoking during pregnancy was negatively related to the latency of auditory brainstem responses. Of several potential covariates, only perinatal complications and maternal alcohol use were also related to the latency of the ABR responses, and maternal smoking level accounted for significant unique variance after controlling for these factors. These results suggest that maternal smoking may disrupt the sensory encoding of auditory stimuli.

  14. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  15. Biases in Visual, Auditory, and Audiovisual Perception of Space.

    Directory of Open Access Journals (Sweden)

    Brian Odegaard

    2015-12-01

    Full Text Available Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only...
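
    The Bayesian Causal Inference framework invoked above can be sketched as follows (in the spirit of Kording et al., 2007): compute the posterior probability that the two cues share a common cause, then average the one-cause and two-cause location estimates. The central prior loosely corresponds to the "pre-existing spatial biases for central locations" mentioned in the record; the noise levels and prior probability below are hypothetical, not fitted values from this study:

        # Sketch of Bayesian Causal Inference for audiovisual localization.
        # All parameter values are hypothetical placeholders.
        import numpy as np

        def bci_visual_estimate(xv, xa, sd_v=2.0, sd_a=8.0, sd_p=15.0, pc=0.5):
            vv, va, vp = sd_v**2, sd_a**2, sd_p**2
            # Likelihood of the cue pair under one cause (source integrated out,
            # central prior N(0, sd_p))
            var1 = vv * va + vv * vp + va * vp
            like1 = np.exp(-0.5 * ((xv - xa)**2 * vp + xv**2 * va + xa**2 * vv)
                           / var1) / (2 * np.pi * np.sqrt(var1))
            # ... and under two independent causes
            like2 = (np.exp(-0.5 * (xv**2 / (vv + vp) + xa**2 / (va + vp)))
                     / (2 * np.pi * np.sqrt((vv + vp) * (va + vp))))
            post_c1 = pc * like1 / (pc * like1 + (1 - pc) * like2)
            # Optimal location estimates under each causal structure
            s_c1 = (xv / vv + xa / va) / (1 / vv + 1 / va + 1 / vp)
            s_c2 = (xv / vv) / (1 / vv + 1 / vp)
            return post_c1 * s_c1 + (1 - post_c1) * s_c2  # model averaging

        print(bci_visual_estimate(xv=10.0, xa=14.0))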

  16. A novel hybrid auditory BCI paradigm combining ASSR and P300.

    Science.gov (United States)

    Kaongoen, Netiwit; Jo, Sungho

    2017-03-01

    Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients who have visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines the auditory steady-state response (ASSR) and a spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel; thus the system can utilize both features for classification. The proposed ASSR/P300 hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) on the binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can result in better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
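
    The bits/min figures quoted above follow from the standard Wolpaw information-transfer-rate formula. A minimal sketch, with the selection rate treated as an assumption (the paper specifies the actual trial timing):

        # Wolpaw information transfer rate: bits per selection from the number
        # of classes and the classification accuracy.
        import math

        def bits_per_selection(n_classes, p):
            if p >= 1.0:
                return math.log2(n_classes)
            return (math.log2(n_classes)
                    + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n_classes - 1)))

        bits = bits_per_selection(2, 0.8533)   # hybrid system, binary task
        print(bits)                            # ~0.40 bits per selection
        # At roughly 23 selections/min this gives about 9.1 bits/min, close to
        # the reported 9.11 bits/min; the actual timing is defined in the paper.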

  17. Temporal Integration of Auditory Stimulation and Binocular Disparity Signals

    Directory of Open Access Journals (Sweden)

    Marina Zannoli

    2011-10-01

    Full Text Available Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.

  18. Introduction into Hierarchical Matrices

    KAUST Repository

    Litvinenko, Alexander

    2013-12-05

    Hierarchical matrices allow us to reduce computational storage and cost from cubic to almost linear. This technique can be applied for solving PDEs, integral equations, matrix equations and approximation of large covariance and precision matrices.
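
    A toy illustration of the principle behind this storage reduction: an off-diagonal block of a matrix generated by a smooth kernel admits an accurate low-rank factorization, shrinking the O(n^2) stored entries of that block to O(kn). This is a numpy sketch of the idea only, not the H-matrix implementation this record describes:

        # Low-rank approximation of an off-diagonal kernel block via truncated SVD.
        import numpy as np

        n = 512
        x = np.linspace(0.0, 1.0, 2 * n)
        K = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))  # smooth kernel matrix
        block = K[:n, n:]                                  # off-diagonal block

        U, s, Vt = np.linalg.svd(block, full_matrices=False)
        k = int(np.sum(s / s[0] > 1e-8))                   # numerical rank
        approx = (U[:, :k] * s[:k]) @ Vt[:k]

        rel_err = np.linalg.norm(block - approx) / np.linalg.norm(block)
        print(k, rel_err)                                  # small k, tiny error
        print(block.size, k * (2 * n + 1))                 # dense vs. factored storage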

  19. Programming with Hierarchical Maps

    DEFF Research Database (Denmark)

    Ørbæk, Peter

    This report describes the hierarchical maps used as a central data structure in the Corundum framework. We describe its most prominent features, argue for its usefulness, and briefly describe some of the software prototypes implemented using the technology...

  20. Micromechanics of hierarchical materials

    DEFF Research Database (Denmark)

    Mishnaevsky, Leon, Jr.

    2012-01-01

    A short overview of micromechanical models of hierarchical materials (hybrid composites, biomaterials, fractal materials, etc.) is given. Several examples of the modeling of strength and damage in hierarchical materials are summarized, among them a 3D FE model of hybrid composites with nanoengineered matrix, a fiber bundle model of UD composites with hierarchically clustered fibers, and a 3D multilevel model of wood considered as a gradient, cellular material with layered composite cell walls. The main areas of research in micromechanics of hierarchical materials are identified, among them the investigations of the effects of load redistribution between reinforcing elements at different scale levels, of the possibilities to control different material properties and to ensure synergy of strengthening effects at different scale levels, and of the use of nanoreinforcement effects. The main future directions...

  1. Sparse representation of sounds in the unanesthetized auditory cortex.

    Directory of Open Access Journals (Sweden)

    Tomás Hromádka

    2008-01-01

    Full Text Available How do neuronal populations in the auditory cortex represent acoustic stimuli? Although sound-evoked neural responses in the anesthetized auditory cortex are mainly transient, recent experiments in the unanesthetized preparation have emphasized subpopulations with other response properties. To quantify the relative contributions of these different subpopulations in the awake preparation, we have estimated the representation of sounds across the neuronal population using a representative ensemble of stimuli. We used cell-attached recording with a glass electrode, a method for which single-unit isolation does not depend on neuronal activity, to quantify the fraction of neurons engaged by acoustic stimuli (tones, frequency-modulated sweeps, white-noise bursts, and natural stimuli) in the primary auditory cortex of awake head-fixed rats. We find that the population response is sparse, with stimuli typically eliciting high firing rates (>20 spikes/second) in less than 5% of neurons at any instant. Some neurons had very low spontaneous firing rates (<0.01 spikes/second). At the other extreme, some neurons had driven rates in excess of 50 spikes/second. Interestingly, the overall population response was well described by a lognormal distribution, rather than the exponential distribution that is often reported. Our results represent, to our knowledge, the first quantitative evidence for sparse representations of sounds in the unanesthetized auditory cortex. Our results are compatible with a model in which most neurons are silent much of the time, and in which representations are composed of small dynamic subsets of highly active neurons.
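
    The distributional claim above (lognormal rather than exponential population firing rates) suggests a simple model comparison. The sketch below fits both candidate distributions to simulated placeholder rates and compares log-likelihoods; it is illustrative only, not the paper's analysis:

        # Fit lognormal and exponential models to per-neuron firing rates and
        # compare their log-likelihoods. Rates are simulated placeholders.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        rates = rng.lognormal(mean=-1.0, sigma=1.5, size=300)  # spikes/s, synthetic

        ll_logn = np.sum(stats.lognorm.logpdf(rates, *stats.lognorm.fit(rates, floc=0)))
        ll_expo = np.sum(stats.expon.logpdf(rates, *stats.expon.fit(rates, floc=0)))
        print(ll_logn, ll_expo)   # the better model has the higher log-likelihood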

  2. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent work has shown that Parkinson's disease (PD) patients can benefit substantially from performing rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, these experiments have all been based on the use of a metronome as the auditory stimulus. In this work, Human-Computer Interaction methodologies were used to design new cues that could support the long-term engagement of PD patients in these repetitive routines. The study was also extended to commercial music and musical pieces by analyzing features and characteristics that could benefit the engagement of PD patients in rehabilitation tasks.

  3. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  4. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT was recently introduced in the United States and has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  5. The power of auditory-motor synchronization in sports: Enhancing running performance by coupling cadence with the right beats

    NARCIS (Netherlands)

    Bood, R.J.; Nijssen, M; van der Kamp, J.; Roerdink, M.

    2013-01-01

    Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our...

  6. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI.

    Science.gov (United States)

    Zhou, Sijie; Allison, Brendan Z; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  7. Effects of background music on objective and subjective performance measures in an auditory BCI

    Directory of Open Access Journals (Sweden)

    Sijie Zhou

    2016-10-01

    Full Text Available Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory and other BCIs are typically studied without background music. Some work has explored the possibility of using polyphonic music in auditory BCI systems. However, this approach requires users with good musical skills, and has not been explored in online experiments. Our hypothesis was that an auditory BCI with background music would be preferred by subjects over a similar BCI without background music, without any difference in BCI performance. We introduce a simple paradigm (which does not require musical skill) using percussion instrument sound stimuli and background music, and evaluated it in both offline and online experiments. The results showed that subjects preferred the auditory BCI with background music. Different performance measures did not reveal any significant performance effect when comparing background music vs. no background music. Since the addition of background music does not impair BCI performance but is preferred by users, auditory (and perhaps other) BCIs should consider including it. Our study also indicates that auditory BCIs can be effective even if the auditory channel is simultaneously otherwise engaged.

  8. Using visual stimuli to enhance gait control.

    Science.gov (United States)

    Rhea, Christopher K; Kuznetsov, Nikita A

    2017-01-01

    Gait control challenges commonly coincide with vestibular dysfunction, and there is a long history of using balance and gait activities to enhance functional mobility in this population. While much has been learned using traditional rehabilitation exercises, there is a new line of research emerging that is using visual stimuli in a very specific way to enhance gait control. For example, avatars can be created in an individualized manner to incorporate specific gait characteristics. The avatar could then be used as a visual stimulus to which the patient can synchronize their own gait cycle. This line of research builds upon the rich history of sensorimotor control research in which augmented sensory information (visual, haptic, or auditory) is used to probe, and even enhance, human motor control. This review paper focuses on gait control challenges in patients with vestibular dysfunction, provides a brief historical perspective on how various visual displays have been used to probe sensorimotor and gait control, and offers some recommendations for future research.

  9. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    Full Text Available BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting antennal auditory organ (Johnston's Organ, JO) organization [8]. ck/MyoVIIA mutations result in apical detachment of auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with the non-muscle myosin II (through its regulatory light chain protein) and the myosin-binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  10. Tuning in to the voices: a multisite FMRI study of auditory hallucinations.

    Science.gov (United States)

    Ford, Judith M; Roach, Brian J; Jorgensen, Kasper W; Turner, Jessica A; Brown, Gregory G; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S; Lim, Kelvin O; Glover, Gary; Potkin, Steven G; Mathalon, Daniel H

    2009-01-01

    Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically "tuned" to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Although "voices" are the anticipated sensory experience, it appears that even primary auditory cortex is "turned on" and "tuned in" to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample.

  11. Entrainment to an auditory signal: Is attention involved?

    Science.gov (United States)

    Kunert, Richard; Jongman, Suzanne R

    2017-01-01

    Many natural auditory signals, including music and language, change periodically. The effect of such auditory rhythms on the brain is unclear however. One widely held view, dynamic attending theory, proposes that the attentional system entrains to the rhythm and increases attention at moments of rhythmic salience. In support, 2 experiments reported here show reduced response times to visual letter strings shown at auditory rhythm peaks, compared with rhythm troughs. However, we argue that an account invoking the entrainment of general attention should further predict rhythm entrainment to also influence memory for visual stimuli. In 2 pseudoword memory experiments we find evidence against this prediction. Whether a pseudoword is shown during an auditory rhythm peak or not is irrelevant for its later recognition memory in silence. Other attention manipulations, dividing attention and focusing attention, did result in a memory effect. This raises doubts about the suggested attentional nature of rhythm entrainment. We interpret our findings as support for auditory rhythm perception being based on auditory-motor entrainment, not general attention entrainment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. Emotional stimuli and motor conversion disorder

    NARCIS (Netherlands)

    Voon, V.; Brezing, C.; Gallea, C.; Ameli, R.; Roelofs, K.; LaFrance, W.C.; Hallett, M.

    2010-01-01

    Conversion disorder is characterized by neurological signs and symptoms related to an underlying psychological issue. Amygdala activity to affective stimuli is well characterized in healthy volunteers, with greater amygdala activity to both negative and positive stimuli relative to neutral stimuli, ...

  13. Activating Teaching in Lecture Halls (Aktiverende Undervisning i auditorier)

    DEFF Research Database (Denmark)

    Parus, Judith

    Workshop on experiences with and use of activating teaching methods in lecture halls and with large classes. Which methods have worked well and which poorly? What considerations should one make?

  14. Moving Stimuli Facilitate Synchronization But Not Temporal Perception.

    Science.gov (United States)

    Silva, Susana; Castro, São Luís

    2016-01-01

    Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  15. Moving stimuli facilitate synchronization but not temporal perception

    Directory of Open Access Journals (Sweden)

    Susana Silva

    2016-11-01

    Full Text Available Recent studies have shown that a moving visual stimulus (e.g., a bouncing ball) facilitates synchronization compared to a static stimulus (e.g., a flashing light), and that it can even be as effective as an auditory beep. We asked a group of participants to perform different tasks with four stimulus types: beeps, siren-like sounds, visual flashes (static) and bouncing balls. First, participants performed synchronization with isochronous sequences (stimulus-guided synchronization), followed by a continuation phase in which the stimulus was internally generated (imagery-guided synchronization). Then they performed a perception task, in which they judged whether the final part of a temporal sequence was compatible with the previous beat structure (stimulus-guided perception). Similar to synchronization, an imagery-guided variant was added, in which sequences contained a gap in between (imagery-guided perception). Balls outperformed flashes and matched beeps (powerful ball effect) in stimulus-guided synchronization but not in perception (stimulus- or imagery-guided). In imagery-guided synchronization, performance accuracy decreased for beeps and balls, but not for flashes and sirens. Our findings suggest that the advantages of moving visual stimuli over static ones are grounded in action rather than perception, and they support the hypothesis that the sensorimotor coupling mechanisms for auditory (beeps) and moving visual stimuli (bouncing balls) overlap.

  16. Parallel hierarchical radiosity rendering

    Energy Technology Data Exchange (ETDEWEB)

    Carter, Michael [Iowa State Univ., Ames, IA (United States)

    1993-07-01

    In this dissertation, the step-by-step development of a scalable parallel hierarchical radiosity renderer is documented. First, a new look is taken at the traditional radiosity equation, and a new form is presented in which the matrix of linear system coefficients is transformed into a symmetric matrix, thereby simplifying the problem and enabling a new solution technique to be applied. Next, the state-of-the-art hierarchical radiosity methods are examined for their suitability for parallel implementation and for scalability. Significant enhancements are also discovered which both improve their theoretical foundations and improve the images they generate. The resultant hierarchical radiosity algorithm is then examined for sources of parallelism and for an architectural mapping. Several architectural mappings are discussed. A few key algorithmic changes are suggested during the process of making the algorithm parallel. Next, the performance, efficiency, and scalability of the algorithm are analyzed. The dissertation closes with a discussion of several ideas which have the potential to further enhance the hierarchical radiosity method, or provide an entirely new forum for the application of hierarchical methods.
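
    The symmetrization referred to above can be sketched with the standard reciprocity-based rescaling of the radiosity system; this is the textbook construction and may differ in detail from the dissertation's derivation:

        % Classical radiosity system: B_i radiosity, E_i emission, rho_i
        % reflectance, F_ij form factors, A_i patch areas.
        \[
          B_i = E_i + \rho_i \sum_j F_{ij} B_j
          \quad\Longleftrightarrow\quad
          \sum_j \bigl(\delta_{ij} - \rho_i F_{ij}\bigr) B_j = E_i .
        \]
        % Scaling row i by A_i/rho_i yields the system matrix
        \[
          M_{ij} = \frac{A_i}{\rho_i}\,\delta_{ij} - A_i F_{ij},
        \]
        % whose off-diagonal entries are symmetric by the reciprocity
        % relation A_i F_{ij} = A_j F_{ji}:
        \[
          M_{ij} = -A_i F_{ij} = -A_j F_{ji} = M_{ji} \qquad (i \neq j),
        \]
        % so M is symmetric, and strictly diagonally dominant whenever
        % 0 < rho_i < 1 and sum_j F_ij <= 1.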

  17. Effects of Background Music on Objective and Subjective Performance Measures in an Auditory BCI

    OpenAIRE

    Zhou, Sijie; Allison, Brendan Z.; Kübler, Andrea; Cichocki, Andrzej; Wang, Xingyu; Jin, Jing

    2016-01-01

    Several studies have explored brain computer interface (BCI) systems based on auditory stimuli, which could help patients with visual impairments. Usability and user satisfaction are important considerations in any BCI. Although background music can influence emotion and performance in other task environments, and many users may wish to listen to music while using a BCI, auditory, and other BCIs are typically studied without background music. Some work has explored the possibility of using po...

  18. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors.

    Science.gov (United States)

    Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun

    2015-08-01

    Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. BALDEY: A database of auditory lexical decisions.

    Science.gov (United States)

    Ernestus, Mirjam; Cutler, Anne

    2015-01-01

    In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
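
    As an illustration of the second kind of analysis described above, the sketch below compares how well different frequency-of-occurrence measures predict auditory lexical decision times. The file name and column names (`rt`, `is_word`, `freq_written`, `freq_spoken`, `freq_subtitle`, `freq_rating`) are hypothetical stand-ins, not the actual BALDEY field labels:

```python
import numpy as np
import pandas as pd

# Hypothetical file and column names -- stand-ins for the real BALDEY fields.
df = pd.read_csv("baldey.csv")
words = df[df["is_word"] == 1]  # lexical decisions to real words only

predictors = ["freq_written", "freq_spoken", "freq_subtitle", "freq_rating"]
for col in predictors:
    logf = np.log(words[col] + 1)          # +1 guards against zero counts
    r = np.corrcoef(logf, words["rt"])[0, 1]
    print(f"{col}: r = {r:+.3f} (expected negative: frequent words -> faster RTs)")
```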

  20. Sensorimotor Learning Enhances Expectations During Auditory Perception.

    Science.gov (United States)

    Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara

    2015-08-01

    Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  1. Auditory Discrimination Learning: Role of Working Memory.

    Directory of Open Access Journals (Sweden)

    Yu-Xuan Zhang

    Full Text Available Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM. First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.

  2. Hierarchical Network Design

    DEFF Research Database (Denmark)

    Thomadsen, Tommy

    2005-01-01

    Communication networks are immensely important today, since both companies and individuals use numerous services that rely on them. This thesis considers the design of hierarchical (communication) networks. Hierarchical networks consist of layers of networks and are well-suited for coping... The thesis's most important contribution consists of seven papers, included in the appendix, concerned with the design of hierarchical and ring networks. The papers have all been submitted to journals and, except for two papers, are awaiting review. The papers are mostly concerned with optimal methods and, in a few cases, heuristics for designing hierarchical and ring networks. All papers develop bounds which are used in the optimal methods...
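
    To make the flavor of such design problems concrete, here is a minimal Python sketch of a two-layer design task of the kind these papers formalize: choose backbone (hub) nodes and assign every remaining node to its cheapest hub. The brute-force search and the random cost data are purely illustrative; the thesis's actual models, heuristics, and bounds are far more sophisticated:

```python
import random
from itertools import combinations

# Hypothetical symmetric link costs between 5 nodes (illustrative data only).
random.seed(1)
n = 5
cost = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        cost[i][j] = cost[j][i] = random.randint(1, 10)

def design_cost(hubs):
    """Backbone layer: a clique over the hubs; access layer: each
    remaining node connects to its cheapest hub."""
    backbone = sum(cost[a][b] for a, b in combinations(hubs, 2))
    access = sum(min(cost[v][h] for h in hubs)
                 for v in range(n) if v not in hubs)
    return backbone + access

# Brute force over all small hub sets (only viable for tiny instances;
# bounds matter precisely because exhaustive search does not scale).
best = min((design_cost(h), h)
           for k in (1, 2, 3)
           for h in combinations(range(n), k))
print("best cost:", best[0], "hubs:", best[1])
```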

  3. Working Memory Deficits in Dynamic Sport Athletes with a History of Concussion Revealed by A Visual-Auditory Dual-Task Paradigm

    National Research Council Canada - National Science Library

    Tapper, Anthony; Niechwiej-Szwedo, Ewa; Gonzalez, David; Roy, Eric

    2015-01-01

    .... It is important to understand information processing capacity in dynamic sports because athletes must divide their attention between visual and auditory stimuli and hold that information in memory to guide actions...

  4. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  5. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements, and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  6. Auditory processing in elderly people: interaction study by means of verbal and nonverbal stimuli

    Directory of Open Access Journals (Sweden)

    Maria Madalena Canina Pinheiro

    2004-04-01

    Full Text Available Hearing loss known as presbyacusis arises with the aging process and is accompanied by a decline in auditory functioning. AIM: to characterize verbal and nonverbal sound interaction in elderly individuals with and without hearing loss by means of the Sound Localization in Five Directions test, the Binaural Fusion test, and the Pediatric Sentence Identification test in monotic listening (PSI-MCI), taking into account each procedure and the degree of hearing loss. STUDY DESIGN: Clinical study with transversal cohort. MATERIAL AND METHOD: 110 elderly individuals, aged 60 to 85 years, with normal hearing or with symmetrical sensorineural hearing loss of up to moderately severe degree were included in this study. The auditory behavior common to all the selected tests was termed interaction. The analysis was carried out per procedure and by degree of hearing loss. RESULTS: More individuals showed inability on the Binaural Fusion test. The procedures that showed a statistically significant dependence on the degree of hearing loss were the Sound Localization test and the PSI-MCI (-10). CONCLUSION: Elderly individuals have difficulty in the binaural interaction process when auditory information is incomplete. The degree of hearing loss interfered mainly with localization behavior.

  7. Effects of Amplitude Compression on Relative Auditory Distance Perception

    Science.gov (United States)

    2013-10-01

    ... auditory distance perception by reducing the level differences between sounds. The focus of the present study was to investigate the effect of amplitude ... create stimuli. Two levels of amplitude compression were applied to the recordings through Adobe Audition sound editing software to simulate military ...

  8. Auditory hallucinations induced by trazodone

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  9. Auditory brainstem response recording to multiple interleaved broadband chirps.

    Science.gov (United States)

    Cebulla, Mario; Stürzebecher, Ekkehard; Don, Manuel; Müller-Mazzotta, Jochen

    2012-01-01

    The simultaneous application of multiple stimuli that excite different frequency regions of the cochlea is a well-established method for recording frequency-specific auditory steady state responses. Because the stimuli are applied at different repetition rates, they actually do not appear exactly simultaneously. There is always a certain time difference between the multiple frequency-specific stimuli. This is true also for multiple interleaved broadband stimuli. Therefore, because of this time difference, one may expect a successful recording of responses to multiple broadband chirp stimuli even when such stimuli activate the whole cochlear partition. This article describes a technique for recording auditory brainstem responses evoked by trains of broadband chirps presented simultaneously at equal stimulus levels but at different repetition rates. The interactions between the interleaved stimulus trains were studied to lay the foundation for a rapid method of assessing temporal aspects of peripheral auditory processing. The first step in laying this foundation is to determine the characteristics of responses from an intact and normal-hearing system to these interleaved chirp trains. Subsequently, the studied interactions between the interleaved applied stimuli may provide a referential framework for future clinical studies aimed at assessing pathological populations. Two chirp trains were applied concurrently at the same stimulus level but at different repetition rates of 20/sec and 22/sec, respectively. Two overall stimulus levels were investigated: 50 and 30 dB nHL. Because of the 2 Hz difference between the repetition rates, the time difference between the stimuli of the two stimulus trains followed a periodic cycling. The cycling period of 0.5 sec contained ten 20/sec stimuli and eleven 22/sec-stimuli. The response to a single train of chirps with the repetition rate of 20/sec was also recorded. The test group consisted of 11 young adult subjects, all with
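
    The periodic cycling of the time differences follows directly from the 2 Hz rate difference: the beat period is 1/(22 - 20) = 0.5 sec, containing ten 20/sec and eleven 22/sec stimuli. A small Python sketch of the interleaved onset times (the printed lags are illustrative arithmetic, not recorded data):

```python
# Onset times (s) of two concurrent chirp trains, one cycle of the pattern.
rate_a, rate_b = 20.0, 22.0          # repetition rates (stimuli per second)
cycle = 1.0 / abs(rate_b - rate_a)   # 0.5 s beat period of the timing pattern

onsets_a = [k / rate_a for k in range(int(cycle * rate_a))]  # ten 20/sec chirps
onsets_b = [k / rate_b for k in range(int(cycle * rate_b))]  # eleven 22/sec chirps

# Lag from each 22/sec chirp to the nearest 20/sec chirp: the lags sweep
# through the whole inter-stimulus interval once per 0.5 s cycle.
for t in onsets_b:
    lag = min(abs(t - s) for s in onsets_a)
    print(f"22/sec chirp at {1000 * t:6.1f} ms -> nearest 20/sec chirp {1000 * lag:5.1f} ms away")
```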

  10. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  11. Multisensory Processing of Gustatory Stimuli.

    Science.gov (United States)

    Simon, S A; de Araujo, I E; Stapleton, J R; Nicolelis, M A L

    2008-06-01

    Gustatory perception is inherently multimodal, since approximately the same time that intra-oral stimuli activate taste receptors, somatosensory information is concurrently sent to the CNS. We review evidence that gustatory perception is intrinsically linked to concurrent somatosensory processing. We will show that processing of multisensory information can occur at the level of the taste cells through to the gustatory cortex. We will also focus on the fact that the same chemical and physical stimuli that activate the taste system also activate the somatosensory system (SS), but they may provide different types of information to guide behavior.

  12. Auditory short-term memory behaves like visual short-term memory.

    Directory of Open Access Journals (Sweden)

    Kristina M Visscher

    2007-03-01

    Full Text Available Are the information processing steps that support short-term sensory memory common to all the senses? Systematic, psychophysical comparison requires identical experimental paradigms and comparable stimuli, which can be challenging to obtain across modalities. Participants performed a recognition memory task with auditory and visual stimuli that were comparable in complexity and in their neural representations at early stages of cortical processing. The visual stimuli were static and moving Gaussian-windowed, oriented, sinusoidal gratings (Gabor patches); the auditory stimuli were broadband sounds whose frequency content varied sinusoidally over time (moving ripples). Parallel effects on recognition memory were seen for number of items to be remembered, retention interval, and serial position. Further, regardless of modality, predicting an item's recognizability requires taking account of (1) the probe's similarity to the remembered list items (summed similarity), and (2) the similarity between the items in memory (inter-item homogeneity). A model incorporating both these factors gives a good fit to recognition memory data for auditory as well as visual stimuli. In addition, we present the first demonstration of the orthogonality of summed similarity and inter-item homogeneity effects. These data imply that auditory and visual representations undergo very similar transformations while they are encoded and retrieved from memory.
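
    A minimal sketch of a model of the kind described, assuming an exponential similarity function over a single stimulus dimension; the weights and decay parameter are made-up illustrative values, not the paper's fitted ones:

```python
import numpy as np

def similarity(x, y, tau=1.0):
    """Exponential similarity between two stimuli on one feature axis."""
    return float(np.exp(-abs(x - y) / tau))

def recognition_strength(probe, memory_list, w_sum=1.0, w_hom=0.5):
    """Familiarity of a probe given the items held in short-term memory.

    Combines (1) summed probe-to-item similarity with (2) inter-item
    homogeneity -- the two factors the abstract identifies as orthogonal.
    """
    summed = sum(similarity(probe, item) for item in memory_list)
    pairs = [(a, b) for i, a in enumerate(memory_list)
             for b in memory_list[i + 1:]]
    homogeneity = np.mean([similarity(a, b) for a, b in pairs]) if pairs else 0.0
    return w_sum * summed + w_hom * homogeneity

# The same probe feels more familiar against a homogeneous list than
# against a widely spread one.
print(recognition_strength(0.1, [0.0, 0.2, 0.1]))   # homogeneous list
print(recognition_strength(0.1, [-2.0, 0.1, 2.0]))  # heterogeneous list
```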

  13. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training

    Science.gov (United States)

    Phan, Mimi L.; Vicario, David S.

    2014-01-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. PMID:25475353

  14. Modulation of auditory attention by training: evidence from dichotic listening.

    Science.gov (United States)

    Soveri, Anna; Tallus, Jussi; Laine, Matti; Nyberg, Lars; Bäckman, Lars; Hugdahl, Kenneth; Tuomainen, Jyrki; Westerhausen, René; Hämäläinen, Heikki

    2013-01-01

    We studied the effects of training on auditory attention in healthy adults with a speech perception task involving dichotically presented syllables. Training involved bottom-up manipulation (facilitating responses from the harder-to-report left ear through a decrease of right-ear stimulus intensity), top-down manipulation (focusing attention on the left-ear stimuli through instruction), or their combination. The results showed significant training-related effects for top-down training. These effects were evident as higher overall accuracy rates in the forced-left dichotic listening (DL) condition that sets demands on attentional control, as well as a response shift toward left-sided reports in the standard DL task. Moreover, a transfer effect was observed in an untrained auditory-spatial attention task involving bilateral stimulation where top-down training led to a relatively stronger focus on left-sided stimuli. Our results indicate that training of attentional control can modulate the allocation of attention in the auditory space in adults. Malleability of auditory attention in healthy adults raises the issue of potential training gains in individuals with attentional deficits.

  15. The Influence of Auditory Information on Visual Size Adaptation

    Directory of Open Access Journals (Sweden)

    Alessia Tonelli

    2017-10-01

    Full Text Available Size perception can be influenced by several visual cues, such as spatial cues (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory and other sensory modalities, such as auditory, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent to the test stimulus. Secondly, we repeated the task by presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived smaller than its physical size. The new finding is that the auditory stimuli had an effect on the perceived size of the test stimulus after visual adaptation: the low-frequency sound decreased the effect of visual adaptation, making the stimulus appear bigger than in the visual-only condition, while the high-frequency sound had the opposite effect, making the test size appear even smaller.

  16. Auditory event related potentials in children with peripheral hearing loss.

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoît; Lassonde, Maryse

    2013-07-01

    The aim of the study was to investigate the neurophysiological responses in children with hearing loss. Cortical auditory evoked potentials and mismatch negativity (MMN) responses were recorded in 40 children, 9-12 years old: 12 with hearing loss, 12 with central auditory processing disorder (CAPD) and 16 with normal hearing. Passive oddball paradigms were used with nonverbal and verbal stimuli. For P1, no significant group differences were observed. A significant reduction in N2 amplitude with all stimuli was observed in the group of children with hearing loss compared to those with normal hearing. N2 results did not reveal any significant differences between the children with hearing loss and the children with CAPD. There was a trend toward larger MMN amplitude in the group of children with hearing loss compared to the children with CAPD. Abnormal N2 characteristics could be a manifestation of a specific signature in children with hearing loss. This cortical response could be considered a neurophysiologic marker of central auditory processing deficits in these children. Results suggest maturational delays and/or deficits in central auditory processing in children with hearing loss. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  17. Effect of handedness on auditory attentional performance in ADHD students

    Directory of Open Access Journals (Sweden)

    Schmidt SL

    2017-12-01

    Full Text Available Sergio L Schmidt (Department of Neurophysiology, State University of Rio de Janeiro; Neurology Department, Federal University of the State of Rio de Janeiro), Ana Lucia Novais Carvaho (Department of Psychology, Fluminense Federal University, Niteroi), Eunice N Simoes (Neurology Department, Federal University of the State of Rio de Janeiro), Brazil. Abstract: The relationship between handedness and attentional performance is poorly understood. Continuous performance tests (CPTs) using visual stimuli are commonly used to assess subjects suffering from attention deficit hyperactivity disorder (ADHD). However, auditory CPTs are considered more useful than visual ones to evaluate classroom attentional problems. A previous study reported that there was a significant effect of handedness on students' performance on a visual CPT. Here, we examined whether handedness would also affect CPT performance using only auditory stimuli. From an initial sample of 337 students, 11 matched pairs were selected. Repeated ANOVAs showed a significant effect of handedness on attentional performance that was exhibited even in the control group. Left-handers made more commission errors than right-handers. The results were interpreted considering that the association between ADHD and handedness reflects that consistent left-handers are less lateralized and have decreased interhemispheric connections. Auditory attentional data suggest that left-handers have problems in the impulsive/hyperactivity domain. In ADHD, clinical therapeutics and rehabilitation must take handedness into account because consistent sinistrals are more impulsive than dextrals. Keywords: attention, ADHD, consistent left-handers, auditory attention, continuous performance test

  18. Application of Neural Network Modeling to Identify Auditory Processing Disorders in School-Age Children

    Directory of Open Access Journals (Sweden)

    Sridhar Krishnamurti

    2015-01-01

    Full Text Available P300 Auditory Event-Related Potentials (P3AERPs) were recorded in nine school-age children with auditory processing disorders and nine age- and gender-matched controls in response to tone burst stimuli presented at varying rates (1/second or 3/second) under varying levels of competing noise (0 dB, 40 dB, or 60 dB SPL). Neural network modeling results indicated that speed of information processing and task-related demands significantly influenced P3AERP latency in children with auditory processing disorders. Competing noise and rapid stimulus rates influenced P3AERP amplitude in both groups.
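
    As an illustration of the modeling approach named in the title, the sketch below trains a small feedforward network to separate the two groups from P3AERP features. All data here are synthetic stand-ins, and the study's actual features, architecture, and results are not reproduced:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: P3AERP latency (ms) and amplitude (uV) per child,
# recorded at a given stimulus rate (/s) and competing-noise level (dB SPL).
rng = np.random.default_rng(0)
n = 200
rate = rng.choice([1, 3], n)
noise = rng.choice([0, 40, 60], n)
apd = rng.integers(0, 2, n)  # 1 = auditory processing disorder (hypothetical label)
latency = 300 + 15 * apd + 10 * (rate == 3) + 0.3 * noise + rng.normal(0, 10, n)
amplitude = 8 - 0.04 * noise - 1.0 * (rate == 3) + rng.normal(0, 1, n)

X = np.column_stack([latency, amplitude, rate, noise])
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X[:150], apd[:150])                      # train on the first 150 cases
print("held-out accuracy:", clf.score(X[150:], apd[150:]))
```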

  19. Looking and listening: A comparison of intertrial repetition effects in visual and auditory search tasks.

    Science.gov (United States)

    Klein, Michael D; Stolz, Jennifer A

    2015-08-01

    Previous research shows that performance on pop-out search tasks is facilitated when the target and distractors repeat across trials compared to when they switch. This phenomenon has been shown for many different types of visual stimuli. We tested whether the effect would extend beyond visual stimuli to the auditory modality. Using a temporal search task that has previously been shown to elicit priming of pop-out with visual stimuli (Yashar & Lamy, Psychological Science, 21(2), 243-251, 2010), we showed that priming of pop-out does occur with auditory stimuli and has characteristics similar to those of an analogous visual task. These results suggest that either the same or similar mechanisms might underlie priming of pop-out in both modalities.

  20. Finding the missing stimulus mismatch negativity (MMN): Emitted MMN to violations of an auditory gestalt

    Science.gov (United States)

    Salisbury, Dean F

    2011-01-01

    Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counter-intuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect a MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of 6 pips (50 msec duration, 330 msec stimulus onset asynchrony (SOA), 400 trials) were presented with an inter-trial interval (ITI) of 750 msec while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked a MMN, reflecting violation of the gestalt grouping rule. Homogeneous stimulus streams appear to differ from strongly patterned streams in the relative weighting of omissions. PMID:22221004

  1. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Science.gov (United States)

    Afra, Pegah; Funke, Michael; Matsuo, Fumisuke

    2009-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as intake of hallucinogenic or psychedelic drugs. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. PMID:22110319

  2. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.

  3. Hierarchical Porous Structures

    Energy Technology Data Exchange (ETDEWEB)

    Grote, Christopher John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-07

    Materials design is often at the forefront of technological innovation. While there has always been a push to generate increasingly low-density materials, such as aerogels or hydrogels, more recently the idea of bicontinuous structures has come into play. This review will cover some of the methods and applications for generating both porous and hierarchically porous structures.

  4. Microparticles with hierarchical porosity

    Science.gov (United States)

    Petsev, Dimiter N; Atanassov, Plamen; Pylypenko, Svitlana; Carroll, Nick; Olson, Tim

    2012-12-18

    The present disclosure provides oxide microparticles with engineered hierarchical porosity and methods of manufacturing the same. Also described are structures that are formed by templating, impregnating, and/or precipitating the oxide microparticles, and methods for forming the same. Suitable applications include catalysts, electrocatalysts, electrocatalyst support materials, capacitors, drug delivery systems, sensors and chromatography.

  5. Developmental evaluation of atypical auditory sampling in dyslexia: Functional and structural evidence.

    Science.gov (United States)

    Lizarazu, Mikel; Lallier, Marie; Molinaro, Nicola; Bourguignon, Mathieu; Paz-Alonso, Pedro M; Lerma-Usabiaga, Garikoitz; Carreiras, Manuel

    2015-12-01

    Whether phonological deficits in developmental dyslexia are associated with impaired neural sampling of auditory information at either syllabic or phonemic rates is still under debate. In addition, whereas neuroanatomical alterations in auditory regions have been documented in dyslexic readers, whether and how these structural anomalies are linked to auditory sampling and reading deficits remains poorly understood. In this study, we measured auditory neural synchronization at different frequencies corresponding to relevant phonological spectral components of speech in children and adults with and without dyslexia, using magnetoencephalography. Furthermore, structural MRI was used to estimate cortical thickness of the auditory cortex of participants. Dyslexics showed atypical brain synchronization at both syllabic (slow) and phonemic (fast) rates. Interestingly, while a left hemispheric asymmetry in cortical thickness was functionally related to a stronger left hemispheric lateralization of neural synchronization to stimuli presented at the phonemic rate in skilled readers, the same anatomical index in dyslexics was related to a stronger right hemispheric dominance for neural synchronization to syllabic-rate auditory stimuli. These data suggest that the acoustic sampling deficit in developmental dyslexia might be linked to an atypical specialization of the auditory cortex to both low and high frequency amplitude modulations. © 2015 Wiley Periodicals, Inc.

  6. Statistical learning and auditory processing in children with music training: An ERP study.

    Science.gov (United States)

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne

    2017-07-01

    The question whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9-11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
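
    Statistical learning of the kind measured here is commonly formalized through transitional probabilities: within a triplet, the probability of the next element given the current one is high, while across a triplet boundary it is low. A minimal sketch with a made-up triplet inventory (not the study's actual stimuli):

```python
import random
from collections import Counter

# Hypothetical stimulus inventory: three fixed triplets, concatenated into
# a continuous stream as in typical statistical-learning experiments.
triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]
random.seed(0)
stream = [s for _ in range(100) for s in random.choice(triplets)]

pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(x, y):
    """P(y | x) estimated from the continuous stream."""
    return pair_counts[(x, y)] / first_counts[x]

print("within-triplet  P(B|A):", round(transitional_probability("A", "B"), 2))  # ~1.0
print("across-boundary P(D|C):", round(transitional_probability("C", "D"), 2))  # ~0.33
```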

  7. Stimuli, Reinforcers, and Private Events

    Science.gov (United States)

    Nevin, John A.

    2008-01-01

    Radical behaviorism considers private events to be a part of ongoing observable behavior and to share the properties of public events. Although private events cannot be measured directly, their roles in overt action can be inferred from mathematical models that relate private responses to external stimuli and reinforcers according to the same…

  8. Subconscious Subliminal Stimuli And rrrsssssshhhppp!

    DEFF Research Database (Denmark)

    Lewis Brooks, Anthony

    2003-01-01

    of such issues as outlined in my opening statement. I suggest that successful design of the future will take much more into account the neural stimuli & potential subliminal synesthesia design aspects as an integrated element of the envisioned Virtual Interactive Space. Keywords: remarkable reductive retraction...

  9. Octave effect in auditory attention

    National Research Council Canada - National Science Library

    Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee

    2013-01-01

    ... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...

  10. Affective and physiological correlates of the perception of unimodal and bimodal emotional stimuli.

    Science.gov (United States)

    Rosa, Pedro J; Oliveira, Jorge; Alghazzawi, Daniyal; Fardoun, Habib; Gamito, Pedro

    2017-08-01

    Despite the multisensory nature of perception, previous research on emotions has focused on unimodal emotional cues with visual stimuli. To the best of our knowledge, there is no evidence on the extent to which incongruent emotional cues from visual and auditory sensory channels affect pupil size. The aims were to investigate the effects of audiovisual emotional perception on physiological and affective responses, and to determine the impact of mismatched emotional cues on these physiological indexes. Pupil size, electrodermal activity and affective subjective responses were recorded while 30 participants were exposed to visual and auditory stimuli with varied emotional content in three different experimental conditions: pictures and sounds presented alone (unimodal), emotionally matched audio-visual stimuli (bimodal congruent) and emotionally mismatched audio-visual stimuli (bimodal incongruent). The data revealed no effect of emotional incongruence on physiological and affective responses. On the other hand, pupil size covaried with skin conductance response (SCR), but the subjective experience was partially dissociated from autonomic responses. Emotional stimuli are able to trigger physiological responses regardless of valence, sensory modality or level of emotional congruence.

  11. Consistency of Border-Ownership Cells across Artificial Stimuli, Natural Stimuli, and Stimuli with Ambiguous Contours.

    Science.gov (United States)

    Hesse, Janis K; Tsao, Doris Y

    2016-11-02

    Segmentation and recognition of objects in a visual scene are two problems that are hard to solve separately from each other. When segmenting an ambiguous scene, it is helpful to already know the present objects and their shapes. However, for recognizing an object in clutter, one would like to consider its isolated segment alone to avoid confounds from features of other objects. Border-ownership cells (Zhou et al., 2000) appear to play an important role in segmentation, as they signal the side-of-figure of artificial stimuli. The present work explores the role of border-ownership cells in dorsal macaque visual areas V2 and V3 in the segmentation of natural object stimuli and locally ambiguous stimuli. We report two major results. First, compared with previous estimates, we found a smaller percentage of cells that were consistent across artificial stimuli used previously. Second, we found that the average response of those neurons that did respond consistently to the side-of-figure of artificial stimuli also consistently signaled, as a population, the side-of-figure for borders of single faces, occluding faces and, with higher latencies, even stimuli with illusory contours, such as Mooney faces and natural faces completely missing local edge information. In contrast, the local edge or the outlines of the face alone could not always evoke a significant border-ownership signal. Our results underscore that border ownership is coded by a population of cells, and indicate that these cells integrate a variety of cues, including low-level features and global object context, to compute the segmentation of the scene. To distinguish different objects in a natural scene, the brain must segment the image into regions corresponding to objects. The so-called "border-ownership" cells appear to be dedicated to this task, as they signal for a given edge on which side the object is that owns it. Here, we report that individual border-ownership cells are unreliable when tested across

  12. Specialization of Binaural Responses in Ventral Auditory Cortices

    Science.gov (United States)

    Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.

    2010-01-01

    Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610

  13. Hierarchical models and functional traits

    NARCIS (Netherlands)

    van Loon, E.E.; Shamoun-Baranes, J.; Sierdsema, H.; Bouten, W.; Cramer, W.; Badeck, F.; Krukenberg, B.; Klotz, S.; Kühn, I.; Schweiger, O.; Böhning-Gaese, K.; Schaefer, H.-C.; Kissling, D.; Brandl, R.; Brändle, M.; Fricke, R.; Leuschner, C.; Buschmann, H.; Köckermann, B.; Rose, L.

    2006-01-01

    Hierarchical models for animal abundance prediction are conceptually elegant. They are generally more parsimonious than non-hierarchical models derived from the same data, give relatively robust predictions and automatically provide consistent output at multiple (spatio-temporal) scales. Another

  14. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging

    OpenAIRE

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-01-01

    Intrasensory interference during visual working memory (WM) maintenance by object stimuli (such as faces and scenes) has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts by intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions p...

  15. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  16. Auditory Processing Testing: In the Booth versus Outside the Booth.

    Science.gov (United States)

    Lucker, Jay R

    2017-09-01

    Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths to those obtained in quiet test environments outside of booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in a soundproof test suite. The outcomes would indicate whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their school. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. No differences were found on any auditory processing measure as a function of the test setting or the order in which testing was done.

  17. Hemispheric specialization in dogs for processing different acoustic stimuli.

    Directory of Open Access Journals (Sweden)

    Marcello Siniscalchi

    Full Text Available Considerable experimental evidence shows that functional cerebral asymmetries are widespread in animals. Activity of the right cerebral hemisphere has been associated with responses to novel stimuli and the expression of intense emotions, such as aggression, escape behaviour and fear. The left hemisphere uses learned patterns and responds to familiar stimuli. Although such lateralization has been studied mainly for visual responses, there is evidence in primates that auditory perception is lateralized and that vocal communication depends on differential processing by the hemispheres. The aim of the present work was to investigate whether dogs use different hemispheres to process different acoustic stimuli by presenting them with playbacks of a thunderstorm and their species-typical vocalizations. The results revealed that dogs usually process their species-typical vocalizations using the left hemisphere and the thunderstorm sounds using the right hemisphere. Nevertheless, conspecific vocalizations are not always processed by the left hemisphere, since the right hemisphere is used for processing vocalizations when they elicit intense emotion, including fear. These findings suggest that the specialisation of the left hemisphere for intraspecific communication is more ancient than previously thought, and so is the specialisation of the right hemisphere for intense emotions.

  18. Salient stimuli in advertising: the effect of contrast interval length and type on recall.

    Science.gov (United States)

    Olsen, G Douglas

    2002-09-01

    Salient auditory stimuli (e.g., music or sound effects) are commonly used in advertising to elicit attention. However, issues related to the effectiveness of such stimuli are not well understood. This research examines the ability of a salient auditory stimulus, in the form of a contrast interval (CI), to enhance recall of message-related information. Researchers have argued that the effectiveness of the CI is a function of the temporal duration between the onset and offset of the change in the background stimulus and the nature of this stimulus. Three experiments investigate these propositions and indicate that recall is enhanced, providing the CI is 3 s or less. Information highlighted with silence is recalled better than information highlighted with music.

  19. Early auditory change detection implicitly facilitated by ignored concurrent visual change during a Braille reading task.

    Science.gov (United States)

    Aoyama, Atsushi; Haruyama, Tomohiro; Kuriki, Shinya

    2013-09-01

    Unconscious monitoring of multimodal stimulus changes enables humans to effectively sense the external environment. Such automatic change detection is thought to be reflected in auditory and visual mismatch negativity (MMN) and mismatch negativity fields (MMFs). These are event-related potentials and magnetic fields, respectively, evoked by deviant stimuli within a sequence of standard stimuli, and both are typically studied during irrelevant visual tasks that cause the stimuli to be ignored. Due to the sensitivity of MMN/MMF to potential effects of explicit attention to vision, however, it is unclear whether multisensory co-occurring changes can purely facilitate early sensory change detection reciprocally across modalities. We adopted a tactile task involving the reading of Braille patterns as a neutral ignore condition, while measuring magnetoencephalographic responses to concurrent audiovisual stimuli that were infrequently deviated either in auditory, visual, or audiovisual dimensions; 1000-Hz standard tones were switched to 1050-Hz deviant tones and/or two-by-two standard check patterns displayed on both sides of visual fields were switched to deviant reversed patterns. The check patterns were set to be faint enough so that the reversals could be easily ignored even during Braille reading. While visual MMFs were virtually undetectable even for visual and audiovisual deviants, significant auditory MMFs were observed for auditory and audiovisual deviants, originating from bilateral supratemporal auditory areas. Notably, auditory MMFs were significantly enhanced for audiovisual deviants from about 100 ms post-stimulus, as compared with the summation responses for auditory and visual deviants or for each of the unisensory deviants recorded in separate sessions. Evidenced by high tactile task performance with unawareness of visual changes, we conclude that Braille reading can successfully suppress explicit attention and that simultaneous multisensory changes can

  20. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise eLecaignard

    2015-09-01

    Full Text Available Deviant stimuli, violating regularities in a sensory environment, elicit the mismatch negativity (MMN), largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process where prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulations by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing where predictive coding enables implicit extraction of environmental regularities.
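
    The predictive-coding account can be caricatured in a few lines: the modeled mismatch response scales with the prediction error (surprise) a deviant generates under the listener's learned expectations, so more predictable deviance yields smaller responses. The sketch below varies deviant probability rather than the study's temporal-structure manipulation, and all parameters are made up:

```python
import numpy as np

def simulate_mismatch(deviant_prob, n_trials=2000, lr=0.05, seed=0):
    """Running estimate of deviant probability; mismatch ~ surprise at deviants."""
    rng = np.random.default_rng(seed)
    p_hat, surprises = 0.5, []
    for _ in range(n_trials):
        deviant = rng.random() < deviant_prob
        if deviant:
            surprises.append(-np.log(p_hat))   # prediction error for this deviant
        # Update the belief from the observed outcome (simple delta rule).
        p_hat += lr * (deviant - p_hat)
        p_hat = min(max(p_hat, 1e-6), 1 - 1e-6)
    return float(np.mean(surprises))

# More predictable (frequent) deviants yield a smaller modeled mismatch response.
for p in (0.05, 0.15, 0.30):
    print(f"deviant prob {p:.2f}: mean surprise {simulate_mismatch(p):.2f}")
```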

  1. Contribution of psychoacoustics and neuroaudiology in revealing correlation of mental disorders with central auditory processing disorders

    Science.gov (United States)

    Iliadou, V; Iakovides, S

    2003-01-01

    Background Psychoacoustics is a fascinating, developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology, with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers, as a result of external auditory stimuli. Central Auditory Processing Disorders may co-exist with mental disorders and complicate diagnosis and outcome. Design A MEDLINE search was conducted for papers concerning the association between Central Auditory Processing Disorders and mental disorders. The search focused on the diagnostic methods establishing the inter-connection of various mental disorders and central auditory deficits. Measurements and Main Results The MEDLINE search revealed 564 papers when using the keywords 'auditory deficits' and 'mental disorders'. 79 papers referred specifically to Central Auditory Processing Disorders in connection with mental disorders. 175 papers were related to schizophrenia, 126 to learning disabilities, 29 to Parkinson's disease, 88 to dyslexia, and 39 to Alzheimer's disease. Assessment of the central auditory system is carried out through a great variety of tests that fall into two main categories: psychoacoustic and electrophysiologic testing. Different specialties are involved in the diagnosis and management of Central Auditory Processing Disorders as well as the mental disorders that may co-exist with them. As a result, it is essential that they all are aware of the available diagnostic procedures. Conclusions Considerable evidence exists that mental disorders may correlate with CAPD, and this correlation can be revealed through psychoacoustics and neuroaudiology. Mental disorders that relate to Central Auditory Processing Disorders include: schizophrenia, attention deficit disorders, Alzheimer's disease, learning disabilities, dyslexia, depression, auditory

  2. Speech identification and cortical potentials in individuals with auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Vanaja CS

    2008-03-01

    Background The present study investigated the relationship between speech identification scores in quiet and parameters of cortical potentials (latency of P1, N1, and P2; and amplitude of N1/P2) in individuals with auditory neuropathy. Methods Ten individuals with auditory neuropathy (five males and five females) and ten individuals with normal hearing in the age range of 12 to 39 yr participated in the study. Speech identification ability was assessed for bi-syllabic words and cortical potentials were recorded for click stimuli. Results Results revealed that in individuals with auditory neuropathy, speech identification scores were significantly poorer than those of individuals with normal hearing. Individuals with auditory neuropathy were further classified into two groups, Good Performers and Poor Performers, based on their speech identification scores. It was observed that the mean amplitude of N1/P2 of Poor Performers was significantly lower than that of Good Performers and those with normal hearing. There was no significant effect of group on the latency of the peaks. Speech identification scores showed a good correlation with the amplitude of cortical potentials (N1/P2 complex) but did not show a significant correlation with the latency of cortical potentials. Conclusion Results of the present study suggest that measuring cortical potentials may offer a means for predicting perceptual skills in individuals with auditory neuropathy.

  3. Speech perception using combinations of auditory, visual, and tactile information.

    Science.gov (United States)

    Blamey, P J; Cowan, R S; Alcantara, J I; Whitford, L A; Clark, G M

    1989-01-01

    Four normally-hearing subjects were trained and tested with all combinations of a highly-degraded auditory input, a visual input via lipreading, and a tactile input using a multichannel electrotactile speech processor. The speech perception of the subjects was assessed with closed sets of vowels, consonants, and multisyllabic words; with open sets of words and sentences; and with speech tracking. When the visual input was added to any combination of other inputs, a significant improvement occurred for every test. Similarly, the auditory input produced a significant improvement for all tests except closed-set vowel recognition. The tactile input produced scores that were significantly greater than chance in isolation, but combined less effectively with the other modalities. The addition of the tactile input did produce significant improvements for vowel recognition in the auditory-tactile condition, for consonant recognition in the auditory-tactile and visual-tactile conditions, and for open-set word recognition in the visual-tactile condition. Information transmission analysis of the features of vowels and consonants indicated that information from the auditory and visual inputs was integrated much more effectively than information from the tactile input. The less effective combination might be due to lack of training with the tactile input, or to more fundamental limitations in the processing of multimodal stimuli.

  4. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  5. Listener orientation and spatial judgments of elevated auditory percepts

    Science.gov (United States)

    Parks, Anthony J.

    How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hopes that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli under fixed and free-head rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data found from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically-inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
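
    A multinomial logistic regression of the kind used for the above/below/behind judgments can be sketched in a few lines. The predictors here (noise-band centre frequency and head-rotation extent) and the synthetic data are assumptions for illustration; this shows the model class, not the study's analysis:

      # Sketch of a multinomial logistic regression for categorical elevation
      # judgments; features and generative rule are hypothetical.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 300
      band_freq = rng.uniform(0.5, 12.0, n)    # noise-band centre frequency (kHz)
      head_rot = rng.uniform(0.0, 45.0, n)     # head-rotation extent (degrees)

      # Toy rule: high bands tend toward "above", low toward "below";
      # little head rotation makes "behind" confusions more likely.
      logits = np.column_stack([
          0.4 * band_freq - 2.0,               # above
          -0.4 * band_freq + 2.0,              # below
          -0.05 * head_rot + 0.5,              # behind
      ])
      p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
      y = np.array([rng.choice(3, p=row) for row in p])  # 0=above, 1=below, 2=behind

      X = np.column_stack([band_freq, head_rot])
      model = LogisticRegression(max_iter=1000).fit(X, y)
      print(model.predict_proba([[10.0, 0.0]]))          # a high band, fixed head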

  6. Auditory evoked blink reflex in peripheral facial paresis.

    Science.gov (United States)

    Ayta, Semih; Sohtaoğlu, Melis; Uludüz, Derya; Uygunoğlu, Uğur; Tütüncü, Melih; Akalin, Mehmet Ali; Kiziltan, Meral E

    2015-02-01

    The auditory blink reflex (ABR) is a teleceptive reflex consisting of an early, brief contraction of the orbicularis oculi muscle in response to sound stimuli. Constriction of the orbicularis oculi in response to auditory stimulation is accepted as part of the startle reaction. The blink reflex and ABR might share a final common pathway, consisting of the facial nerve nuclei and the facial nerve, and may have common premotor neurons. In this study, the authors evaluated the value of the ABR in patients with peripheral facial palsy (PFP), cross-checking the results with commonly used blink reflex changes. In total, 83 subjects with PFP and 34 age-matched healthy volunteers were included. The auditory blink reflex was elicited in all control subjects and on the paralytic sides of 36 PFP cases (43.3%), whereas it was asymmetric in 30.1% of the patients. Auditory blink reflex positivity was significantly lower in PFP cases of increasing severity. Blink reflex results were largely correlated with ABR positivity. The auditory blink reflex is a useful, readily elicited, and sensitive test in PFP cases, providing results parallel to the blink reflex and being affected by disease severity.

  7. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate the performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test, and Pitch Pattern Sequencing test. RESULTS: The effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that in all tests children from the APD and SLI groups had significantly poorer performance compared to the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study has shown, in children with SLI, an inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that the neural processes required for auditory processing are different between auditory processing and speech disorders.

  8. Delayed visual maturation associated with auditory neuropathy/dyssynchrony.

    Science.gov (United States)

    Aldosari, Mohammed; Mabie, Ann; Husain, Aatif M

    2003-05-01

    Delayed visual maturation is a term used to describe infants who initially seem blind but subsequently have a marked improvement. The mechanism of visual loss and the subsequent improvement remains unknown. Auditory neuropathy/dyssynchrony is a condition of hearing impairment associated with absent or severely abnormal brainstem auditory evoked potentials but normal cochlear functions as measured by otoacoustic emissions. In this report, a 9-month-old infant who had no visual fixation for the first 3 months of life and congenital hearing impairment is described. Her brainstem auditory evoked potential study at 2.5 months of age showed no response to click stimuli presented at 90 dB nHL, whereas her otoacoustic emissions were normal. Subsequently, her vision and hearing improved. A brainstem auditory evoked potential study at 9 months of age showed reproducible waveforms. This case suggests the need for a detailed hearing evaluation of children with delayed visual maturation. Furthermore, this case highlights the need for follow-up brainstem auditory evoked potential testing prior to pursuing any audiologic intervention.

  9. Brainstem response to speech and non-speech stimuli in children with learning problems.

    Science.gov (United States)

    Malayeri, Saeed; Lotfi, Yones; Moossavi, Seyed Abdollah; Rostami, Reza; Faghihzadeh, Soghrat

    2014-07-01

    Neuronal firing synchronization is critical for recording auditory responses from the brainstem. Recent studies have shown that both click and /da/ synthetic syllable (speech) stimuli perform well in evoking neuronal synchronization at the brainstem level. In the present study, brainstem responses to click and speech stimuli were compared between children with learning problems (LP) and those with normal learning (NL) abilities. The study included 49 children with LP and 34 children with NL. The auditory brainstem response (ABR) to a 100-μs click stimulus and the speech ABR (sABR) to a 40-ms /da/ stimulus were tested in these children. Wave latencies III, V, and Vn and inter-peak latency (IPL) V-Vn in the click ABR, and wave latencies I, V, and A and IPL V-A in the sABR, were significantly longer in children with LP than in children with NL. Except for IPL I-III, a significant positive correlation was observed between click ABR and sABR wave latencies and IPLs in children with NL; this correlation was weaker or absent in children with LP. In this regard, the difference between the correlation coefficients of wave latencies I, III, and V and IPLs I-V and V-Vn/V-A was significant between the two groups. Deficits in auditory processing timing in children with LP probably affected the ABR for both click and speech stimuli. This finding emphasizes the possibility of shared connections between processing timing for speech and non-speech stimuli in auditory brainstem pathways. The weak or absent correlation between click and speech ABR parameters in children with LP may have clinical relevance and may be effectively used for objective diagnoses after confirming its sensitivity and specificity and demonstrating acceptable validity with further evidence. Copyright © 2014 Elsevier B.V. All rights reserved.
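
    Comparing correlation coefficients between independent groups, as in the click-versus-speech ABR analysis above, is commonly done with a Fisher r-to-z test. A minimal sketch with made-up correlation values (the group sizes follow the record; the r values are placeholders, and the paper's exact procedure may differ):

      # Fisher r-to-z test for two independent Pearson correlations.
      import numpy as np
      from scipy.stats import norm

      def compare_correlations(r1, n1, r2, n2):
          """Two-sided p-value for H0: both samples share one correlation."""
          z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher transform
          se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of the difference
          z = (z1 - z2) / se
          return z, 2 * norm.sf(abs(z))

      # Hypothetical wave V latency correlations: NL (n=34) vs. LP (n=49).
      z, p = compare_correlations(0.75, 34, 0.30, 49)
      print(f"z = {z:.2f}, p = {p:.4f}")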

  10. Multisensory Attention in Motion: Uninformative Sounds Increase the Detectability of Direction Changes of Moving Visual Stimuli

    Directory of Open Access Journals (Sweden)

    Durk Talsma

    2011-10-01

    It has recently been shown that spatially uninformative sounds can cause a visual stimulus to pop out from an array of similar distractor stimuli when that sound is presented near-simultaneously with a feature change in the visual stimulus. Until now, this effect has only been shown for stimuli that remain at a fixed position. Here we extend these results by showing that auditory stimuli can also improve the detectability of visual stimulus features related to motion. To accomplish this we presented moving visual stimuli (small dots) on a computer screen. At a random moment during a trial, one of these stimuli could abruptly start moving in an orthogonal direction. Participants' task was to indicate whether such a change in direction had occurred or not by making a corresponding button press. When a sound (a short 1000-Hz tone pip) was presented simultaneously with a motion change, participants were able to detect this motion direction change among a significantly higher number of distractor stimuli, compared to when the sound was absent. When the number of distractor stimuli was kept constant, detection accuracy was significantly higher when the tone was present, compared to when it was absent. Using signal detection theory, we determined that this change in accuracy was reflected in an increase in d′, while we found no evidence to suggest that participants' response bias (as reflected in nearly equal beta parameters) changed due to the presence of the sounds.
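
    The d′ and beta measures invoked above follow directly from hit and false-alarm rates. A minimal sketch of the standard computation, with hypothetical trial counts (generic signal detection theory, not the study's code):

      # d' (sensitivity) and beta (bias) from hit and false-alarm rates.
      from scipy.stats import norm

      def sdt_measures(hits, misses, fas, crs):
          h = hits / (hits + misses)           # hit rate
          f = fas / (fas + crs)                # false-alarm rate
          # Note: rates of exactly 0 or 1 need a correction in practice.
          zh, zf = norm.ppf(h), norm.ppf(f)
          d_prime = zh - zf
          beta = norm.pdf(zh) / norm.pdf(zf)   # likelihood ratio at criterion
          return d_prime, beta

      # Hypothetical counts for change-present vs. change-absent trials:
      print(sdt_measures(hits=80, misses=20, fas=30, crs=70))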

  11. Hierarchical species distribution models

    Science.gov (United States)

    Hefley, Trevor J.; Hooten, Mevin B.

    2016-01-01

    Determining the distribution pattern of a species is important to increase scientific knowledge, inform management decisions, and conserve biodiversity. To infer spatial and temporal patterns, species distribution models have been developed for use with many sampling designs and types of data. Recently, it has been shown that count, presence-absence, and presence-only data can be conceptualized as arising from a point process distribution. Therefore, it is important to understand properties of the point process distribution. We examine how the hierarchical species distribution modeling framework has been used to incorporate a wide array of regression and theory-based components while accounting for the data collection process and making use of auxiliary information. The hierarchical modeling framework allows us to demonstrate how several commonly used species distribution models can be derived from the point process distribution, highlight areas of potential overlap between different models, and suggest areas where further research is needed.
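
    The point process mentioned above is easiest to see through the inhomogeneous Poisson process, whose log-likelihood combines a sum of log-intensities at the observed locations with an integral of the intensity over the study area. A minimal sketch with a log-linear intensity on the unit square and hypothetical presence locations (an illustration of the model class, not the authors' hierarchical implementation):

      # Inhomogeneous Poisson point process with a log-linear intensity;
      # locations and coefficients are hypothetical.
      import numpy as np

      def log_intensity(coords, beta):
          """log lambda(s) = beta0 + beta1*x + beta2*y on the unit square."""
          x, y = coords[:, 0], coords[:, 1]
          return beta[0] + beta[1] * x + beta[2] * y

      def ipp_loglik(points, beta, n_grid=100):
          """log L = sum_i log lambda(s_i) - integral of lambda(s) ds,
          with the integral approximated on a regular grid (area = 1)."""
          gx, gy = np.meshgrid(np.linspace(0, 1, n_grid),
                               np.linspace(0, 1, n_grid))
          grid = np.column_stack([gx.ravel(), gy.ravel()])
          integral = np.exp(log_intensity(grid, beta)).mean()
          return log_intensity(points, beta).sum() - integral

      rng = np.random.default_rng(6)
      presences = rng.uniform(0, 1, (50, 2))     # hypothetical sightings
      print(ipp_loglik(presences, beta=np.array([3.0, 1.0, -0.5])))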

  12. The effects of stimulus symmetry on hierarchical processing in infancy.

    Science.gov (United States)

    Guy, Maggie W; Reynolds, Greg D; Mosteller, Sara M; Dixon, Kate C

    2017-04-01

    The current study investigated the effects of stimulus symmetry on the processing of global and local stimulus properties by 6-month-old short- and long-looking infants through the use of event-related potentials (ERPs). Previous research has shown that individual differences in infant visual attention are related to hierarchical stimulus processing, such that short lookers show a global processing bias, while long lookers demonstrate a local processing bias (Guy, Reynolds, & Zhang, 2013). Additional research has shown that in comparison with asymmetry, symmetry is associated with more efficient stimulus processing and more accurate memory for stimulus configuration (Attneave, 1955; Perkins, 1932). In the current study, we utilized symmetric and asymmetric hierarchical stimuli and predicted that the presence of asymmetry would direct infant attention to the local features of stimuli, leading short lookers to regress to a local processing strategy. Results of the ERP analysis showed that infants familiarized with a symmetric stimulus showed evidence of global processing, while infants familiarized with an asymmetric stimulus did not demonstrate evidence of processing at the global or local level. These findings indicate that short- and long-looking infants, who might otherwise fail to process global stimulus properties due to limited visual scanning, may succeed at global processing when exposed to symmetric stimuli. Furthermore, stimulus symmetry may recruit selective attention toward global properties of visual stimuli, facilitating higher-level cognitive processing in infancy. © 2017 Wiley Periodicals, Inc.

  13. Auditory perception of self-similarity in water sounds.

    Directory of Open Access Journals (Sweden)

    Maria Neimark Geffen

    2011-05-01

    Many natural signals, including environmental sounds, exhibit scale-invariant statistics: their structure is repeated at multiple scales. Such scale invariance has been identified separately across spectral and temporal correlations of natural sounds (Clarke and Voss, 1975; Attias and Schreiner, 1997; Escabi et al., 2003; Singh and Theunissen, 2003). Yet the role of scale invariance across the overall spectro-temporal structure of a sound has not been explored directly in auditory perception. Here, we identify that the sound wave of a recording of running water is a self-similar fractal, exhibiting scale invariance not only within spectral channels, but also across the full spectral bandwidth. The auditory perception of the water sound did not change with its scale. We tested the role of scale invariance in perception by using an artificial sound which could be rendered scale-invariant. We generated a random chirp stimulus: an auditory signal controlled by two parameters, Q, controlling the relative, and r, controlling the absolute, temporal structure of the sound. Imposing scale-invariant statistics on the artificial sound was required for its perception as natural and water-like. Further, Q had to be restricted to a specific range for the sound to be perceived as natural. To detect self-similarity in the water sound, and to identify Q, the auditory system needs to process the temporal dynamics of the waveform across spectral bands in terms of the number of cycles, rather than absolute timing. We propose a two-stage neural model implementing this computation, which may be carried out by circuits of neurons in the auditory cortex. The set of auditory stimuli developed in this study is particularly suitable for measuring response properties of neurons in the auditory pathway, allowing for quantification of the effects of varying the spectro-temporal statistical structure of the stimulus.
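
    Scale-invariant ("1/f"-like) statistics of the kind discussed can be imposed on a synthetic signal by shaping its spectrum. A minimal sketch of that generic construction; it is not the paper's random-chirp algorithm, and the Q and r parameters are not reproduced here:

      # Generic 1/f^alpha noise synthesis via spectral shaping; illustrates
      # imposing scale-invariant statistics, not the paper's chirp stimulus.
      import numpy as np

      def one_over_f_noise(n_samples, alpha=1.0, fs=44100, seed=0):
          rng = np.random.default_rng(seed)
          freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
          mags = np.zeros_like(freqs)
          mags[1:] = freqs[1:] ** (-alpha / 2.0)   # power falls off as 1/f^alpha
          phases = rng.uniform(0, 2 * np.pi, len(freqs))
          spectrum = mags * np.exp(1j * phases)    # random-phase spectrum
          x = np.fft.irfft(spectrum, n=n_samples)
          return x / np.max(np.abs(x))             # normalise to +/- 1

      noise = one_over_f_noise(2 ** 16)            # about 1.5 s at 44.1 kHz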

  14. Incidental auditory category learning.

    Science.gov (United States)

    Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L

    2015-08-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved.

  15. Modelling auditory attention.

    Science.gov (United States)

    Kaya, Emine Merve; Elhilali, Mounya

    2017-02-19

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  16. Auditory Channel Problems.

    Science.gov (United States)

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  17. Hierarchically Structured Electrospun Fibers

    Directory of Open Access Journals (Sweden)

    Nicole E. Zander

    2013-01-01

    Traditional electrospun nanofibers have a myriad of applications ranging from scaffolds for tissue engineering to components of biosensors and energy harvesting devices. The generally smooth one-dimensional structure of the fibers has stood as a limitation to several interesting novel applications. Control of fiber diameter, porosity and collector geometry will be briefly discussed, as will more traditional methods for controlling fiber morphology and fiber mat architecture. The remainder of the review will focus on new techniques to prepare hierarchically structured fibers. Fibers with hierarchical primary structures—including helical, buckled, and beads-on-a-string fibers, as well as fibers with secondary structures, such as nanopores, nanopillars, nanorods, and internally structured fibers and their applications—will be discussed. These new materials with helical/buckled morphology are expected to possess unique optical and mechanical properties with possible applications for negative refractive index materials, highly stretchable/high-tensile-strength materials, and components in microelectromechanical devices. Core-shell type fibers enable a much wider variety of materials to be electrospun and are expected to be widely applied in the sensing, drug delivery/controlled release fields, and in the encapsulation of live cells for biological applications. Materials with a hierarchical secondary structure are expected to provide new superhydrophobic and self-cleaning materials.

  18. Thoughts of death modulate psychophysical and cortical responses to threatening stimuli.

    Directory of Open Access Journals (Sweden)

    Elia Valentini

    Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we explored whether personality and demographics modulate psychophysical and neural changes related to mortality salience (MS). Following MS induction, a specific increase in ratings of intensity and threat was found for both nociceptive and auditory stimuli. While MS did not have any specific effect on nociceptive and auditory evoked potentials, larger amplitude of theta oscillatory activity related to thermal nociceptive activity was found after thoughts of death were induced. MS thus exerted a top-down modulation on theta electroencephalographic oscillatory amplitude, specifically for brain activity triggered by painful thermal stimuli. This effect was higher in participants reporting higher threat perception, suggesting that inducing a death-related mind-set may influence body-defence related somatosensory representations.

  19. Thoughts of Death Modulate Psychophysical and Cortical Responses to Threatening Stimuli

    Science.gov (United States)

    Valentini, Elia; Koch, Katharina; Aglioti, Salvatore Maria

    2014-01-01

    Existential social psychology studies show that awareness of one's eventual death profoundly influences human cognition and behaviour by inducing defensive reactions against end-of-life related anxiety. Much less is known about the impact of reminders of mortality on brain activity. Therefore we explored whether reminders of mortality influence subjective ratings of intensity and threat of auditory and painful thermal stimuli and the associated electroencephalographic activity. Moreover, we explored whether personality and demographics modulate psychophysical and neural changes related to mortality salience (MS). Following MS induction, a specific increase in ratings of intensity and threat was found for both nociceptive and auditory stimuli. While MS did not have any specific effect on nociceptive and auditory evoked potentials, larger amplitude of theta oscillatory activity related to thermal nociceptive activity was found after thoughts of death were induced. MS thus exerted a top-down modulation on theta electroencephalographic oscillatory amplitude, specifically for brain activity triggered by painful thermal stimuli. This effect was higher in participants reporting higher threat perception, suggesting that inducing a death-related mind-set may have an influence on body-defence related somatosensory representations. PMID:25386905

  20. Multisensory Processing of Gustatory Stimuli

    OpenAIRE

    Simon, S A; de Araujo, I.E.; Stapleton, J. R.; Nicolelis, M. A. L.

    2008-01-01

    Gustatory perception is inherently multimodal: at approximately the same time that intra-oral stimuli activate taste receptors, somatosensory information is concurrently sent to the CNS. We review evidence that gustatory perception is intrinsically linked to concurrent somatosensory processing. We will show that processing of multisensory information can occur from the level of the taste cells through to the gustatory cortex. We will also focus on the fact that the same chemical and physical...

  1. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS...

  2. Evidence for the auditory P3a reflecting an automatic process: elicitation during highly-focused continuous visual attention.

    Science.gov (United States)

    Muller-Gass, Alexandra; Macdonald, Margaret; Schröger, Erich; Sculthorpe, Lauren; Campbell, Kenneth

    2007-09-19

    The P3a is an event-related potential (ERP) component believed to reflect an attention-switch to task-irrelevant stimuli or stimulus information. The present study concerns the automaticity of the processes underlying the auditory P3a. More specifically, we investigated whether the auditory P3a is an attention-independent component, that is, whether it can still be elicited under highly-focused selective attention to a different (visual) channel. Furthermore, we examined whether the auditory P3a can be modulated by the demands of the visual diversion task. Subjects performed a continuous visual tracking task that varied in difficulty, based on the number of objects to-be-tracked. Task-irrelevant auditory stimuli were presented at very rapid and random rates concurrently to the visual task. The auditory sequence included rare increments (+10 dB) and decrements (-20 dB) in intensity relative to the frequently-presented standard stimulus. Importantly, the auditory deviant stimuli elicited a significant P3a during the most difficult visual task, when conditions were optimised to prevent attentional slippage to the auditory channel. This finding suggests that the elicitation of the auditory P3a does not require available central capacity, and confirms the automatic nature of the processes underlying this ERP component. Moreover, the difficulty of the visual task did not modulate either the mismatch negativity (MMN) or the P3a but did have an effect on a late (350-400 ms) negativity, an ERP deflection perhaps related to a subsequent evaluation of the auditory change. Together, these results imply that the auditory P3a could reflect a strongly-automatic process, one that does not require and is not modulated by attention.

  3. Prior auditory information shapes visual category-selectivity in ventral occipito-temporal cortex.

    Science.gov (United States)

    Adam, Ruth; Noppeney, Uta

    2010-10-01

    Objects in our natural environment generate signals in multiple sensory modalities. This fMRI study investigated the influence of prior task-irrelevant auditory information on visually-evoked category-selective activations in the ventral occipito-temporal cortex. Subjects categorized pictures as landmarks or animal faces, while ignoring the preceding congruent or incongruent sound. Behaviorally, subjects responded slower to incongruent than congruent stimuli. At the neural level, the lateral and medial prefrontal cortices showed increased activations for incongruent relative to congruent stimuli consistent with their role in response selection. In contrast, the parahippocampal gyri combined visual and auditory information additively: activation was greater for visual landmarks than animal faces and landmark-related sounds than animal vocalizations resulting in increased parahippocampal selectivity for congruent audiovisual landmarks. Effective connectivity analyses showed that this amplification of visual landmark-selectivity was mediated by increased negative coupling of the parahippocampal gyrus with the superior temporal sulcus for congruent stimuli. Thus, task-irrelevant auditory information influences visual object categorization at two stages. In the ventral occipito-temporal cortex auditory and visual category information are combined additively to sharpen visual category-selective responses. In the left inferior frontal sulcus, as indexed by a significant incongruency effect, visual and auditory category information are integrated interactively for response selection. Copyright 2010 Elsevier Inc. All rights reserved.

  4. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218
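
    Envelope tracking of the kind quantified above is commonly measured by extracting the low-frequency temporal envelope of the speech and relating it to the neural signal. A minimal sketch using a Hilbert envelope and a plain correlation score, run on toy signals (an illustration, not the study's MEG pipeline):

      # Speech-envelope tracking sketch: Hilbert envelope, low-pass smoothing,
      # downsampling to the neural rate, then correlation with a neural trace.
      import numpy as np
      from scipy.signal import hilbert, butter, filtfilt

      def speech_envelope(audio, fs_audio, fs_neural, cutoff=8.0):
          env = np.abs(hilbert(audio))                      # broadband envelope
          b, a = butter(2, cutoff / (fs_audio / 2), "low")  # keep slow modulations
          env = filtfilt(b, a, env)
          step = int(fs_audio / fs_neural)                  # assumes integer ratio
          return env[::step]

      def tracking_score(envelope, neural):
          n = min(len(envelope), len(neural))
          return np.corrcoef(envelope[:n], neural[:n])[0, 1]

      # Toy check: a "neural" trace that partially follows the envelope.
      fs_a, fs_n = 16000, 100
      audio = np.random.default_rng(2).standard_normal(fs_a * 5)
      env = speech_envelope(audio, fs_a, fs_n)
      noise = np.random.default_rng(3).standard_normal(len(env))
      neural = 0.5 * env + env.std() * noise
      print(f"tracking r = {tracking_score(env, neural):.2f}")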

  5. Eye movement preparation causes spatially-specific modulation of auditory processing: new evidence from event-related brain potentials.

    Science.gov (United States)

    Gherri, Elena; Driver, Jon; Eimer, Martin

    2008-08-11

    To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events.

  6. Odors bias time perception in visual and auditory modalities

    Directory of Open Access Journals (Sweden)

    Zhenzhu eYue

    2016-04-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 ms or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a

  7. Odors Bias Time Perception in Visual and Auditory Modalities.

    Science.gov (United States)

    Yue, Zhenzhu; Gao, Tianyu; Chen, Lihan; Wu, Jiashuang

    2016-01-01

    Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in visual and auditory modalities. Participants were shown either a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in the control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for both visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short interval condition, but they produced shorter durations in the long interval condition. The effect sizes were larger for the auditory modality than those for the visual modality. Moreover, by comparing performance across the initial and the final blocks of the experiment, we found odor adaptation effects were mainly manifested as longer reproductions for the short time interval later in the adaptation phase, and there was a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, and they were constrained by different sensory modalities, valence of the emotional events, and target durations. Biases in time perception could be accounted for by a framework of

  8. Mismatch negativity in children with specific language impairment and auditory processing disorder

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2015-08-01

    INTRODUCTION: Mismatch negativity, an electrophysiological measure, evaluates the brain's capacity to discriminate sounds, regardless of attentional and behavioral capacity. Thus, this auditory event-related potential is promising in the study of the neurophysiological basis underlying auditory processing. OBJECTIVE: To investigate complex acoustic signals (speech) encoded in the auditory nervous system of children with specific language impairment and compare them with children with auditory processing disorders and typical development through the mismatch negativity paradigm. METHODS: It was a prospective study. 75 children (6-12 years) participated in this study: 25 children with specific language impairment, 25 with auditory processing disorders, and 25 with typical development. Mismatch negativity was obtained by subtracting the waves evoked by the stimuli /ga/ (frequent) and /da/ (rare). Measures of mismatch negativity latency and two amplitude measures were analyzed. RESULTS: It was possible to verify an absence of mismatch negativity in 16% of children with specific language impairment and 24% of children with auditory processing disorders. In the comparative analysis, the auditory processing disorder and specific language impairment groups showed higher latency values and lower amplitude values compared to typical development. CONCLUSION: These data demonstrate changes in the automatic discrimination of crucial acoustic components of speech sounds in children with specific language impairment and auditory processing disorders. This could indicate problems in the physiological processes responsible for ensuring the discrimination of acoustic contrasts at pre-attentional and pre-conscious levels, contributing to poor perception.
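
    The deviant-minus-standard subtraction described here is a simple operation on averaged epochs. A minimal sketch, assuming hypothetical epoch arrays, a 500-Hz sampling rate, and a 100-250 ms search window (all placeholders, not the study's parameters):

      # MMN difference wave: average deviant and standard epochs, subtract,
      # and pick the most negative point in a search window (all toy values).
      import numpy as np

      fs = 500                                   # Hz, assumed sampling rate
      t = np.arange(-0.1, 0.5, 1 / fs)           # epoch from -100 to 500 ms

      def mmn_from_epochs(std_epochs, dev_epochs, window=(0.10, 0.25)):
          """Epochs are (n_trials, n_samples); returns the difference wave
          plus MMN peak latency (s) and amplitude within the window."""
          diff = dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)
          mask = (t >= window[0]) & (t <= window[1])
          idx = np.argmin(diff[mask])            # MMN is a negativity
          return diff, t[mask][idx], diff[mask][idx]

      rng = np.random.default_rng(4)
      std = rng.standard_normal((200, len(t)))   # simulated /ga/ epochs
      dev = (rng.standard_normal((40, len(t)))
             - 0.5 * np.exp(-((t - 0.15) / 0.03) ** 2))   # simulated /da/ epochs
      diff, lat, amp = mmn_from_epochs(std, dev)
      print(f"MMN peak: {amp:.2f} (a.u.) at {lat * 1000:.0f} ms")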

  9. Activation of auditory white matter tracts as revealed by functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Tae, Woo Suk [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Yakunina, Natalia; Nam, Eui-Cheol [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Otolaryngology, School of Medicine, Chuncheon, Kangwon-do (Korea, Republic of); Kim, Tae Su [Kangwon National University Hospital, Department of Otolaryngology, Chuncheon (Korea, Republic of); Kim, Sam Soo [Kangwon National University, Neuroscience Research Institute, School of Medicine, Chuncheon (Korea, Republic of); Kangwon National University, Department of Radiology, School of Medicine, Chuncheon (Korea, Republic of)

    2014-07-15

    The ability of functional magnetic resonance imaging (fMRI) to detect activation in brain white matter (WM) is controversial. In particular, studies on the functional activation of WM tracts in the central auditory system are scarce. We utilized fMRI to assess and characterize the entire auditory WM pathway under robust experimental conditions involving the acquisition of a large number of functional volumes, the application of broadband auditory stimuli of high intensity, and the use of sparse temporal sampling to avoid scanner noise effects and increase signal-to-noise ratio. Nineteen healthy volunteers were subjected to broadband white noise in a block paradigm; each run had four sound-on/off alternations and was repeated nine times for each subject. Sparse sampling (TR = 8 s) was used. In addition to traditional gray matter (GM) auditory center activation, WM activation was detected in the isthmus and midbody of the corpus callosum (CC), tapetum, auditory radiation, lateral lemniscus, and decussation of the superior cerebellar peduncles. At the individual level, 13 of 19 subjects (68 %) had CC activation. Callosal WM exhibited a temporal delay of approximately 8 s in response to the stimulation compared with GM. These findings suggest that direct evaluation of the entire functional network of the central auditory system may be possible using fMRI, which may aid in understanding the neurophysiological basis of the central auditory system and in developing treatment strategies for various central auditory disorders. (orig.)
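
    The roughly one-TR (8 s) delay of the callosal white matter response reported above can be illustrated with a lagged-regressor check: shift a block regressor by candidate lags and keep the lag that correlates best with the voxel time series. A minimal sketch on toy traces matching the record's design (four sound-on/off alternations, TR = 8 s); this is not the authors' analysis code:

      # Lagged block-regressor sketch for a sparse-sampling design (TR = 8 s):
      # each block is 4 volumes on, 4 off, repeated 4 times; toy data only.
      import numpy as np

      TR = 8.0
      block = np.tile(np.r_[np.ones(4), np.zeros(4)], 4)   # 32 volumes

      def lagged_correlation(ts, lag_volumes):
          reg = np.roll(block, lag_volumes)
          reg[:lag_volumes] = 0                  # no activity before onset
          return np.corrcoef(reg, ts)[0, 1]

      rng = np.random.default_rng(7)
      gm_ts = block + 0.3 * rng.standard_normal(block.size)               # no lag
      wm_ts = np.roll(block, 1) + 0.3 * rng.standard_normal(block.size)   # ~8 s lag

      for name, ts in [("GM", gm_ts), ("WM", wm_ts)]:
          best = max(range(3), key=lambda k: lagged_correlation(ts, k))
          print(f"{name}: best lag = {best * TR:.0f} s")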

  10. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  11. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  12. Mismatch negativity in children with specific language impairment and auditory processing disorder.

    Science.gov (United States)

    Rocha-Muniz, Caroline Nunes; Befi-Lopes, Débora Maria; Schochat, Eliane

    2015-01-01

    Mismatch negativity, an electrophysiological measure, evaluates the brain's capacity to discriminate sounds, regardless of attentional and behavioral capacity. Thus, this auditory event-related potential is promising in the study of the neurophysiological basis underlying auditory processing. To investigate complex acoustic signals (speech) encoded in the auditory nervous system of children with specific language impairment and compare them with children with auditory processing disorders and typical development through the mismatch negativity paradigm. It was a prospective study. 75 children (6-12 years) participated in this study: 25 children with specific language impairment, 25 with auditory processing disorders, and 25 with typical development. Mismatch negativity was obtained by subtracting the waves evoked by the stimuli /ga/ (frequent) and /da/ (rare). Measures of mismatch negativity latency and two amplitude measures were analyzed. It was possible to verify an absence of mismatch negativity in 16% of children with specific language impairment and 24% of children with auditory processing disorders. In the comparative analysis, the auditory processing disorder and specific language impairment groups showed higher latency values and lower amplitude values compared to typical development. These data demonstrate changes in the automatic discrimination of crucial acoustic components of speech sounds in children with specific language impairment and auditory processing disorders. This could indicate problems in the physiological processes responsible for ensuring the discrimination of acoustic contrasts at pre-attentional and pre-conscious levels, contributing to poor perception. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  13. Experimental Evaluation of Auditory Cognition's Effects on Visual Cognition of Video

    Science.gov (United States)

    Kamitani, Tatsuo; Haruki, Kazuhito; Matsuda, Minoru

    This paper presents an experimental evaluation of the effects of auditory cognition on visual cognition of video. The influence of seven auditory stimuli on visual recognition is investigated based on experimental data from key-down operations. The key-down operations for locating a moving target by visual and auditory images were monitored by an experimental system built from devices including a VTR, CRT, and data recorder. Regression analysis and the EM algorithm were applied to the experimental data of 350 key-down operations, collected from 50 people and 7 auditory stimulus types. The following characteristic results concerning the influence of auditory stimuli on visual recognition were derived. Firstly, seven people responded too early in every experiment; the average and standard deviation of their response times were 439 ms and 231 ms, respectively. Secondly, the other forty-three people responded about 10 ms late in cases where auditory images were presented 30 ms or 60 ms before the visual images, and about 10 ms early in the other cases. Thirdly, because the visual image was the dominant information used in the key-down decision, no clear effects of auditory images on the key-down operation were measured. The averages and standard deviations of the distributions estimated by the EM algorithm for the 7 auditory stimulus types are considered and verified against Card's MHP model of human response.
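
    The EM analysis mentioned above is the kind of tool that separates the two response populations (premature versus regular responders). A minimal sketch fitting a two-component Gaussian mixture by EM to made-up response times; only the 439 ms / 231 ms figures come from the record, and the regular-responder component is hypothetical:

      # Two-component Gaussian mixture (fitted by EM) on 350 response times:
      # 49 premature responses (record's 439/231 ms) + 301 hypothetical ones.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(5)
      early = rng.normal(439, 231, 49)           # 7 people x 7 stimulus types
      regular = rng.normal(650, 40, 301)         # hypothetical 43 x 7 responses
      rts = np.concatenate([early, regular]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=2, random_state=0).fit(rts)
      for m, v, w in zip(gmm.means_.ravel(), gmm.covariances_.ravel(), gmm.weights_):
          print(f"component: mean {m:6.1f} ms, sd {np.sqrt(v):5.1f} ms, weight {w:.2f}")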

  14. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Humans tend to spontaneously align their movements in response to visual (e.g., a swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  15. Auditory Attraction: Activation of Visual Cortex by Music and Sound in Williams Syndrome

    Science.gov (United States)

    Thornton-Wells, Tricia A.; Cannistraci, Christopher J.; Anderson, Adam W.; Kim, Chai-Youn; Eapen, Mariam; Gore, John C.; Blake, Randolph; Dykens, Elisabeth M.

    2010-01-01

    Williams syndrome is a genetic neurodevelopmental disorder with a distinctive phenotype, including cognitive-linguistic features, nonsocial anxiety, and a strong attraction to music. We performed functional MRI studies examining brain responses to musical and other types of auditory stimuli in young adults with Williams syndrome and typically…

  16. The Nature of Auditory Discrimination Problems in Children with Specific Language Impairment: An MMN Study

    Science.gov (United States)

    Davids, Nina; Segers, Eliane; van den Brink, Danielle; Mitterer, Holger; van Balkom, Hans; Hagoort, Peter; Verhoeven, Ludo

    2011-01-01

    Many children with specific language impairment (SLI) show impairments in discriminating auditorily presented stimuli. The present study investigates whether these discrimination problems are speech specific or of a general auditory nature. This was studied using a linguistic and nonlinguistic contrast that were matched for acoustic complexity in…

  17. The Analysis and Treatment of Problem Behavior Evoked by Auditory Stimulation

    Science.gov (United States)

    Devlin, Sarah; Healy, Olive; Leader, Geraldine; Reed, Phil

    2008-01-01

    The current study aimed to identify specific stimuli associated with music that served as an establishing operation (EO) for the problem behavior of a 6-year-old child with a diagnosis of autism. Specific EOs for problem behavior evoked by auditory stimulation could be identified. A differential negative reinforcement procedure was implemented for…

  18. Effects of diazepam on auditory evoked potentials of rats elicited in a ten-tone paradigm

    NARCIS (Netherlands)

    Jongsma, M.L.A.; Rijn, C.M. van; Schaijk, W.J. van; Coenen, A.M.L.; Dirksen, R.

    2000-01-01

    The effect of diazepam on sensory gating was studied in rats, by measuring diazepam effects on Auditory Evoked Potentials (AEPs) elicited in a ten-tone paradigm. Trains of 10 repetitive tone-pip stimuli were presented. Rats (n=8) received 4 mg.kg-1 diazepam s.c. or vehicle, counterbalanced over two

  19. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    Full Text Available In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  20. Five- and Eight-Year-Old Children's Response to Auditory and Visual Distraction.

    Science.gov (United States)

    Hale, Gordon A.; Stevenson, Edward E., Jr.

    An assessment was made of 5- and 8-year-old children's performance on a short-term memory task under two auditory and two visual distraction conditions, as well as under a nondistraction condition. Performance under nondistraction was found to be superior to that under distraction (p<.001), indicating that the extraneous stimuli had a generally…

  1. Selective attention and the auditory vertex potential. 1: Effects of stimulus delivery rate

    Science.gov (United States)

    Schwent, V. L.; Hillyard, S. A.; Galambos, R.

    1975-01-01

    Enhancement of the auditory vertex potentials with selective attention to dichotically presented tone pips was found to be critically sensitive to the range of inter-stimulus intervals in use. Only at the shortest intervals was a clear-cut enhancement of the vertex potential observed for stimuli delivered to the attended ear.

  2. A Persian version of the sustained auditory attention capacity test and its results in normal children

    Directory of Open Access Journals (Sweden)

    Sanaz Soltanparast

    2013-03-01

    Full Text Available Background and Aim: Sustained attention refers to the ability to maintain attention to target stimuli over a sustained period of time. This study was conducted to develop a Persian version of the sustained auditory attention capacity test and to study its results in normal children. Methods: To develop the Persian version of the sustained auditory attention capacity test, speech stimuli were used, as in the original version. The speech stimuli consisted of one hundred monosyllabic words, produced by random repetition (20 times) of words from a 21-word list of monosyllabic words. The test was carried out at a comfortable hearing level using binaural, diotic presentation on 46 normal children of 7 to 11 years of age of both genders. Results: There was a significant relationship between age and the average impulsiveness error score (p=0.004) and the total score of the sustained auditory attention capacity test (p=0.005). No significant relationship was revealed between age and the average inattention error score or the attention reduction span index. Gender did not have a significant impact on the various indicators of the test. Conclusion: The results of this test on a group of normal-hearing children confirmed its ability to measure sustained auditory attention capacity through speech stimuli.
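
    For illustration only, here is a minimal Python sketch of how the two error types reported above are commonly scored in sustained auditory attention tests: an inattention error is a missed target word, and an impulsiveness error is a response to a non-target word. The target word and response pattern are hypothetical placeholders, not the Persian test materials.

        def score_attention_test(words, responses, target="non"):
            """words: presented word sequence; responses: bools, True = key pressed."""
            inattention = sum(1 for w, r in zip(words, responses) if w == target and not r)
            impulsiveness = sum(1 for w, r in zip(words, responses) if w != target and r)
            return inattention, impulsiveness

        words = ["dar", "non", "gol", "non", "sib"]    # hypothetical stimuli
        responses = [False, True, True, False, False]  # hypothetical key presses
        print(score_attention_test(words, responses))  # -> (1, 1)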

  3. Processing of Binaural Pitch Stimuli in Hearing-Impaired Listeners

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2009-01-01

    Binaural pitch is a tonal sensation produced by introducing a frequency-dependent interaural phase shift in binaurally presented white noise. As no spectral cues are present in the physical stimulus, binaural pitch perception is assumed to rely on accurate temporal fine structure coding and intact binaural integration mechanisms. This study investigated to what extent basic auditory measures of binaural processing as well as cognitive abilities are correlated with the ability of hearing-impaired listeners to perceive binaural pitch. Subjects from three groups (1: normal-hearing; 2: cochlear hearing loss; 3: retro-cochlear impairment) were asked to identify the pitch contour of series of five notes of equal duration, ranging from 523 to 784 Hz, played either with Huggins' binaural pitch stimuli (BP) or perceptually similar, but monaurally detectable, pitches (MP). All subjects from groups 1 and 2...

  4. Across-ear stimulus-specific adaptation in the auditory cortex

    Science.gov (United States)

    Xu, Xinxiu; Yu, Xiongjie; He, Jufang; Nelken, Israel

    2014-01-01

    The ability to detect unexpected or deviant events in natural scenes is critical for survival. In the auditory system, neurons from the midbrain to cortex adapt quickly to repeated stimuli but this adaptation does not fully generalize to other rare stimuli, a phenomenon called stimulus-specific adaptation (SSA). Most studies of SSA were conducted with pure tones of different frequencies, and it is by now well-established that SSA to tone frequency is strong and robust in auditory cortex. Here we tested SSA in the auditory cortex to the ear of stimulation using broadband noise. We show that cortical neurons adapt specifically to the ear of stimulation, and that the contrast between the responses to stimulation of the same ear when rare and when common depends on the binaural interaction class of the neurons. PMID:25126058
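
    A hedged sketch of how such a contrast is often quantified: the standard SSA index from the tone literature, (rare - common) / (rare + common), applied here by analogy to the two ears. This is an illustration of the general measure, not the authors' exact analysis.

        def ssa_index(resp_rare, resp_common):
            """(rare - common) / (rare + common); > 0 means a stronger deviant response."""
            return (resp_rare - resp_common) / (resp_rare + resp_common)

        # Example: spike counts for left-ear noise bursts when rare vs. when common.
        print(ssa_index(resp_rare=12.0, resp_common=8.0))   # -> 0.2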

  5. Across-ear stimulus-specific adaptation in the auditory cortex

    Directory of Open Access Journals (Sweden)

    Xinxiu eXu

    2014-07-01

    Full Text Available The ability to detect unexpected or deviant events in natural scenes is critical for survival. In the auditory system, neurons from the midbrain to cortex adapt quickly to repeated stimuli, but this adaptation does not fully generalize to other, rare stimuli, a phenomenon called stimulus-specific adaptation (SSA). Most studies of SSA were conducted with pure tones of different frequencies, and it is by now well-established that SSA to tone frequency is strong and robust in auditory cortex. Here we tested SSA in the auditory cortex to the ear of stimulation using broadband noise. We show that cortical neurons adapt specifically to the ear of stimulation, and that the contrast between the responses to stimulation of the same ear when rare and when common depends on the binaural interaction class of the neurons.

  6. Auditory and Visual Electrophysiology of Deaf Children with Cochlear Implants: Implications for Cross-modal Plasticity

    Science.gov (United States)

    Corina, David P.; Blau, Shane; LaMarr, Todd; Lawyer, Laurel A.; Coffey-Corina, Sharon

    2017-01-01

    Deaf children who receive a cochlear implant early in life and engage in intensive oral/aural therapy often make great strides in spoken language acquisition. However, despite clinicians' best efforts, there is a great deal of variability in language outcomes. One concern is that cortical regions which normally support auditory processing may become reorganized for visual function, leaving fewer available resources for auditory language acquisition. The conditions under which these changes occur are not well understood, but we may begin investigating this phenomenon by looking for interactions between auditory and visual evoked cortical potentials in deaf children. If children with abnormal auditory responses show increased sensitivity to visual stimuli, this may indicate the presence of maladaptive cortical plasticity. We recorded evoked potentials, using both auditory and visual paradigms, from 25 typical hearing children and 26 deaf children (ages 2–8 years) with cochlear implants. An auditory oddball paradigm was used (85% /ba/ syllables vs. 15% frequency modulated tone sweeps) to elicit an auditory P1 component. Visual evoked potentials (VEPs) were recorded during presentation of an intermittent peripheral radial checkerboard while children watched a silent cartoon, eliciting a P1–N1 response. We observed reduced auditory P1 amplitudes and a lack of the latency shift associated with normative aging in our deaf sample. We also observed shorter latencies in N1 VEPs to visual stimulus offset in deaf participants. While these data demonstrate cortical changes associated with auditory deprivation, we did not find evidence for a relationship between cortical auditory evoked potentials and the VEPs. This is consistent with descriptions of intra-modal plasticity within the visual systems of deaf children, but does not provide evidence for cross-modal plasticity. In addition, we note that sign language experience had no effect on deaf children's early auditory and visual

  7. Auditory interference control in children with learning disability: An exploratory study.

    Science.gov (United States)

    Thomas, Roha M; Kaipa, Ramesh; Ganesh, Attigodu Chandrashekara

    2015-12-01

    The current study aimed to compare the auditory interference control of participants with Learning Disability (LD) to a control group on two versions of an auditory Stroop task. A group of eight children with LD (clinical group) and another group of eight typically developing children (control group) served as participants. All the participants were involved in a semantic and a gender identification-based auditory Stroop task. Each participant was presented with eight different words (10 times) that were pre-recorded by a male and a female speaker. The semantic task required the participants to ignore the speaker's gender and attend to the meaning of the word, and vice-versa for the gender identification task. The participants' performance accuracy and reaction time (RT) was measured on both the tasks. Control group participants significantly outperformed the clinical group participants on both the tasks with regard to performance accuracy as well as RT. The results suggest that children with LD have problems in suppressing irrelevant auditory stimuli and focusing on the relevant auditory stimuli. This can be attributed to the auditory processing problems in these children. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. Cortical Auditory-Evoked Responses in Preterm Neonates: Revisited by Spectral and Temporal Analyses.

    Science.gov (United States)

    Kaminska, A; Delattre, V; Laschet, J; Dubois, J; Labidurie, M; Duval, A; Manresa, A; Magny, J-F; Hovhannisyan, S; Mokhtari, M; Ouss, L; Boissel, A; Hertz-Pannier, L; Sintsov, M; Minlebaev, M; Khazipov, R; Chiron, C

    2017-08-11

    Characteristic preterm EEG patterns of "delta-brushes" (DBs) have been reported in the temporal cortex following auditory stimuli, but their spatio-temporal dynamics remain elusive. Using 32-electrode EEG recordings and co-registration of electrode positions to 3D-MRIs of age-matched neonates, we explored the cortical auditory-evoked responses (AERs) after 'click' stimuli in 30 healthy neonates aged 30-38 post-menstrual weeks (PMW). (1) We visually identified auditory-evoked DBs within AERs in all the babies between 30 and 33 PMW, and a decreasing response rate afterwards. (2) The AERs showed an increase in EEG power from the delta to gamma frequency bands over the middle and posterior temporal regions, with higher values in quiet sleep and on the right. (3) Time-frequency and averaging analyses showed that the delta component of DBs, which negatively peaked around 550 and 750 ms over the middle and posterior temporal regions, respectively, was superimposed with fast (alpha-gamma) oscillations and corresponded to the late part of the cortical auditory-evoked potential (CAEP), a feature missed when using classical CAEP processing. As the evoked-DB rate and the delta-to-alpha power of the AERs decreased until full term, auditory-evoked DBs are associated with the prenatal development of auditory processing and may suggest an early emerging hemispheric specialization. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
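
    A minimal Python sketch of a time-frequency view of an evoked epoch, in the spirit of the delta-to-gamma analysis described above. It uses a plain spectrogram rather than the authors' wavelet/averaging pipeline, and the sampling rate, epoch length, and synthetic signal are assumptions.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 256                                   # assumed sampling rate (Hz)
        t = np.arange(0, 2.0, 1 / fs)              # 2-s epoch after the click
        rng = np.random.default_rng(2)
        eeg = np.sin(2 * np.pi * 1.5 * t) + 0.5 * rng.standard_normal(t.size)

        # Short-time power; note the delta estimate is coarse with short windows.
        f, tt, Sxx = spectrogram(eeg, fs=fs, nperseg=128, noverlap=96)
        delta = Sxx[(f >= 0.5) & (f < 4)].mean()
        gamma = Sxx[(f >= 30) & (f < 80)].mean()
        print(f"mean delta power: {delta:.3f}, mean gamma power: {gamma:.3f}")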

  9. Auditory and visual interhemispheric communication in musicians and non-musicians.

    Science.gov (United States)

    Woelfle, Rebecca; Grahn, Jessica A

    2013-01-01

    The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.
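
    For concreteness, the crossed-uncrossed difference is just an arithmetic contrast of mean reaction times; a minimal Python sketch with made-up values:

        import numpy as np

        crossed = np.array([312.0, 305.0, 298.0, 321.0])    # RTs (ms), contralateral hemisphere
        uncrossed = np.array([308.0, 301.0, 296.0, 315.0])  # RTs (ms), same hemisphere
        cud = crossed.mean() - uncrossed.mean()
        print(f"CUD = {cud:.1f} ms")   # taken as an estimate of interhemispheric transfer time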

  10. Auditory and visual interhemispheric communication in musicians and non-musicians.

    Directory of Open Access Journals (Sweden)

    Rebecca Woelfle

    Full Text Available The corpus callosum (CC) is a brain structure composed of axon fibres linking the right and left hemispheres. Musical training is associated with larger midsagittal cross-sectional area of the CC, suggesting that interhemispheric communication may be faster in musicians. Here we compared interhemispheric transmission times (ITTs) for musicians and non-musicians. ITT was measured by comparing simple reaction times to stimuli presented to the same hemisphere that controlled a button-press response (uncrossed reaction time), or to the contralateral hemisphere (crossed reaction time). Both visual and auditory stimuli were tested. We predicted that the crossed-uncrossed difference (CUD) for musicians would be smaller than for non-musicians as a result of faster interhemispheric transfer times. We did not expect a difference in CUDs between the visual and auditory modalities for either musicians or non-musicians, as previous work indicates that interhemispheric transfer may happen through the genu of the CC, which contains motor fibres rather than sensory fibres. There were no significant differences in CUDs between musicians and non-musicians. However, auditory CUDs were significantly smaller than visual CUDs. Although this auditory-visual difference was larger in musicians than non-musicians, the interaction between modality and musical training was not significant. Therefore, although musical training does not significantly affect ITT, the crossing of auditory information between hemispheres appears to be faster than visual information, perhaps because subcortical pathways play a greater role for auditory interhemispheric transfer.

  11. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  12. Auditory object cognition in dementia

    Science.gov (United States)

    Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.

    2011-01-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  13. Context updates are hierarchical

    Directory of Open Access Journals (Sweden)

    Anton Karl Ingason

    2016-10-01

    Full Text Available This squib studies the order in which elements are added to the shared context of interlocutors in a conversation. It focuses on context updates within one hierarchical structure and argues that structurally higher elements are entered into the context before lower elements, even if the structurally higher elements are pronounced after the lower elements. The crucial data are drawn from a comparison of relative clauses in two head-initial languages, English and Icelandic, and two head-final languages, Korean and Japanese. The findings have consequences for any theory of a dynamic semantics.

  14. Auditory Reserve and the Legacy of Auditory Experience

    OpenAIRE

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...

  15. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli differing in timbre, reflecting its multidimensional nature. We investigated the auditory response and sensory gating as well, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Two tones of the same or different timbre were presented as a pair in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. As a result, the magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result might support the idea that timbre, at least as varied by phasing and clipping, is discriminated in early auditory processing. An effect of S1 on the second response in a pair occurred in the M100 of the left hemisphere, whereas both M50 and M100 responses to S2 in the right hemisphere alone reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with the presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, it was revealed that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  16. Detecting Hierarchical Structure in Networks

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

    Many real-world networks exhibit hierarchical organization. Previous models of hierarchies within relational data have focused on binary trees; however, for many networks it is unknown whether there is hierarchical structure, and if there is, a binary tree might not account well for it. We propose a generative Bayesian model that is able to infer whether hierarchies are present or not from a hypothesis space encompassing all types of hierarchical tree structures. For efficient inference we propose a collapsed Gibbs sampling procedure that jointly infers a partition and its hierarchical structure. On synthetic and real data we demonstrate that our model can detect hierarchical structure, leading to better link-prediction than competing models. Our model can be used to detect if a network exhibits hierarchical structure, thereby leading to a better comprehension and statistical account of the network.
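
    The collapsed-Gibbs Bayesian model itself is beyond a short sketch; as a lightweight, admittedly different stand-in, the following Python snippet asks how tree-like a data set is by computing the cophenetic correlation of a hierarchical clustering, where values near 1 suggest hierarchy-like structure. All data and parameter choices here are illustrative.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, cophenet
        from scipy.spatial.distance import pdist

        rng = np.random.default_rng(3)
        # Two tight clusters of points: strongly hierarchy-like toy data.
        data = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
        dists = pdist(data)
        Z = linkage(dists, method="average")
        c, _ = cophenet(Z, dists)   # correlation between tree and raw distances
        print(f"cophenetic correlation: {c:.2f}")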

  17. Biomimetic Sonar for Electrical Activation of the Auditory Pathway

    Directory of Open Access Journals (Sweden)

    D. Menniti

    2017-01-01

    Full Text Available Relying on the mechanism of the bat echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensory stimuli. By means of implanted electrodes, acoustical information about the external environment, generated by a biomimetic system and converted into electrical signals, was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral activity response is highly dependent on the information carried by the ultrasounds and is frequency-locked with the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that sensory information is successfully transferred and integrated. In addition, principal component analysis highlights how all the stimuli generate patterns of neural activity which can be clearly classified. The results show that the brain response is modulated by echo signal features, suggesting that spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results open new perspectives in artificial environmental perception, which could be used for developing new techniques useful in treating pathological conditions or influencing our perception of the surroundings.

  18. Early hominin auditory capacities.

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  19. Early hominin auditory capacities

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  20. Silent music reading: auditory imagery and visuotonal modality transfer in singers and non-singers.

    Science.gov (United States)

    Hoppe, Christian; Splittstößer, Christoph; Fliessbach, Klaus; Trautner, Peter; Elger, Christian E; Weber, Bernd

    2014-11-01

    In daily life, responses are often facilitated by anticipatory imagery of expected targets which are announced by associated stimuli from different sensory modalities. Silent music reading represents an intriguing case of visuotonal modality transfer in working memory as it induces highly defined auditory imagery on the basis of presented visuospatial information (i.e. musical notes). Using functional MRI and a delayed sequence matching-to-sample paradigm, we compared brain activations during retention intervals (10s) of visual (VV) or tonal (TT) unimodal maintenance versus visuospatial-to-tonal modality transfer (VT) tasks. Visual or tonal sequences were comprised of six elements, white squares or tones, which were low, middle, or high regarding vertical screen position or pitch, respectively (presentation duration: 1.5s). For the cross-modal condition (VT, session 3), the visuospatial elements from condition VV (session 1) were re-defined as low, middle or high "notes" indicating low, middle or high tones from condition TT (session 2), respectively, and subjects had to match tonal sequences (probe) to previously presented note sequences. Tasks alternately had low or high cognitive load. To evaluate possible effects of music reading expertise, 15 singers and 15 non-musicians were included. Scanner task performance was excellent in both groups. Despite identity of applied visuospatial stimuli, visuotonal modality transfer versus visual maintenance (VT>VV) induced "inhibition" of visual brain areas and activation of primary and higher auditory brain areas which exceeded auditory activation elicited by tonal stimulation (VT>TT). This transfer-related visual-to-auditory activation shift occurred in both groups but was more pronounced in experts. Frontoparietal areas were activated by higher cognitive load but not by modality transfer. The auditory brain showed a potential to anticipate expected auditory target stimuli on the basis of non-auditory information and

  1. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  2. Nested and Hierarchical Archimax copulas

    KAUST Repository

    Hofert, Marius

    2017-07-03

    The class of Archimax copulas is generalized to nested and hierarchical Archimax copulas in several ways. First, nested extreme-value copulas or nested stable tail dependence functions are introduced to construct nested Archimax copulas based on a single frailty variable. Second, a hierarchical construction of d-norm generators is presented to construct hierarchical stable tail dependence functions and thus hierarchical extreme-value copulas. Moreover, one can, by itself or additionally, introduce nested frailties to extend Archimax copulas to nested Archimax copulas in a similar way as nested Archimedean copulas extend Archimedean copulas. Further results include a general formula for the density of Archimax copulas.
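
    For reference, under the usual definition from the general copula literature (stated here as background, not quoted from the paper), an Archimax copula combines an Archimedean generator \psi with a stable tail dependence function \ell:

        C(u_1, \dots, u_d) = \psi\!\left( \ell\left( \psi^{-1}(u_1), \dots, \psi^{-1}(u_d) \right) \right)

    Taking \ell(x_1, \dots, x_d) = x_1 + \dots + x_d recovers the Archimedean copula with generator \psi, while taking \psi(t) = e^{-t} recovers the extreme-value copula with stable tail dependence function \ell; the constructions in the abstract nest or hierarchically structure these two ingredients.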

  3. A hierarchical stochastic model for bistable perception.

    Directory of Open Access Journals (Sweden)

    Stefan Albert

    2017-11-01

    Full Text Available Viewing of ambiguous stimuli can lead to bistable perception alternating between the possible percepts. During continuous presentation of ambiguous stimuli, percept changes occur as single events, whereas during intermittent presentation of ambiguous stimuli, percept changes occur at more or less regular intervals either as single events or bursts. Response patterns can be highly variable and have been reported to show systematic differences between patients with schizophrenia and healthy controls. Existing models of bistable perception often use detailed assumptions and large parameter sets which make parameter estimation challenging. Here we propose a parsimonious stochastic model that provides a link between empirical data analysis of the observed response patterns and detailed models of underlying neuronal processes. Firstly, we use a Hidden Markov Model (HMM) for the times between percept changes, which assumes one single state in continuous presentation and a stable and an unstable state in intermittent presentation. The HMM captures the observed differences between patients with schizophrenia and healthy controls, but remains descriptive. Therefore, we secondly propose a hierarchical Brownian model (HBM), which produces similar response patterns but also provides a relation to potential underlying mechanisms. The main idea is that neuronal activity is described as an activity difference between two competing neuronal populations reflected in Brownian motions with drift. This differential activity generates switching between the two conflicting percepts and between stable and unstable states with similar mechanisms on different neuronal levels. With only a small number of parameters, the HBM can be fitted closely to a high variety of response patterns and captures group differences between healthy controls and patients with schizophrenia. At the same time, it provides a link to mechanistic models of bistable perception, linking the group
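
    A toy Python sketch of the differential-activity idea described above: a Brownian motion with drift whose threshold crossings trigger percept switches. The parameters and time units are illustrative assumptions, not fitted values from the paper.

        import numpy as np

        rng = np.random.default_rng(4)
        dt, drift, sigma, threshold = 0.01, 0.05, 1.0, 2.0
        x, percept, switch_times, t = 0.0, +1, [], 0.0
        for _ in range(200_000):
            t += dt
            # Drift pushes differential activity toward the dominant percept's boundary.
            x += percept * drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            if abs(x) >= threshold:
                switch_times.append(t)   # record a percept change
                percept = -percept       # the other percept takes over
                x = 0.0                  # reset differential activity
        durations = np.diff(switch_times)
        print(f"{len(switch_times)} switches, median dominance {np.median(durations):.1f} time units")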

  4. Development of visuo-auditory integration in space and time

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2012-09-01

    Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. Conversely, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that adult-like visual-auditory behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison with vision in the spatial visuo-audio task and a cross-sensory comparison with audition in the temporal visuo-audio task.

  5. On the Relevance of Natural Stimuli for the Study of Brainstem Correlates: The Example of Consonance Perception.

    Directory of Open Access Journals (Sweden)

    Marion Cousineau

    Full Text Available Some combinations of musical tones sound pleasing to Western listeners, and are termed consonant, while others sound discordant, and are termed dissonant. The perceptual phenomenon of consonance has been traced to the acoustic property of harmonicity. It has been repeatedly shown that neural correlates of consonance can be found as early as the auditory brainstem, as reflected in the harmonicity of the scalp-recorded frequency-following response (FFR). "Neural Pitch Salience" (NPS) measured from FFRs, essentially a time-domain equivalent of the classic pattern recognition models of pitch, has been found to correlate with behavioral judgments of consonance for synthetic stimuli. Following the idea that the auditory system has evolved to process behaviorally relevant natural sounds, and in order to test the generalizability of this finding made with synthetic tones, we recorded FFRs for consonant and dissonant intervals composed of synthetic and natural stimuli. We found that NPS correlated with behavioral judgments of consonance and dissonance for synthetic but not for naturalistic sounds. These results suggest that while some form of harmonicity can be computed from the auditory brainstem response, the general percept of consonance and dissonance is not captured by this measure. It might either be represented in the brainstem in a different code (such as a place code) or arise at higher levels of the auditory pathway. Our findings further illustrate the importance of using natural sounds, as a complementary tool to fully-controlled synthetic sounds, when probing auditory perception.
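
    A hedged Python sketch of a time-domain pitch-salience measure in the spirit of NPS: autocorrelate the waveform and take the largest normalized peak within the plausible pitch-period range. This follows generic autocorrelation pitch models, not necessarily the authors' exact implementation.

        import numpy as np

        def pitch_salience(x, fs, fmin=80.0, fmax=1000.0):
            """Largest normalized autocorrelation peak over plausible pitch periods."""
            x = x - x.mean()
            ac = np.correlate(x, x, mode="full")[x.size - 1:]   # non-negative lags
            ac = ac / ac[0]                                     # normalize so r(0) = 1
            lags = np.arange(ac.size) / fs
            mask = (lags >= 1 / fmax) & (lags <= 1 / fmin)
            return ac[mask].max()

        fs = 16000
        t = np.arange(0, 0.2, 1 / fs)
        harmonic = np.sin(2 * np.pi * 220 * t) + np.sin(2 * np.pi * 440 * t)  # octave interval
        print(f"salience: {pitch_salience(harmonic, fs):.2f}")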

  6. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks on which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  7. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    Science.gov (United States)

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) open eyes resting, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (an 8:30 minute delay) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.

  8. Presenting multiple auditory signals using multiple sound cards in Visual Basic 6.0.

    Science.gov (United States)

    Chan, Jason S; Spence, Charles

    2003-02-01

    In auditory research, it is often desirable to present more than two auditory stimuli at any one time. Although the technology has been available for some time, the majority of researchers have not utilized it. This article provides a simple means of presenting multiple, concurrent, independent auditory events, using two or more different sound cards installed within a single computer. By enabling the presentation of more auditory events, we can hope to gain a better understanding of the cognitive and attentional processes operating under more complex and realistic scenes, such as that embodied by the cocktail party effect. The software requirements are Windows 98SR2/Me/NT4/2000/XP, Visual Basic 6.0, and DirectX 7.0 or above. The hardware requirements are a Pentium II, 128 MB RAM, and two or more different sound cards.
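
    The article targets VB6 with DirectX; as a present-day analogue, here is a hedged Python sketch using the sounddevice library that opens one output stream per sound card. The device indices are assumptions; query sounddevice.query_devices() for the indices on a given machine, and note that stream writes block, so true overlap needs one thread per stream.

        import numpy as np
        import sounddevice as sd

        fs = 44100
        t = np.arange(0, 1.0, 1 / fs)
        tone_a = (0.2 * np.sin(2 * np.pi * 440 * t)).astype(np.float32).reshape(-1, 1)
        tone_b = (0.2 * np.sin(2 * np.pi * 660 * t)).astype(np.float32).reshape(-1, 1)

        # One output stream per sound card (device indices 0 and 1 are placeholders).
        with sd.OutputStream(device=0, samplerate=fs, channels=1) as stream_a, \
             sd.OutputStream(device=1, samplerate=fs, channels=1) as stream_b:
            stream_a.write(tone_a)   # blocking write to the first card
            stream_b.write(tone_b)   # run each write in its own thread for concurrency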

  9. Auditory processing in children with language-based learning problems: a magnetencephalography study.

    Science.gov (United States)

    Diedler, Jennifer; Pietz, Joachim; Brunner, Monika; Hornberger, Cornelia; Bast, Thomas; Rupp, André

    2009-06-17

    We examined basic auditory temporal processing in children with language-based learning problems (LPs) applying magnetencephalography. Auditory-evoked fields of 43 children (27 LP, 16 controls) were recorded while passively listening to 100-ms white noise bursts with temporal gaps of 3, 6, 10 and 30 ms inserted after 5 or 50 ms. The P1m was evaluated by spatio-temporal source analysis. Psychophysical gap-detection thresholds were obtained for the same participants. Thirty-two percent of the LP children were not able to perform the early gap psychoacoustic task. In addition, LP children displayed a significant delay of the P1m during the early gap task. These findings provide evidence for a diminished neuronal representation of short auditory stimuli in the primary auditory cortex of LP children.

  10. Single neuron and population coding of natural sounds in auditory cortex.

    Science.gov (United States)

    Mizrahi, Adi; Shalev, Amos; Nelken, Israel

    2014-02-01

    The auditory system drives behavior using information extracted from sounds. Early in the auditory hierarchy, circuits are highly specialized for detecting basic sound features. However, already at the level of the auditory cortex the functional organization of the circuits and the underlying coding principles become different. Here, we review some recent progress in our understanding of single neuron and population coding in primary auditory cortex, focusing on natural sounds. We discuss possible mechanisms explaining why single neuron responses to simple sounds cannot predict responses to natural stimuli. We describe recent work suggesting that structural features like local subnetworks rather than smoothly mapped tonotopy are essential components of population coding. Finally, we suggest a synthesis of how single neurons and subnetworks may be involved in coding natural sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Long-term memory of hierarchical relationships in free-living greylag geese

    NARCIS (Netherlands)

    Weiss, Brigitte M.; Scheiber, Isabella B. R.

    Animals may memorise spatial and social information for many months and even years. Here, we investigated long-term memory of hierarchically ordered relationships, where the position of a reward depended on the relationship of a stimulus relative to other stimuli in the hierarchy. Seventeen greylag

  12. Cortical Evoked Potentials and Hearing Aids in Individuals with Auditory Dys-Synchrony.

    Science.gov (United States)

    Yuvaraj, Pradeep; Mannarukrishnaiah, Jayaram

    2015-12-01

    The purpose of the present study was to investigate the relationship between cortical processing of speech and benefit from hearing aids in individuals with auditory dys-synchrony. Data were collected from 38 individuals with auditory dys-synchrony. Participants were selected based on hearing thresholds, middle ear reflexes, otoacoustic emissions, and auditory brain stem responses. Cortical evoked potentials were recorded for click and speech stimuli. Participants with auditory dys-synchrony were fitted with bilateral multichannel wide dynamic range compression hearing aids. Aided and unaided speech identification scores for 40 words were obtained for each participant. Hierarchical cluster analysis using Ward's method clearly showed four subgroups of participants with auditory dys-synchrony based on the hearing aid benefit score (aided minus unaided speech identification score). The mean aided and unaided speech identification scores differed significantly in participants with auditory dys-synchrony. However, the mean unaided speech identification scores were not significantly different between the four subgroups. The N2 amplitude and P1 latency of the speech-evoked cortical potentials were significantly different between the four subgroups formed based on hearing aid benefit scores. The results indicate that there are subgroups of individuals with auditory dys-synchrony who benefit from hearing aids. Individuals who benefitted from hearing aids showed decreased N2 amplitudes compared with those who did not. N2 amplitude is associated with greater suppression of background noise while processing speech.
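
    A minimal Python sketch of the grouping step described above: Ward's hierarchical clustering of hearing aid benefit scores (aided minus unaided) cut into four subgroups. The scores below are fabricated placeholders, not the study's data.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        aided = np.array([55, 60, 30, 32, 80, 78, 10, 12], dtype=float)
        unaided = np.array([30, 33, 25, 26, 40, 42, 9, 10], dtype=float)
        benefit = (aided - unaided).reshape(-1, 1)   # hearing aid benefit scores

        Z = linkage(benefit, method="ward")
        groups = fcluster(Z, t=4, criterion="maxclust")   # cut the tree into 4 subgroups
        print(groups)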

  13. Auditory free classification of nonnative speech.

    Science.gov (United States)

    Atagi, Eriko; Bent, Tessa

    2013-11-01

    Through experience with speech variability, listeners build categories of indexical speech characteristics, including categories for talker, gender, and dialect. The auditory free classification task, a task in which listeners freely group talkers based on audio samples, has been a useful tool for examining listeners' representations of some of these characteristics, including regional dialects and different languages. The free classification task was employed in the current study to examine the perceptual representation of nonnative speech. The category structure and salient perceptual dimensions of nonnative speech were investigated from two perspectives: general similarity and perceived native language background. Talker intelligibility and whether native talkers were included were manipulated to test stimulus set effects. Results showed that degree of accent was a highly salient feature of nonnative speech for classification based on general similarity and on perceived native language background. This salience, however, was attenuated when listeners were listening to highly intelligible stimuli and attending to the talkers' native language backgrounds. These results suggest that the context in which nonnative speech stimuli are presented, such as the listeners' attention to the talkers' native language and the variability of stimulus intelligibility, can influence listeners' perceptual organization of nonnative speech.

  14. CORTICAL RESPONSES TO SALIENT NOCICEPTIVE AND NOT NOCICEPTIVE STIMULI IN VEGETATIVE AND MINIMAL CONSCIOUS STATE

    Directory of Open Access Journals (Sweden)

    MARINA eDE TOMMASO

    2015-01-01

    Full Text Available Aims: Questions regarding perception of pain in non-communicating patients and the management of pain continue to raise controversy at both a clinical and an ethical level. The aim of this study was to examine the cortical response to salient multimodal visual, acoustic, somatosensory electric non-nociceptive, and nociceptive laser stimuli and their correlation with the clinical evaluation. Methods: Five Vegetative State (VS) and 4 Minimally Conscious State (MCS) patients and 11 age- and sex-matched controls were examined. Evoked responses were obtained by 64 scalp electrodes while delivering auditory, visual, non-noxious electrical and noxious laser stimulation, which were randomly presented every 10 sec. Laser, somatosensory, auditory and visual evoked responses were identified as a negative-positive (N2-P2) vertex complex in the 500 msec post-stimulus time. We used the Nociception Coma Scale-Revised (NCS-R) and Coma Recovery Scale (CRS-R) for clinical evaluation of pain perception and consciousness impairment. Results: The laser evoked potentials (LEPs) were recognizable in all cases. Only one MCS patient showed a reliable cortical response to all the employed stimulus modalities. One VS patient did not present cortical responses to any other stimulus modality. In the remaining participants, auditory, visual and electrical related potentials were inconstantly present. Significant N2 and P2 latency prolongation occurred in both VS and MCS patients. The presence of a reliable cortical response to auditory, visual and electric stimuli was able to correctly classify VS and MCS patients with 90% accuracy. Laser P2 and N2 amplitudes were not correlated with the CRS-R and NCS-R scores, while auditory and electric related potential amplitudes were associated with the motor response to pain and consciousness recovery. Discussion: Pain arousal may be a primary function also in vegetative state patients, while the relevance of other stimulus modalities may indicate the

  15. Nonverbal auditory agnosia with lesion to Wernicke's area.

    Science.gov (United States)

    Saygin, Ayse Pinar; Leech, Robert; Dick, Frederic

    2010-01-01

    We report the case of patient M, who suffered unilateral left posterior temporal and parietal damage, brain regions typically associated with language processing. Language function largely recovered since the infarct, with no measurable speech comprehension impairments. However, the patient exhibited a severe impairment in nonverbal auditory comprehension. We carried out extensive audiological and behavioral testing in order to characterize M's unusual neuropsychological profile. We also examined the patient's and controls' neural responses to verbal and nonverbal auditory stimuli using functional magnetic resonance imaging (fMRI). We verified that the patient exhibited persistent and severe auditory agnosia for nonverbal sounds in the absence of verbal comprehension deficits or peripheral hearing problems. Acoustical analyses suggested that his residual processing of a minority of environmental sounds might rely on his speech processing abilities. In the patient's brain, contralateral (right) temporal cortex as well as perilesional (left) anterior temporal cortex were strongly responsive to verbal, but not to nonverbal sounds, a pattern that stands in marked contrast to the controls' data. This substantial reorganization of auditory processing likely supported the recovery of M's speech processing.

  16. Visual Timing of Structured Dance Movements Resembles Auditory Rhythm Perception

    Directory of Open Access Journals (Sweden)

    Yi-Huang Su

    2016-01-01

    Full Text Available Temporal mechanisms for processing auditory musical rhythms are well established, in which a perceived beat is beneficial for timing purposes. It is yet unknown whether such beat-based timing would also underlie visual perception of temporally structured, ecological stimuli connected to music: dance. In this study, we investigated whether observers extracted a visual beat when watching dance movements to assist visual timing of these movements. Participants watched silent videos of dance sequences and reproduced the movement duration by mental recall. We found better visual timing for limb movements with regular patterns in the trajectories than without, similar to the beat advantage for auditory rhythms. When movements involved both the arms and the legs, the benefit of a visual beat relied only on the latter. The beat-based advantage persisted despite auditory interferences that were temporally incongruent with the visual beat, arguing for the visual nature of these mechanisms. Our results suggest that visual timing principles for dance parallel their auditory counterparts for music, which may be based on common sensorimotor coupling. These processes likely yield multimodal rhythm representations in the scenario of music and dance.

  17. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  18. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular

  19. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  20. Trees and Hierarchical Structures

    CERN Document Server

    Haeseler, Arndt

    1990-01-01

    The "raison d'etre" of hierarchical dustering theory stems from one basic phe­ nomenon: This is the notorious non-transitivity of similarity relations. In spite of the fact that very often two objects may be quite similar to a third without being that similar to each other, one still wants to dassify objects according to their similarity. This should be achieved by grouping them into a hierarchy of non-overlapping dusters such that any two objects in ~ne duster appear to be more related to each other than they are to objects outside this duster. In everyday life, as well as in essentially every field of scientific investigation, there is an urge to reduce complexity by recognizing and establishing reasonable das­ sification schemes. Unfortunately, this is counterbalanced by the experience of seemingly unavoidable deadlocks caused by the existence of sequences of objects, each comparatively similar to the next, but the last rather different from the first.

  1. Auditory lateralization of conspecific and heterospecific vocalizations in cats.

    Science.gov (United States)

    Siniscalchi, Marcello; Laddago, Serena; Quaranta, Angelo

    2016-01-01

    Auditory lateralization in response to both conspecific and heterospecific (dog) vocalizations was observed in 16 tabby cats (Felis catus). Six different vocalizations were used: cat "purring," "meowing" and "growling," and dog vocalizations typical of "disturbance," "isolation" and "play." The head-orienting paradigm showed that cats turned their head with the right ear leading (left hemisphere activation) in response to their species-typical vocalizations ("meow" and "purring"); on the other hand, a clear bias in the use of the left ear (right hemisphere activation) was observed in response to vocalizations eliciting intense emotion (dog vocalizations of "disturbance" and "isolation"). Overall, these findings suggest that the auditory sensory domain is lateralized in cats as well, stressing the role of the left hemisphere in intraspecific communication and of the right hemisphere in processing threatening and alarming stimuli.

  2. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2017-11-23

    The present study tested whether or not functional adaptations following congenital blindness are maintained in humans after sight restoration and whether they interfere with visual recovery. In permanently congenitally blind individuals, both intramodal plasticity (e.g. changes in auditory cortex) and crossmodal plasticity (e.g. an activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to what degree visual motion processing is reinstated. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting from several months to several years. They, as well as two control groups, one with visual impairments and one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal to noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing, suggesting that at least some compensatory behavioral adaptation as a consequence of a complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract-reversal individuals, and they did not show ERPs modulated by visual motion coherence as observed in both control groups.

  3. Auditory motion in depth is preferentially 'captured' by visual looming signals.

    Science.gov (United States)

    Harrison, Neil

    2012-01-01

    The phenomenon of crossmodal dynamic visual capture occurs when the direction of motion of a visual cue causes a weakening or reversal of the perceived direction of motion of a concurrently presented auditory stimulus. It is known that there is a perceptual bias towards looming compared to receding stimuli, and faster bimodal reaction times have recently been observed for looming cues compared to receding cues (Cappe et al., 2009). The current studies aimed to test whether visual looming cues are associated with greater dynamic capture of auditory motion in depth than receding signals. Participants judged the direction of an auditory motion cue presented with a visual looming cue (expanding disk), a visual receding cue (contracting disk), or a stationary visual cue (static disk). Visual cues were presented either simultaneously with the auditory cue, or after 500 ms. We found greater interference from looming visual cues than from receding visual cues, relative to asynchronous presentation or stationary visual cues. The results could not be explained by the weaker subjective strength of the receding auditory stimulus, as in Experiment 2 the looming and receding auditory cues were matched for perceived strength. These results show that dynamic visual capture of auditory motion in the depth plane is modulated by an adaptive bias for looming compared to receding visual cues.

  4. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    Science.gov (United States)

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.
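
    The race-model comparison mentioned above has a standard computational form (Miller's inequality: at every time point, the audiovisual RT distribution may not exceed the sum of the two unisensory distributions). Below is a minimal sketch, with hypothetical reaction-time arrays rather than the study's data, of locating violations of that bound:

    ```python
    # Minimal sketch of Miller's race-model inequality test: the race model
    # is violated wherever the audiovisual (AV) RT CDF exceeds the sum of
    # the unisensory (A, V) CDFs. Inputs are hypothetical RT samples (ms).
    import numpy as np

    def empirical_cdf(rts, grid):
        """P(RT <= t) for each t in grid, from a sample of reaction times."""
        rts = np.sort(np.asarray(rts))
        return np.searchsorted(rts, grid, side="right") / rts.size

    def race_model_violations(rt_a, rt_v, rt_av, n_points=100):
        """Return the time points where the AV CDF exceeds the race-model bound."""
        grid = np.linspace(min(map(np.min, (rt_a, rt_v, rt_av))),
                           max(map(np.max, (rt_a, rt_v, rt_av))), n_points)
        bound = np.minimum(empirical_cdf(rt_a, grid) + empirical_cdf(rt_v, grid), 1.0)
        return grid[empirical_cdf(rt_av, grid) > bound]
    ```

    Surpassing the bound over some time range, as the abstract reports for both groups, is the usual evidence that multisensory integration (rather than statistical facilitation between independent channels) produced the redundancy gain.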

  5. Concurrent auditory perception difficulties in older adults with right hemisphere cerebrovascular accident.

    Science.gov (United States)

    Talebi, Hossein; Moossavi, Abdollah; Faghihzadeh, Soghrat

    2014-01-01

    Older adults with cerebrovascular accident (CVA) show evidence of auditory and speech perception problems. In the present study, we examined whether these problems are due to impairments of the concurrent auditory segregation procedure, which is the basic level of auditory scene analysis and auditory organization in auditory scenes with competing sounds. Concurrent auditory segregation using the competing sentence test (CST) and dichotic digits test (DDT) was assessed and compared in 30 male older adults (15 normal and 15 cases with right hemisphere CVA) in the same age group (60-75 years old). For the CST, participants were presented with a target message in one ear and a competing message in the other. The task was to listen to the target sentence and repeat it back without attending to the competing sentence. For the DDT, auditory stimuli were monosyllabic digits presented dichotically, and the task was to repeat them. Comparing mean CST and DDT scores between CVA patients with right hemisphere impairment and normal participants showed statistically significant differences (p=0.001 for CST and p<0.0001 for DDT). The present study revealed that the abnormal CST and DDT scores of participants with right hemisphere CVA could be related to concurrent segregation difficulties. These findings suggest that low-level segregation mechanisms and/or high-level attention mechanisms might contribute to the problems.

  6. The effect of precision and power grips on activations in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Patrik Alexander Wikman

    2015-10-01

    Full Text Available The neuroanatomical pathways interconnecting auditory and motor cortices play a key role in current models of human auditory cortex (AC). Evidently, auditory-motor interaction is important in speech and music production, but the significance of these cortical pathways in other auditory processing is not well known. We investigated the general effects of motor responding on AC activations to sounds during auditory and visual tasks. During all task blocks, subjects detected targets in the designated modality, reported the relative number of targets at the end of the block, and ignored the stimuli presented in the opposite modality. In each block, they were also instructed to respond to targets either using a precision grip, power grip, or to give no overt target responses. We found that motor responding strongly modulated AC activations. First, during both visual and auditory tasks, activations in widespread regions of AC decreased when subjects made precision and power grip responses to targets. Second, activations in AC were modulated by grip type during the auditory but not during the visual task. Further, the motor effects were distinct from the strong attention-related modulations in AC. These results are consistent with the idea that operations in AC are shaped by its connections with motor cortical regions.

  7. When and where of auditory spatial processing in cortex: a novel approach using electrotomography.

    Directory of Open Access Journals (Sweden)

    Jörg Lewald

    Full Text Available The modulation of brain activity as a function of auditory location was investigated using electro-encephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by analysis of object-feature information and spectral localization cues or even the integration of spatial and non-spatial sound features.

  8. Phase shifts in binaural stimuli provide directional cues for sound localisation in the field cricket Gryllus bimaculatus.

    Science.gov (United States)

    Seagraves, Kelly M; Hedwig, Berthold

    2014-07-01

    The cricket's auditory system is a highly directional pressure difference receiver whose function is hypothesised to depend on phase relationships between the sound waves propagating through the auditory trachea that connects the left and right hearing organs. We tested this hypothesis by measuring the effect of experimentally constructed phase shifts in acoustic stimuli on the phonotactic behavior of Gryllus bimaculatus, the oscillatory response patterns of the tympanic membrane, and the activity of the auditory afferents. The same artificial calling song was played simultaneously at the left and right sides of the cricket, but one sound pattern was shifted in phase by 90 deg (carrier frequencies between 3.6 and 5.4 kHz). All three levels of auditory processing are sensitive to experimentally induced acoustic phase shifts, and the response characteristics are dependent on the carrier frequency of the sound stimulus. At lower frequencies, crickets steered away from the sound leading in phase, while tympanic membrane vibrations and auditory afferent responses were smaller when the ipsilateral sound was leading. In contrast, opposite responses were observed at higher frequencies in all three levels of auditory processing. Minimal responses occurred near the carrier frequency of the cricket's calling song, suggesting stability at this frequency. Our results indicate that crickets may use directional cues arising from phase shifts in acoustic signals for sound localisation, and that the response properties of pressure difference receivers may be analysed with phase-shifted sound stimuli to further our understanding of how insect auditory systems are adapted for directional processing. © 2014. Published by The Company of Biologists Ltd.
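
    The phase manipulation lends itself to a simple illustration. Below is a minimal sketch (not the authors' stimulus code; the carrier and duration are assumptions within the reported 3.6-5.4 kHz range) of generating a stereo carrier whose right channel leads the left by 90 deg:

    ```python
    # Sketch of a dichotic stimulus with an experimentally imposed phase
    # shift: the same carrier is played to both sides, with one side
    # shifted in phase by 90 degrees, as in the paradigm described above.
    import numpy as np

    def dichotic_phase_shift(carrier_hz=4500.0, phase_deg=90.0,
                             duration_s=0.25, fs=44100):
        """Return an (n_samples, 2) stereo array; right channel leads in phase."""
        t = np.arange(int(duration_s * fs)) / fs
        left = np.sin(2 * np.pi * carrier_hz * t)
        right = np.sin(2 * np.pi * carrier_hz * t + np.deg2rad(phase_deg))
        return np.column_stack([left, right])

    stimulus = dichotic_phase_shift()  # 4.5 kHz lies within the tested range
    ```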

  9. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  10. Reconstructing spectral cues for sound localization from responses to rippled noise stimuli

    Science.gov (United States)

    Vliegen, Joyce; Van Esch, Thamar

    2017-01-01

    Human sound localization in the mid-sagittal plane (elevation) relies on an analysis of the idiosyncratic spectral shape cues provided by the head and pinnae. However, because the actual free-field stimulus spectrum is a priori unknown to the auditory system, the problem of extracting the elevation angle from the sensory spectrum is ill-posed. Here we test different spectral localization models by eliciting head movements toward broad-band noise stimuli with randomly shaped, rippled amplitude spectra emanating from a speaker at a fixed location, while varying the ripple bandwidth between 1.5 and 5.0 cycles/octave. Six listeners participated in the experiments. From the distributions of localization responses toward the individual stimuli, we estimated the listeners' spectral-shape cues underlying their elevation percepts, by applying maximum-likelihood estimation. The reconstructed spectral cues turned out to be invariant to the considerable variation in ripple bandwidth, and for each listener they had a remarkable resemblance to the idiosyncratic head-related transfer functions (HRTFs). These results are not in line with models that rely on the detection of a single peak or notch in the amplitude spectrum, nor with a local analysis of first- and second-order spectral derivatives. Instead, our data support a model in which the auditory system performs a cross-correlation between the sensory input at the eardrum and auditory nerve, and stored representations of HRTF spectral shapes, to extract the perceived elevation angle. PMID:28333967
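
    The cross-correlation model favored by these data can be illustrated schematically. The sketch below is an assumption-laden illustration, not the authors' analysis: `hrtf_bank` and `elevations` are hypothetical inputs standing in for a listener's stored HRTF templates and their associated elevation angles.

    ```python
    # Template-matching sketch: correlate the sensory spectrum against
    # stored HRTF spectral shapes and pick the best-matching elevation.
    import numpy as np

    def estimate_elevation(sensory_spectrum, hrtf_bank, elevations):
        """hrtf_bank: (n_elevations, n_freq_bins) log-magnitude templates."""
        x = sensory_spectrum - sensory_spectrum.mean()
        scores = []
        for template in hrtf_bank:
            y = template - template.mean()
            # Normalized correlation between sensory spectrum and template.
            scores.append(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))
        return elevations[int(np.argmax(scores))]
    ```

    Under this scheme the estimate is driven by overall spectral shape rather than by any single peak or notch, which is why it tolerates the ripple-bandwidth variation the study imposed.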

  11. Reconstructing spectral cues for sound localization from responses to rippled noise stimuli.

    Directory of Open Access Journals (Sweden)

    A John Van Opstal

    Full Text Available Human sound localization in the mid-sagittal plane (elevation) relies on an analysis of the idiosyncratic spectral shape cues provided by the head and pinnae. However, because the actual free-field stimulus spectrum is a priori unknown to the auditory system, the problem of extracting the elevation angle from the sensory spectrum is ill-posed. Here we test different spectral localization models by eliciting head movements toward broad-band noise stimuli with randomly shaped, rippled amplitude spectra emanating from a speaker at a fixed location, while varying the ripple bandwidth between 1.5 and 5.0 cycles/octave. Six listeners participated in the experiments. From the distributions of localization responses toward the individual stimuli, we estimated the listeners' spectral-shape cues underlying their elevation percepts, by applying maximum-likelihood estimation. The reconstructed spectral cues turned out to be invariant to the considerable variation in ripple bandwidth, and for each listener they had a remarkable resemblance to the idiosyncratic head-related transfer functions (HRTFs). These results are not in line with models that rely on the detection of a single peak or notch in the amplitude spectrum, nor with a local analysis of first- and second-order spectral derivatives. Instead, our data support a model in which the auditory system performs a cross-correlation between the sensory input at the eardrum and auditory nerve, and stored representations of HRTF spectral shapes, to extract the perceived elevation angle.

  12. The Perception of Auditory Motion

    Science.gov (United States)

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  13. Auditory fMRI of Sound Intensity and Loudness for Unilateral Stimulation.

    Science.gov (United States)

    Behler, Oliver; Uppenkamp, Stefan

    2016-01-01

    We report a systematic exploration of the interrelation of sound intensity, ear of entry, individual loudness judgments, and brain activity across hemispheres, using auditory functional magnetic resonance imaging (fMRI). The stimuli employed were 4 kHz-bandpass filtered noise stimuli, presented monaurally to each ear at levels from 37 to 97 dB SPL. One diotic condition and a silence condition were included as control conditions. Normal hearing listeners completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound intensity and loudness estimates were analyzed by means of linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of sensory stimulation and perception.
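
    The abstract describes relating BOLD contrast estimates to sound level within anatomically defined regions of interest via linear mixed-effects models. Below is a hedged sketch of that analysis style using statsmodels; the DataFrame columns (bold, level, subject) are hypothetical placeholders, not the study's actual variable names.

    ```python
    # Sketch of a per-ROI linear mixed-effects model: fixed effect of
    # sound level on BOLD, with a random intercept for each subject.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_roi_model(df: pd.DataFrame):
        """df must hold columns 'bold', 'level', and 'subject'."""
        model = smf.mixedlm("bold ~ level", data=df, groups=df["subject"])
        return model.fit()
    ```

    Fitting one such model per region, and swapping the level regressor for individual loudness estimates, is one straightforward way to compare intensity-driven versus perception-driven coding across the auditory pathway, as the study's design suggests.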

  14. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  15. Hierarchical multifunctional nanocomposites

    Science.gov (United States)

    Ghasemi-Nejhad, Mehrdad N.

    2014-03-01

    properties of the fibers can also be improved by the growth of nanotubes on the fibers. The combination of the two will produce super-performing materials not currently available. Since the improvement of the fiber starts with carbon nanotubes grown on micron-size fibers (and a matrix with a nanomaterial) to give the macro-composite, this process is a bottom-up "hierarchical" advanced manufacturing process, and since the resulting nanocomposites will have "multifunctionality" with improved properties in various functional areas such as chemical and fire resistance, damping, stiffness, strength, fracture toughness, EMI shielding, and electrical and thermal conductivity, the resulting nanocomposites are in fact "multifunctional hierarchical nanocomposites." In this paper, the current state of knowledge in processing, performance, and characterization of these materials is addressed.

  16. Lack of multisensory integration in hemianopia: no influence of visual stimuli on aurally guided saccades to the blind hemifield.

    Directory of Open Access Journals (Sweden)

    Antonia F Ten Brink

    Full Text Available In patients with visual hemifield defects, residual visual functions may be present, a phenomenon called blindsight. The superior colliculus (SC) is part of the spared pathway that is considered to be responsible for this phenomenon. Given that the SC processes input from different modalities and is involved in the programming of saccadic eye movements, the aim of the present study was to examine whether multimodal integration can modulate oculomotor competition in the damaged hemifield. We conducted two experiments with eight patients who had visual field defects due to lesions that affected the retinogeniculate pathway but spared the retinotectal direct SC pathway. They had to make saccades to an auditory target that was presented alone or in combination with a visual stimulus. The visual stimulus could either be spatially coincident with the auditory target (possibly enhancing the auditory target signal), or spatially disparate to the auditory target (possibly competing with the auditory target signal). For each patient we compared the saccade endpoint deviation in these two bimodal conditions with the endpoint deviation in the unimodal condition (auditory target alone). In all seven hemianopic patients, saccade accuracy was affected only by visual stimuli in the intact, but not in the blind, visual field. In one patient with a more limited quadrantanopia, a facilitation effect of the spatially coincident visual stimulus was observed. We conclude that our results show that multisensory integration is infrequent in the blind field of patients with hemianopia.

  17. Large-scale synchronized activity during vocal deviance detection in the zebra finch auditory forebrain.

    Science.gov (United States)

    Beckers, Gabriël J L; Gahr, Manfred

    2012-08-01

    Auditory systems bias responses to sounds that are unexpected on the basis of recent stimulus history, a phenomenon that has been widely studied using sequences of unmodulated tones (mismatch negativity; stimulus-specific adaptation). Such a paradigm, however, does not directly reflect problems that neural systems normally solve for adaptive behavior. We recorded multiunit responses in the caudomedial auditory forebrain of anesthetized zebra finches (Taeniopygia guttata) at 32 sites simultaneously, to contact calls that recur probabilistically at a rate that is used in communication. Neurons in secondary, but not primary, auditory areas respond preferentially to calls when they are unexpected (deviant) compared with the same calls when they are expected (standard). This response bias is predominantly due to sites more often not responding to standard events than to deviant events. When two call stimuli alternate between standard and deviant roles, most sites exhibit a response bias to deviant events of both stimuli. This suggests that biases are not based on a use-dependent decrease in response strength but involve a more complex mechanism that is sensitive to auditory deviance per se. Furthermore, between many secondary sites, responses are tightly synchronized, a phenomenon that is driven by internal neuronal interactions rather than by the timing of stimulus acoustic features. We hypothesize that this deviance-sensitive, internally synchronized network of neurons is involved in the involuntary capturing of attention by unexpected and behaviorally potentially relevant events in natural auditory scenes.

  18. Synchronization to auditory and visual rhythms in hearing and deaf individuals

    Science.gov (United States)

    Iversen, John R.; Patel, Aniruddh D.; Nicodemus, Brenda; Emmorey, Karen

    2014-01-01

    A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported. PMID:25460395

  19. Attentional demands influence vocal compensations to pitch errors heard in auditory feedback.

    Science.gov (United States)

    Tumber, Anupreet K; Scheerer, Nichole E; Jones, Jeffery A

    2014-01-01

    Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by ½ semitone in both single- and dual-task conditions. During the single-task condition, participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce low, intermediate, and high attentional loads. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that the effect of attentional load was not on the auditory processing of pitch-altered feedback; instead it interfered with the integration of auditory and motor information, or with motor control itself.
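
    For intuition, the half-semitone shift has a simple closed form under the standard equal-tempered ratio (an assumption here; the study's exact DSP is not described in the abstract): a shift of s semitones scales frequency by 2^(s/12).

    ```python
    # Worked example of the downward half-semitone pitch shift above,
    # assuming the standard equal-tempered semitone ratio.
    def shifted_frequency(f0_hz: float, semitones: float = -0.5) -> float:
        """Frequency after shifting f0 by a signed number of semitones."""
        return f0_hz * 2.0 ** (semitones / 12.0)

    # e.g. a hypothetical 220 Hz voice fundamental heard half a semitone lower:
    print(round(shifted_frequency(220.0), 2))  # ~213.74 Hz
    ```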

  20. An attempt to improve auditory short-term memory in Down's syndrome individuals through reducing distractions.

    Science.gov (United States)

    Marcell, M M; Harvey, C F; Cothran, L P

    1988-01-01

    Down's syndrome (DS) individuals, relative to nonretarded individuals, have greater difficulty remembering brief sequences of verbal information presented auditorily. Previous research suggests at least two possible attentional explanations of this difficulty: they are especially susceptible to both auditory distraction and off-task glancing during laboratory tasks. DS, non-DS mentally retarded, and nonretarded persons listened to, looked at, and attempted to remember sequences of digits. Although the three groups did not differ in their recall of visually presented stimuli, DS subjects showed significantly poorer recall of auditorily presented stimuli than the other two groups (which did not differ). Furthermore, the poor auditory memory of DS subjects did not improve under testing conditions designed to minimize auditory and visual distractions. It was suggested that poor auditory short-term memory for verbal information is tied more closely to Down's syndrome than to low intelligence and does not seem to be caused by a special susceptibility of Down's syndrome individuals to attentional distractors.

  1. Gated auditory speech perception: effects of listening conditions and cognitive capacity.

    Science.gov (United States)

    Moradi, Shahram; Lidestam, Björn; Saremi, Amin; Rönnberg, Jerker

    2014-01-01

    This study aimed to measure the initial portion of the signal required for the correct identification of auditory speech stimuli (or isolation points, IPs) in silence and noise, and to investigate the relationships between auditory and cognitive functions in silence and noise. Twenty-one university students were presented with auditory stimuli in a gating paradigm for the identification of consonants, words, and final words in highly predictable and low predictable sentences. The Hearing in Noise Test (HINT), the reading span test, and the Paced Auditory Serial Addition Test were also administered to measure the speech-in-noise ability, working memory capacity, and attentional capacity of the participants, respectively. The results showed that noise delayed the identification of consonants, words, and final words in highly predictable and low predictable sentences. HINT performance correlated with working memory and attentional capacities. In the noise condition, there were correlations between HINT performance, cognitive task performance, and the IPs of consonants and words. In the silent condition, there were no correlations between auditory and cognitive tasks. In conclusion, a combination of hearing-in-noise ability, working memory capacity, and attention capacity is needed for the early identification of consonants and words in noise.

  2. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs

    Directory of Open Access Journals (Sweden)

    Abhilash ePonnath

    2014-07-01

    Full Text Available Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted < 2 s and, in different cells, excitability either decreased, increased or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  3. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  4. Sound-by-sound thalamic stimulation modulates midbrain auditory excitability and relative binaural sensitivity in frogs.

    Science.gov (United States)

    Ponnath, Abhilash; Farris, Hamilton E

    2014-01-01

    Descending circuitry can modulate auditory processing, biasing sensitivity to particular stimulus parameters and locations. Using awake in vivo single unit recordings, this study tested whether electrical stimulation of the thalamus modulates auditory excitability and relative binaural sensitivity in neurons of the amphibian midbrain. In addition, by using electrical stimuli that were either longer than the acoustic stimuli (i.e., seconds) or presented on a sound-by-sound basis (ms), experiments addressed whether the form of modulation depended on the temporal structure of the electrical stimulus. Following long duration electrical stimulation (3-10 s of 20 Hz square pulses), excitability (spikes/acoustic stimulus) to free-field noise stimuli decreased by 32%, but returned over 600 s. In contrast, sound-by-sound electrical stimulation using a single 2 ms duration electrical pulse 25 ms before each noise stimulus caused faster and varied forms of modulation: modulation lasted < 2 s and, in different cells, excitability either decreased, increased, or shifted in latency. Within cells, the modulatory effect of sound-by-sound electrical stimulation varied between different acoustic stimuli, including for different male calls, suggesting modulation is specific to certain stimulus attributes. For binaural units, modulation depended on the ear of input, as sound-by-sound electrical stimulation preceding dichotic acoustic stimulation caused asymmetric modulatory effects: sensitivity shifted for sounds at only one ear, or by different relative amounts for both ears. This caused a change in the relative difference in binaural sensitivity. Thus, sound-by-sound electrical stimulation revealed fast and ear-specific (i.e., lateralized) auditory modulation that is potentially suited to shifts in auditory attention during sound segregation in the auditory scene.

  5. Analyzing the User Behavior toward Electronic Commerce Stimuli.

    Science.gov (United States)

    Lorenzo-Romero, Carlota; Alarcón-Del-Amo, María-Del-Carmen; Gómez-Borja, Miguel-Ángel

    2016-01-01

    Based on the Stimulus-Organism-Response paradigm, this research analyzes the main differences between the effects of two types of web technologies: verbal web technology (i.e., navigational structure as utilitarian stimulus) versus non-verbal web technology (music and presentation of products as hedonic stimuli). Specific webmosphere stimuli have not yet been examined as separate variables, and their impact on internal and behavioral responses remains unknown. Therefore, the objective of this research is to analyze the impact of these web technologies (which constitute the web atmosphere, or webmosphere, of a website) on human shopping behavior (i.e., users' internal states: affective, cognitive, and satisfaction; and behavioral responses: approach responses and real shopping outcomes) within a retail online store created by computer, taking into account some mediator variables (i.e., involvement, atmospheric responsiveness, and perceived risk). A 2 ("free" versus "hierarchical" navigational structure) × 2 ("on" versus "off" music) × 2 ("moving" versus "static" images) between-subjects computer experimental design is used to test this research empirically. In addition, an integrated methodology was developed allowing the simulation, tracking and recording of virtual user behavior within an online shopping environment. As a main conclusion, this study suggests that the positive responses of online consumers might increase when they are allowed to navigate the online stores freely and their experience is enriched by animated GIFs and background music. The effects of the mediator variables moderately modify the final shopping behavior.
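
    The 2 × 2 × 2 between-subjects design described above maps naturally onto a three-way factorial analysis. The sketch below is a hedged illustration of one conventional way such data could be analyzed (the column names structure, music, images, and response are hypothetical placeholders, not the study's variables):

    ```python
    # Three-way between-subjects ANOVA sketch for a 2x2x2 webmosphere design:
    # main effects and interactions of navigational structure, music, and
    # image animation on a measured response.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    def webmosphere_anova(df: pd.DataFrame):
        """df holds one row per participant with the factor levels and response."""
        fit = smf.ols("response ~ C(structure) * C(music) * C(images)",
                      data=df).fit()
        return sm.stats.anova_lm(fit, typ=2)
    ```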

  6. Analyzing the user behavior towards Electronic Commerce stimuli

    Directory of Open Access Journals (Sweden)

    Carlota Lorenzo-Romero

    2016-11-01

    Full Text Available Based on the Stimulus-Organism-Response paradigm, this research analyzes the main differences between the effects of two types of web technologies: verbal web technology (i.e., navigational structure as utilitarian stimulus) versus nonverbal web technology (music and presentation of products as hedonic stimuli). Specific webmosphere stimuli have not yet been examined as separate variables, and their impact on internal and behavioral responses remains unknown. Therefore, the objective of this research consists in analyzing the impact of these web technologies (which constitute the web atmosphere or webmosphere of a website) on human shopping behaviour (i.e., users' internal states: affective, cognitive, and satisfaction; and behavioral responses: approach responses and real shopping outcomes) within the retail online store created by computer, taking into account some mediator variables (i.e., involvement, atmospheric responsiveness, and perceived risk). A 2 (free versus hierarchical navigational structure) × 2 (on versus off music) × 2 (moving versus static images) between-subjects computer experimental design is used to test this research empirically. In addition, an integrated methodology was developed allowing the simulation, tracking and recording of virtual user behaviour within an online shopping environment. As a main conclusion, this study suggests that the positive responses of online consumers might increase when they are allowed to navigate the online stores freely and their experience is enriched by animated GIFs and background music. The effects of the mediator variables moderately modify the final shopping behaviour.

  7. Are you able not to react to what you hear? Inhibition behavior measured with an auditory Go/NoGo paradigm.

    Science.gov (United States)

    Wegmann, Elisa; Brand, Matthias; Snagowski, Jan; Schiebener, Johannes

    2017-02-01

    In everyday life, people have to attend to, react to, or inhibit reactions to visual and acoustic cues. These abilities are frequently measured with Go/NoGo tasks using visual stimuli. However, these abilities have rarely been examined with auditory cues. The aims of our study (N = 106) were to develop an auditory Go/NoGo paradigm and to describe brain-healthy participants' performance. We tested the convergent validity of the auditory Go/NoGo paradigm by analyzing its correlations with other neuropsychological tasks assessing attentional control and executive functions. We also analyzed the ecological validity of the task by examining correlations with self-reported impulsivity. In a first step, we found that participants were able to differentiate correctly among several sounds and also to appropriately execute or inhibit a given reaction most of the time. Convergent validity was suggested by correlations between the auditory Go/NoGo paradigm and the Color Word Interference Test, Trail Making Test, and Modified Card Sorting Test. We did not find correlations with self-reported impulsivity. Overall, the auditory Go/NoGo paradigm may be used to assess attention and inhibition in the context of auditory stimuli. Future studies may adapt the auditory Go/NoGo paradigm with specific acoustic stimuli (e.g., the sound of opening a bottle) in order to address cognitive biases in particular disorders (e.g., alcohol dependence).

  8. Neural correlates of auditory recognition memory in the primate dorsal temporal pole.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2014-02-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). Current understanding of the underlying mechanisms of TP processing is largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.

  9. Atypical brain responses to auditory spatial cues in adults with autism spectrum disorder.

    Science.gov (United States)

    Lodhia, Veema; Hautus, Michael J; Johnson, Blake W; Brock, Jon

    2017-09-09

    The auditory processing atypicalities experienced by many individuals with autism spectrum disorder might be understood in terms of difficulties parsing the sound energy arriving at the ears into discrete auditory 'objects'. Here, we asked whether autistic adults are able to make use of two important spatial cues to auditory object formation - the relative timing and amplitude of sound energy at the left and right ears. Using electroencephalography, we measured the brain responses of 15 autistic adults and 15 age- and verbal-IQ-matched control participants as they listened to dichotic pitch stimuli - white noise stimuli in which interaural timing or amplitude differences applied to a narrow frequency band of noise typically lead to the perception of a pitch sound that is spatially segregated from the noise. Responses were contrasted with those to stimuli in which timing and amplitude cues were removed. Consistent with our previous studies, autistic adults failed to show a significant object-related negativity (ORN) for timing-based pitch, although their ORN was not significantly smaller than that of the control group. Autistic participants did show an ORN to amplitude cues, indicating that they do not experience a general impairment in auditory object formation. However, their P400 response - thought to indicate the later attention-dependent aspects of auditory object formation - was missing. These findings provide further evidence of atypical auditory object processing in autism with potential implications for understanding the perceptual and communication difficulties associated with the condition. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    Directory of Open Access Journals (Sweden)

    David B. Stone

    2014-11-01

    Full Text Available Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia.

  11. Depersonalization disorder: disconnection of cognitive evaluation from autonomic responses to emotional stimuli.

    Science.gov (United States)

    Michal, Matthias; Koechel, Ansgar; Canterino, Marco; Adler, Julia; Reiner, Iris; Vossel, Gerhard; Beutel, Manfred E; Gamer, Matthias

    2013-01-01

    Patients with depersonalization disorder (DPD) typically complain about emotional detachment. Previous studies found reduced autonomic responsiveness to emotional stimuli for DPD patients as compared to patients with anxiety disorders. We aimed to investigate autonomic responsiveness to emotional auditory stimuli of DPD patients as compared to patient controls. Furthermore, we examined the modulatory effect of mindful breathing on these responses as well as on depersonalization intensity. 22 DPD patients and 15 patient controls balanced for severity of depression and anxiety, age, sex and education, were compared regarding 1) electrodermal and heart rate data during a resting period, and 2) autonomic responses and cognitive appraisal of standardized acoustic affective stimuli in two conditions (normal listening and mindful breathing). DPD patients rated the emotional sounds as significantly more neutral as compared to patient controls and standardized norm ratings. At the same time, however, they responded more strongly to acoustic emotional stimuli and their electrodermal response pattern was more modulated by valence and arousal as compared to patient controls. Mindful breathing reduced severity of depersonalization in DPD patients and increased the arousal modulation of electrodermal responses in the whole sample. Finally, DPD patients showed an increased electrodermal lability in the rest period as compared to patient controls. These findings demonstrated that the cognitive evaluation of emotional sounds in DPD patients is disconnected from their autonomic responses to those emotional stimuli. The increased electrodermal lability in DPD may reflect increased introversion and cognitive control of emotional impulses. The findings have important psychotherapeutic implications.

  12. Depersonalization disorder: disconnection of cognitive evaluation from autonomic responses to emotional stimuli.

    Directory of Open Access Journals (Sweden)

    Matthias Michal

    Full Text Available BACKGROUND: Patients with depersonalization disorder (DPD) typically complain about emotional detachment. Previous studies found reduced autonomic responsiveness to emotional stimuli for DPD patients as compared to patients with anxiety disorders. We aimed to investigate autonomic responsiveness to emotional auditory stimuli of DPD patients as compared to patient controls. Furthermore, we examined the modulatory effect of mindful breathing on these responses as well as on depersonalization intensity. METHODS: 22 DPD patients and 15 patient controls balanced for severity of depression and anxiety, age, sex and education, were compared regarding (1) electrodermal and heart rate data during a resting period, and (2) autonomic responses and cognitive appraisal of standardized acoustic affective stimuli in two conditions (normal listening and mindful breathing). RESULTS: DPD patients rated the emotional sounds as significantly more neutral as compared to patient controls and standardized norm ratings. At the same time, however, they responded more strongly to acoustic emotional stimuli and their electrodermal response pattern was more modulated by valence and arousal as compared to patient controls. Mindful breathing reduced severity of depersonalization in DPD patients and increased the arousal modulation of electrodermal responses in the whole sample. Finally, DPD patients showed an increased electrodermal lability in the rest period as compared to patient controls. CONCLUSIONS: These findings demonstrated that the cognitive evaluation of emotional sounds in DPD patients is disconnected from their autonomic responses to those emotional stimuli. The increased electrodermal lability in DPD may reflect increased introversion and cognitive control of emotional impulses. The findings have important psychotherapeutic implications.

  13. Multisensory stimuli elicit altered oscillatory brain responses at gamma frequencies in patients with schizophrenia

    Science.gov (United States)

    Stone, David B.; Coffman, Brian A.; Bustillo, Juan R.; Aine, Cheryl J.; Stephen, Julia M.

    2014-01-01

    Deficits in auditory and visual unisensory responses are well documented in patients with schizophrenia; however, potential abnormalities elicited from multisensory audio-visual stimuli are less understood. Further, schizophrenia patients have shown abnormal patterns in task-related and task-independent oscillatory brain activity, particularly in the gamma frequency band. We examined oscillatory responses to basic unisensory and multisensory stimuli in schizophrenia patients (N = 46) and healthy controls (N = 57) using magnetoencephalography (MEG). Time-frequency decomposition was performed to determine regions of significant changes in gamma band power by group in response to unisensory and multisensory stimuli relative to baseline levels. Results showed significant behavioral differences between groups in response to unisensory and multisensory stimuli. In addition, time-frequency analysis revealed significant decreases and increases in gamma-band power in schizophrenia patients relative to healthy controls, which emerged both early and late over both sensory and frontal regions in response to unisensory and multisensory stimuli. Unisensory gamma-band power predicted multisensory gamma-band power differently by group. Furthermore, gamma-band power in these regions predicted performance in select measures of the Measurement and Treatment Research to Improve Cognition in Schizophrenia (MATRICS) test battery differently by group. These results reveal a unique pattern of task-related gamma-band power in schizophrenia patients relative to controls that may indicate reduced inhibition in combination with impaired oscillatory mechanisms in patients with schizophrenia. PMID:25414652

  14. Hierarchical Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Di Lu

    2018-01-01

    Full Text Available The Internet of Things (IoT) generates lots of high-dimensional sensor intelligent data. The processing of high-dimensional data (e.g., data visualization and data classification) is very difficult, so it requires excellent subspace learning algorithms to learn a latent subspace to preserve the intrinsic structure of the high-dimensional data, and abandon the least useful information in the subsequent processing. In this context, many subspace learning algorithms have been presented. However, in the process of transforming the high-dimensional data into the low-dimensional space, the huge difference between the sum of inter-class distance and the sum of intra-class distance for distinct data may cause a bias problem. That means that the impact of intra-class distance is overwhelmed. To address this problem, we propose a novel algorithm called Hierarchical Discriminant Analysis (HDA). It minimizes the sum of intra-class distance first, and then maximizes the sum of inter-class distance. This proposed method balances the bias from the inter-class and that from the intra-class to achieve better performance. Extensive experiments are conducted on several benchmark face datasets. The results reveal that HDA obtains better performance than other dimensionality reduction algorithms.
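
    The abstract above specifies a two-stage procedure: first minimize within-class scatter, then maximize between-class scatter. The following Python sketch reconstructs that idea from the abstract alone using eigendecompositions of the two scatter matrices; it is not the authors' implementation, and the function name and subspace dimensions d1 and d2 are illustrative assumptions.

      import numpy as np

      def hda_sketch(X, y, d1=20, d2=2):
          """X: (n_samples, n_features) data; y: class labels.
          Stage 1 keeps the d1 directions with the smallest within-class
          scatter; stage 2 keeps the d2 directions with the largest
          between-class scatter inside that subspace."""
          classes = np.unique(y)
          # Within-class scatter matrix (sum of per-class scatter)
          Sw = sum(np.cov(X[y == c].T) * (np.sum(y == c) - 1) for c in classes)
          w_vals, w_vecs = np.linalg.eigh(Sw)      # eigenvalues in ascending order
          Z = X @ w_vecs[:, :d1]                   # smallest intra-class spread
          # Between-class scatter in the reduced space
          mu = Z.mean(axis=0)
          Sb = sum(np.sum(y == c) * np.outer(Z[y == c].mean(0) - mu,
                                             Z[y == c].mean(0) - mu)
                   for c in classes)
          b_vals, b_vecs = np.linalg.eigh(Sb)
          return Z @ b_vecs[:, -d2:]               # largest inter-class spread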

  15. Integration of auditory and kinesthetic information in motion: alterations in Parkinson's disease.

    Science.gov (United States)

    Sabaté, Magdalena; Llanos, Catalina; Rodríguez, Manuel

    2008-07-01

    The main aim in this work was to study the interaction between auditory and kinesthetic stimuli and its influence on motion control. The study was performed on healthy subjects and patients with Parkinson's disease (PD). Thirty-five right-handed volunteers (young participants, age-matched healthy participants, and PD patients) were studied with three different motor tasks (slow cyclic movements, fast cyclic movements, and slow continuous movements) and under the action of kinesthetic stimuli and sounds at different beat rates. The action of kinesthesia was evaluated by comparing real movements with virtual movements (movements imagined but not executed). The fast cyclic task was accelerated by kinesthetic but not by auditory stimuli. The slow cyclic task changed with the beat rate of sounds but not with kinesthetic stimuli. The slow continuous task showed an integrated response to both sensorial modalities. These data show that the influence of multisensory integration on motion changes with the motor task and that some motor patterns are modulated by the simultaneous action of auditory and kinesthetic information, a cross-modal integration that was different in PD patients. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  16. Mood modulates auditory laterality of hemodynamic mismatch responses during dichotic listening.

    Directory of Open Access Journals (Sweden)

    Lisa Schock

    Full Text Available Hemodynamic mismatch responses can be elicited by deviant stimuli in a sequence of standard stimuli even during cognitively demanding tasks. Emotional context is known to modulate lateralized processing. Right-hemispheric negative emotion processing may bias attention to the right and enhance processing of right-ear stimuli. The present study examined the influence of induced mood on lateralized pre-attentive auditory processing of dichotic stimuli using functional magnetic resonance imaging (fMRI). Faces expressing emotions (sad/happy/neutral) were presented in a blocked design while a dichotic oddball sequence with consonant-vowel (CV) syllables in an event-related design was simultaneously administered. Twenty healthy participants were instructed to feel the emotion perceived on the images and to ignore the syllables. Deviant sounds reliably activated bilateral auditory cortices and confirmed attention effects by modulation of visual activity. Sad mood induction activated visual, limbic and right prefrontal areas. A lateralization effect of emotion-attention interaction was reflected in a stronger response to right-ear deviants in the right auditory cortex during sad mood. This imbalance of resources may be a neurophysiological correlate of laterality in sad mood and depression. Conceivably, the compensatory right-hemispheric enhancement of resources elicits increased ipsilateral processing.

  17. Classification across the senses: Auditory-visual cognitive performance in a California sea lion (Zalophus californianus)

    Science.gov (United States)

    Lindemann, Kristy L.; Reichmuth-Kastak, Colleen; Schusterman, Ronald J.

    2005-09-01

    The model of stimulus equivalence describes how perceptually dissimilar stimuli can become interrelated to form useful categories both within and between the sensory modalities. A recent experiment expanded upon prior work with a California sea lion by examining stimulus classification across the auditory and visual modalities. Acoustic stimuli were associated with an exemplar from one of two pre-existing visual classes in a matching-to-sample paradigm. After direct training of these associations, the sea lion showed spontaneous transfer of the new auditory stimuli to the remaining members of the visual classes. The sea lion's performance on this cross-modal equivalence task was similar to that shown by human subjects in studies of emergent word learning and reading comprehension. Current research with the same animal further examines how stimulus classes can be expanded across modalities. Fast-mapping techniques are used to rapidly establish new auditory-visual relationships between acoustic cues and multiple arbitrary visual stimuli. Collectively, this research illustrates complex cross-modal performances in a highly experienced subject and provides insight into how animals organize information from multiple sensory modalities into meaningful representations.

  18. Comparable mechanisms of working memory interference by auditory and visual motion in youth and aging.

    Science.gov (United States)

    Mishra, Jyoti; Zanto, Theodore; Nilakantan, Aneesha; Gazzaley, Adam

    2013-08-01

    Intrasensory interference by object stimuli (such as faces and scenes) during visual working memory (WM) maintenance has been shown to negatively impact WM performance, with greater detrimental impacts of interference observed in aging. Here we assessed age-related impacts of intrasensory WM interference from lower-level stimulus features such as visual and auditory motion stimuli. We consistently found that interference in the form of ignored distractions and secondary task interruptions, presented during a WM maintenance period, degraded memory accuracy in both the visual and auditory domains. However, in contrast to prior studies assessing WM for visual object stimuli, feature-based interference effects were not observed to be significantly greater in older adults. Analyses of neural oscillations in the alpha frequency band further revealed preserved mechanisms of interference processing in terms of post-stimulus alpha suppression, which was observed maximally for secondary task interruptions in visual and auditory modalities in both younger and older adults. These results suggest that age-related sensitivity of WM to interference may be limited to complex object stimuli, at least at low WM loads. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Do infants find snakes aversive? Infants' physiological responses to "fear-relevant" stimuli.

    Science.gov (United States)

    Thrasher, Cat; LoBue, Vanessa

    2016-02-01

    In the current research, we sought to measure infants' physiological responses to snakes (one of the world's most widely feared stimuli) to examine whether they find snakes aversive or merely attention grabbing. Using a similar method to DeLoache and LoBue (Developmental Science, 2009, Vol. 12, pp. 201-207), 6- to 9-month-olds watched a series of multimodal (both auditory and visual) stimuli: a video of a snake (fear-relevant) or an elephant (non-fear-relevant) paired with either a fearful or happy auditory track. We measured physiological responses to the pairs of stimuli, including startle magnitude, latency to startle, and heart rate. Results suggest that snakes capture infants' attention; infants showed the fastest startle responses and lowest average heart rate to the snakes, especially when paired with a fearful voice. Unexpectedly, they also showed significantly reduced startle magnitude during this same snake video plus fearful voice combination. The results are discussed with respect to theoretical perspectives on fear acquisition. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Full Text Available Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
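
    To make the divergence measure concrete: if the parameters of the target and masker tone sequences are assumed to be Gaussian-distributed (an assumption of this sketch, not a claim of the article), DKL has a simple closed form, computed below in Python.

      import numpy as np

      def dkl_gaussian(mu_a, sd_a, mu_b, sd_b):
          """Kullback-Leibler divergence D_KL(N(mu_a, sd_a^2) || N(mu_b, sd_b^2)), in nats."""
          return (np.log(sd_b / sd_a)
                  + (sd_a**2 + (mu_a - mu_b)**2) / (2 * sd_b**2)
                  - 0.5)

      # Illustrative values: tones drawn around 1000 Hz vs. around 1100 Hz
      print(dkl_gaussian(1000.0, 50.0, 1100.0, 50.0))  # larger value = greater statistical separation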

  1. A study of auditory preferences in nonhandicapped infants and infants with Down's syndrome.

    Science.gov (United States)

    Glenn, S M; Cunningham, C C; Joyce, P F

    1981-01-01

    11 infants with Down's syndrome (MA 9.2 months, CA 12.7 months) and 10 of 11 nonhandicapped infants (MA 9.6 months, CA 9.3 months) demonstrated that they could operate an automated device which enabled them to choose to listen to 1 of a pair of auditory signals. All subjects showed preferential responding. Both groups of infants showed a significant preference for nursery rhymes sung by a female voice rather than played on musical instruments. The infants with Down's syndrome had much longer response durations for the more complex auditory stimuli. The apparatus provides a useful technique for studying language development in both normal and abnormal populations.

  2. Multi-sensory integration in brainstem and auditory cortex.

    Science.gov (United States)

    Basura, Gregory J; Koehler, Seth D; Shore, Susan E

    2012-11-16

    Tinnitus is the perception of sound in the absence of a physical sound stimulus. It is thought to arise from aberrant neural activity within central auditory pathways that may be influenced by multiple brain centers, including the somatosensory system. Auditory-somatosensory (bimodal) integration occurs in the dorsal cochlear nucleus (DCN), where electrical activation of somatosensory regions alters pyramidal cell spike timing and firing rates in response to sound stimuli. Moreover, in conditions of tinnitus, bimodal integration in DCN is enhanced, producing greater spontaneous and sound-driven neural activity, which are neural correlates of tinnitus. In primary auditory cortex (A1), a similar auditory-somatosensory integration has been described in the normal system (Lakatos et al., 2007), where sub-threshold multisensory modulation may be a direct reflection of subcortical multisensory responses (Tyll et al., 2011). The present work utilized simultaneous recordings from both DCN and A1 to directly compare bimodal integration across these separate brain stations of the intact auditory pathway. Four-shank, 32-channel electrodes were placed in DCN and A1 to simultaneously record tone-evoked unit activity in the presence and absence of spinal trigeminal nucleus (Sp5) electrical activation. Bimodal stimulation led to long-lasting facilitation or suppression of single and multi-unit responses to subsequent sound in both DCN and A1. Immediate (bimodal response) and long-lasting (bimodal plasticity) effects of Sp5-tone stimulation were facilitation or suppression of tone-evoked firing rates in DCN and A1 at all Sp5-tone pairing intervals (10, 20, and 40 ms), and greater suppression at 20 ms pairing-intervals for single unit responses. Understanding the complex relationships between DCN and A1 bimodal processing in the normal animal provides the basis for studying its disruption in hearing loss and tinnitus models. This article is part of a Special Issue entitled: Tinnitus Neuroscience.

  3. Children's auditory working memory performance in degraded listening conditions.

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R

    2014-08-01

    The objectives of this study were to determine (a) whether school-age children with typical hearing demonstrate poorer auditory working memory performance in multitalker babble at degraded signal-to-noise ratios than in quiet; and (b) whether the amount of cognitive demand of the task contributed to differences in performance in noise. It was hypothesized that stressing the working memory system with the presence of noise would impede working memory processes in real time and result in poorer working memory performance in degraded conditions. Twenty children with typical hearing between 8 and 10 years old were tested using 4 auditory working memory tasks (Forward Digit Recall, Backward Digit Recall, Listening Recall Primary, and Listening Recall Secondary). Stimuli were from the standardized Working Memory Test Battery for Children. Each task was administered in quiet and in 4-talker babble noise at 0 dB and -5 dB signal-to-noise ratios. Children's auditory working memory performance was systematically decreased in the presence of multitalker babble noise compared with quiet. Differences between low-complexity and high-complexity tasks were observed, with children performing more poorly on tasks with greater storage and processing demands. There was no interaction between noise and complexity of task. All tasks were negatively impacted similarly by the addition of noise. Auditory working memory performance was negatively impacted by the presence of multitalker babble noise. Regardless of complexity of task, noise had a similar effect on performance. These findings suggest that the addition of noise inhibits auditory working memory processes in real time for school-age children.

  4. Delays in auditory processing identified in preschool children with FASD.

    Science.gov (United States)

    Stephen, Julia M; Kodituwakku, Piyadasa W; Kodituwakku, Elizabeth L; Romero, Lucinda; Peters, Amanda M; Sharadamma, Nirupama M; Caprihan, Arvind; Coffman, Brian A

    2012-10-01

    Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool-aged children. As sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control (HC) children aged 3 to 6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1,000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multidipole spatio-temporal modeling technique to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Auditory delay revealed by MEG in children with FASDs may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. Copyright © 2012 by the Research Society on Alcoholism.

  5. Auditory hallucinations treated by radio headphones.

    Science.gov (United States)

    Feder, R

    1982-09-01

    A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.

  6. Hierarchical clustering for graph visualization

    CERN Document Server

    Clémençon, Stéphan; Rossi, Fabrice; Tran, Viet Chi

    2012-01-01

    This paper describes a graph visualization methodology based on hierarchical maximal modularity clustering, with interactive and significant coarsening and refining possibilities. An application of this method to HIV epidemic analysis in Cuba is outlined.
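
    As a rough illustration of modularity-based clustering for graph visualization, the sketch below uses NetworkX's greedy modularity heuristic on a stand-in network and prints a coarsened, one-line-per-community summary; the paper's own method (hierarchical maximal modularity with interactive coarsening and refining) is more elaborate.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      G = nx.karate_club_graph()              # stand-in for a contact network
      communities = greedy_modularity_communities(G)

      # Coarsened view for visualization: one "super-node" per community
      for i, members in enumerate(communities):
          print(f"community {i}: {len(members)} nodes")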

  7. Direct hierarchical assembly of nanoparticles

    Science.gov (United States)

    Xu, Ting; Zhao, Yue; Thorkelsson, Kari

    2014-07-22

    The present invention provides hierarchical assemblies of a block copolymer, a bifunctional linking compound and a nanoparticle. The block copolymers form one micro-domain and the nanoparticles another micro-domain.

  8. Hierarchical materials: Background and perspectives

    DEFF Research Database (Denmark)

    2016-01-01

    Hierarchical design draws inspiration from analysis of biological materials and has opened new possibilities for enhancing performance and enabling new functionalities and extraordinary properties. With the development of nanotechnology, the necessary technological requirements for the manufactur...

  9. The representation of level and loudness in the central auditory system for unilateral stimulation.

    Science.gov (United States)

    Behler, Oliver; Uppenkamp, Stefan

    2016-10-01

    Loudness is the perceptual correlate of the physical intensity of a sound. However, loudness judgments depend on a variety of other variables and can vary considerably between individual listeners. While functional magnetic resonance imaging (fMRI) has been extensively used to characterize the neural representation of physical sound intensity in the human auditory system, only few studies have also investigated brain activity in relation to individual loudness. The physiological correlate of loudness perception is not yet fully understood. The present study systematically explored the interrelation of sound pressure level, ear of entry, individual loudness judgments, and fMRI activation along different stages of the central auditory system and across hemispheres for a group of normal hearing listeners. 4-kHz-bandpass filtered noise stimuli were presented monaurally to each ear at levels from 37 to 97dB SPL. One diotic condition and a silence condition were included as control conditions. The participants completed a categorical loudness scaling procedure with similar stimuli before auditory fMRI was performed. The relationship between brain activity, as inferred from blood oxygenation level dependent (BOLD) contrasts, and both sound level and loudness estimates were analyzed by means of functional activation maps and linear mixed effects models for various anatomically defined regions of interest in the ascending auditory pathway and in the cortex. Our findings are overall in line with the notion that fMRI activation in several regions within auditory cortex as well as in certain stages of the ascending auditory pathway might be more a direct linear reflection of perceived loudness rather than of sound pressure level. The results indicate distinct functional differences between midbrain and cortical areas as well as between specific regions within auditory cortex, suggesting a systematic hierarchy in terms of lateralization and the representation of level and
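
    The abstract mentions linear mixed effects models relating BOLD activity to loudness estimates within anatomically defined regions of interest. A hedged sketch of that style of analysis is given below, using synthetic data and hypothetical column names (subject, loudness, bold); it is not the authors' analysis pipeline.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_sub, n_lvl = 12, 7
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_sub), n_lvl),
          "loudness": np.tile(np.linspace(0, 50, n_lvl), n_sub),
      })
      # Synthetic BOLD: linear in loudness plus per-subject random intercepts
      df["bold"] = (0.02 * df["loudness"]
                    + rng.normal(0, 0.3, n_sub)[df["subject"]]
                    + rng.normal(0, 0.2, len(df)))
      model = smf.mixedlm("bold ~ loudness", df, groups=df["subject"]).fit()
      print(model.summary())                  # fixed effect of loudness on BOLD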

  10. Auditory-Verbal Comprehension Development of 2-5 Year Old Normal Persian Speaking Children in Tehran, Iran

    Directory of Open Access Journals (Sweden)

    Fariba Yadegari

    2011-06-01

    Full Text Available Background and Aim: Understanding and defining developmental norms of auditory comprehension is a necessity for detecting auditory-verbal comprehension impairments in children. We hereby investigated lexical auditory development of Persian (Farsi) speaking children. Methods: In this cross-sectional study, auditory comprehension of adults' child-directed utterances by four 2-5 year old normal children at available nurseries was observed by researchers primarily to gain a great number of comprehendible words for children of the same age. The words were classified into nouns, verbs and adjectives. Auditory-verbal comprehension task items were also considered in 2 sections of subordinate and superordinate auditory comprehension. Colored pictures were provided for each item. Thirty 2-5 year old normal children were randomly selected from nurseries all over Tehran. Children were tested by this task and subsequently, the mean of their correct responses was analyzed. Results: The findings revealed a high positive correlation between auditory-verbal comprehension and age (r=0.804, p=0.001). Comparing children in 3 age groups of 2-3, 3-4 and 4-5 years old showed that subordinate and superordinate auditory comprehension of the former group is significantly lower (p<0.05), while the difference between subordinate and superordinate auditory comprehension was significant in all age groups (p<0.05). Conclusion: Auditory-verbal comprehension develops much faster at younger than at older ages, and there is no prominent difference between word linguistic classes including nouns, verbs and adjectives. Slower development of superordinate auditory comprehension implies semantic hierarchical evolution of words.

  11. Phase shift of sinusoidally alternating colored stimuli

    NARCIS (Netherlands)

    Walraven, P.L.; Leebeek, H.J.

    1964-01-01

    In order to avoid luminance flicker at equal luminance of two alternating colored stimuli de Lange found that a phase shift of the stimuli with respect to each other has to be introduced. This compensation for the phase shift occurring in the retina-cortex system has been measured for a large number

  12. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  13. The Power of Auditory-Motor Synchronization in Sports: Enhancing Running Performance by Coupling Cadence with the Right Beats

    Science.gov (United States)

    Bood, Robert Jan; Nijssen, Marijn; van der Kamp, John; Roerdink, Melvyn

    2013-01-01

    Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in 1) a control condition without acoustic stimuli, 2) a metronome condition with a sequence of beeps matching participants’ cadence (synchronization), and 3) a music condition with synchronous motivational music matched to participants’ cadence (synchronization+motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli –which was most salient during the metronome condition– helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner’s cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by

  14. Neural responses in songbird forebrain reflect learning rates, acquired salience, and stimulus novelty after auditory discrimination training.

    Science.gov (United States)

    Bell, Brittany A; Phan, Mimi L; Vicario, David S

    2015-03-01

    How do social interactions form and modulate the neural representations of specific complex signals? This question can be addressed in the songbird auditory system. Like humans, songbirds learn to vocalize by imitating tutors heard during development. These learned vocalizations are important in reproductive and social interactions and in individual recognition. As a model for the social reinforcement of particular songs, male zebra finches were trained to peck for a food reward in response to one song stimulus (GO) and to withhold responding for another (NoGO). After performance reached criterion, single and multiunit neural responses to both trained and novel stimuli were obtained from multiple electrodes inserted bilaterally into two songbird auditory processing areas [caudomedial mesopallium (CMM) and caudomedial nidopallium (NCM)] of awake, restrained birds. Neurons in these areas undergo stimulus-specific adaptation to repeated song stimuli, and responses to familiar stimuli adapt more slowly than to novel stimuli. The results show that auditory responses differed in NCM and CMM for trained (GO and NoGO) stimuli vs. novel song stimuli. When subjects were grouped by the number of training days required to reach criterion, fast learners showed larger neural responses and faster stimulus-specific adaptation to all stimuli than slow learners in both areas. Furthermore, responses in NCM of fast learners were more strongly left-lateralized than in slow learners. Thus auditory responses in these sensory areas not only encode stimulus familiarity, but also reflect behavioral reinforcement in our paradigm, and can potentially be modulated by social interactions. Copyright © 2015 the American Physiological Society.

  15. The power of auditory-motor synchronization in sports: enhancing running performance by coupling cadence with the right beats.

    Science.gov (United States)

    Bood, Robert Jan; Nijssen, Marijn; van der Kamp, John; Roerdink, Melvyn

    2013-01-01

    Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in 1) a control condition without acoustic stimuli, 2) a metronome condition with a sequence of beeps matching participants' cadence (synchronization), and 3) a music condition with synchronous motivational music matched to participants' cadence (synchronization+motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli -which was most salient during the metronome condition- helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner's cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by auditory

  16. The power of auditory-motor synchronization in sports: enhancing running performance by coupling cadence with the right beats.

    Directory of Open Access Journals (Sweden)

    Robert Jan Bood

    Full Text Available Acoustic stimuli, like music and metronomes, are often used in sports. Adjusting movement tempo to acoustic stimuli (i.e., auditory-motor synchronization) may be beneficial for sports performance. However, music also possesses motivational qualities that may further enhance performance. Our objective was to examine the relative effects of auditory-motor synchronization and the motivational impact of acoustic stimuli on running performance. To this end, 19 participants ran to exhaustion on a treadmill in 1) a control condition without acoustic stimuli, 2) a metronome condition with a sequence of beeps matching participants' cadence (synchronization), and 3) a music condition with synchronous motivational music matched to participants' cadence (synchronization+motivation). Conditions were counterbalanced and measurements were taken on separate days. As expected, time to exhaustion was significantly longer with acoustic stimuli than without. Unexpectedly, however, time to exhaustion did not differ between metronome and motivational music conditions, despite differences in motivational quality. Motivational music slightly reduced perceived exertion of sub-maximal running intensity and heart rates of (near-)maximal running intensity. The beat of the stimuli - which was most salient during the metronome condition - helped runners to maintain a consistent pace by coupling cadence to the prescribed tempo. Thus, acoustic stimuli may have enhanced running performance because runners worked harder as a result of motivational aspects (most pronounced with motivational music) and more efficiently as a result of auditory-motor synchronization (most notable with metronome beeps). These findings imply that running to motivational music with a very prominent and consistent beat matched to the runner's cadence will likely yield optimal effects because it helps to elevate physiological effort at a high perceived exertion, whereas the consistent and correct cadence induced by

  17. Air and Bone Conduction Frequency-specific Auditory Brainstem Response in Children with Agenesis of the External Auditory Canal.

    Science.gov (United States)

    Sleifer, Pricila; Didoné, Dayane Domeneghini; Keppeler, Ísis Bicca; Bueno, Claudine Devicari; Riesgo, Rudimar Dos Santos

    2017-10-01

    Introduction  The tone-evoked auditory brainstem responses (tone-ABR) enable the differential diagnosis in the evaluation of children until 12 months of age, including those with external and/or middle ear malformations. The use of auditory stimuli with frequency specificity by air and bone conduction allows characterization of the hearing profile. Objective  The objective of our study was to compare the results obtained in tone-ABR by air and bone conduction in children until 12 months, with agenesis of the external auditory canal. Method  The study was cross-sectional, observational, individual, and contemporary. We conducted the research with tone-ABR by air and bone conduction in the frequencies of 500 Hz and 2000 Hz in 32 children, 23 boys, from one to 12 months old, with agenesis of the external auditory canal. Results  The tone-ABR thresholds were significantly elevated for air conduction in the frequencies of 500 Hz and 2000 Hz, while the thresholds of bone conduction had normal values in both ears. We found no statistically significant difference between genders and ears for most of the comparisons. Conclusion  Conductive hearing loss did not alter the thresholds obtained by bone conduction; however, it elevated all thresholds obtained by air conduction. The tone-ABR by bone conduction is an important tool for assessing cochlear integrity in children with agenesis of the external auditory canal under 12 months.

  18. Analysis of spatiotemporal pattern correction using a computational model of the auditory periphery.

    Science.gov (United States)

    Zeyl, Timothy J; Bruce, Ian C

    2014-01-01

    The purpose of this study was to determine the cause of poor experimental performance of a spatiotemporal pattern correction (SPC) scheme that has been proposed as a hearing aid algorithm and to determine contexts in which it may provide benefit. The SPC scheme is intended to compensate for altered phase response and group delay differences in the auditory nerve spiking patterns in impaired ears. Based on theoretical models of loudness and the hypothesized importance of temporal fine structure for intelligibility, the compensations of the SPC scheme are expected to provide benefit; however, preliminary experiments revealed that listeners preferred unprocessed or minimally processed speech as opposed to complete SPC processed speech. An improved version of the SPC scheme was evaluated with a computational auditory model in response to a synthesized vowel at multiple SPLs. The impaired model auditory nerve response to SPC-aided stimuli was compared to the unaided stimuli for spectrotemporal response similarity to the healthy auditory model. This comparison included analysis of synchronized rate across auditory nerve characteristic frequencies and a measure of relative phase response of auditory nerve fibers to complex stimuli derived from cross-correlations. Analysis indicates that SPC can improve a metric of relative phase response at low SPLs, but may do so at the cost of decreased spectrotemporal response similarity to the healthy auditory model and degraded synchrony to vowel formants. In-depth analysis identifies several technical and conceptual problems associated with SPC that need to be addressed. These include the following: (1) a nonflat frequency response through the analysis-synthesis filterbank that results from time-varying changes in the relative temporal alignment of filterbank channels, (2) group delay corrections that are based on incorrect frequencies because of spread of synchrony in auditory nerve responses, and (3) frequency modulations in the

  19. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    Science.gov (United States)

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  20. McGurk stimuli for the investigation of multisensory integration in cochlear implant users: The Oldenburg Audio Visual Speech Stimuli (OLAVS).

    Science.gov (United States)

    Stropahl, Maren; Schellhardt, Sebastian; Debener, Stefan

    2017-06-01

    The concurrent presentation of different auditory and visual syllables may result in the perception of a third syllable, reflecting an illusory fusion of visual and auditory information. This well-known McGurk effect is frequently used for the study of audio-visual integration. Recently, it was shown that the McGurk effect is strongly stimulus-dependent, which complicates comparisons across perceivers and inferences across studies. To overcome this limitation, we developed the freely available Oldenburg audio-visual speech stimuli (OLAVS), consisting of 8 different talkers and 12 different syllable combinations. The quality of the OLAVS set was evaluated with 24 normal-hearing subjects. All 96 stimuli were characterized based on their stimulus disparity, which was obtained from a probabilistic model (cf. Magnotti & Beauchamp, 2015). Moreover, the McGurk effect was studied in eight adult cochlear implant (CI) users. By applying the individual, stimulus-independent parameters of the probabilistic model, the predicted effect of stronger audio-visual integration in CI users could be confirmed, demonstrating the validity of the new stimulus material.

  1. Predictive uncertainty in auditory sequence processing

    Directory of Open Access Journals (Sweden)

    Niels Chr. Hansen

    2014-09-01

    Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners’ perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
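
    To make the entropy measure concrete, the sketch below computes the Shannon entropy of a next-note probability distribution; the distributions here are invented for illustration, whereas the study derived them from an unsupervised, variable-order Markov model.

      import math

      def shannon_entropy(p):
          """Entropy in bits of a discrete distribution p (probabilities summing to 1)."""
          return -sum(pi * math.log2(pi) for pi in p if pi > 0)

      low_entropy_context = [0.85, 0.05, 0.05, 0.05]    # one continuation dominates
      high_entropy_context = [0.25, 0.25, 0.25, 0.25]   # all continuations equally likely
      print(shannon_entropy(low_entropy_context))       # ~0.85 bits: low predictive uncertainty
      print(shannon_entropy(high_entropy_context))      # 2.00 bits: high predictive uncertainty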

  2. Predictive uncertainty in auditory sequence processing.

    Science.gov (United States)

    Hansen, Niels Chr; Pearce, Marcus T

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.

  3. Predictive uncertainty in auditory sequence processing

    Science.gov (United States)

    Hansen, Niels Chr.; Pearce, Marcus T.

    2014-01-01

    Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty—a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018

  4. The Relationship between Brainstem Temporal Processing and Performance on Tests of Central Auditory Function in Children with Reading Disorders

    Science.gov (United States)

    Billiet, Cassandra R.; Bellis, Teri James

    2011-01-01

    Purpose: Studies using speech stimuli to elicit electrophysiologic responses have found approximately 30% of children with language-based learning problems demonstrate abnormal brainstem timing. Research is needed regarding how these responses relate to performance on behavioral tests of central auditory function. The purpose of the study was to…

  5. EEG derivations providing auditory steady-state responses with high signal-to-noise ratios in infants.

    NARCIS (Netherlands)

    Reijden, C.S. van der; Mens, L.H.M.; Snik, A.F.M.

    2005-01-01

    OBJECTIVE: To identify EEG derivations that yield high signal-to-noise ratios (SNRs) of the auditory steady-state response (ASSR) in infants aged 0 to 5 months. DESIGN: The ASSR was recorded simultaneously from 10 EEG derivations in a monopolar montage in 20 sleeping infants. Stimuli were tones of

  6. Signal-to-noise ratios of the auditory steady-state response from fifty-five EEG derivations in adults.

    NARCIS (Netherlands)

    Reijden, C.S. van der; Mens, L.H.M.; Snik, A.F.M.

    2004-01-01

    The Auditory Steady-State Response (ASSR) was recorded in 20 awake adults with normal hearing on ten EEG channels simultaneously to find derivations with the best signal-to-noise ratios (SNRs). Stimuli were 20% frequency modulated tones of 0.5 and 2 kHz at 20 dB SL, 100% amplitude modulated at 90 or

  7. Dopamine and noradrenaline efflux in the rat prefrontal cortex after classical aversive conditioning to an auditory cue

    NARCIS (Netherlands)

    Feenstra, M. G.; Vogel, M.; Botterblom, M. H.; Joosten, R. N.; de Bruin, J. P.

    2001-01-01

    We used bilateral microdialysis in the medial prefrontal cortex (PFC) of awake, freely moving rats to study aversive conditioning to an auditory cue in the controlled environment of the Skinner box. The presentation of the explicit conditioned stimuli (CS), previously associated with foot shocks,

  8. Auditory Processing Training in Learning Disability

    OpenAIRE

    Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr

    2006-01-01

    The aim of this case report was to promote a reflection about the importance of speech therapy for stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis considered the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, compared with the new auditory processing test done on May 13, 2003, after one year of therapy directed to acoustic stimulation of auditory abilities disorders, in acco...

  9. Short-term plasticity in the auditory system: differential neural responses to perception and imagery of speech and music.

    Science.gov (United States)

    Meyer, Martin; Elmer, Stefan; Baumann, Simon; Jancke, Lutz

    2007-01-01

    In this EEG study we sought to examine the neuronal underpinnings of short-term plasticity as a top-down guided auditory learning process. We hypothesized that (i) auditory imagery should elicit proper auditory evoked effects (N1/P2 complex) and a late positive component (LPC). Generally, based on recent human brain mapping studies we expected (ii) to observe the involvement of different temporal and parietal lobe areas in imagery and in perception of acoustic stimuli. Furthermore we predicted (iii) that temporal regions show an asymmetric trend due to the different specialization of the temporal lobes in processing speech and non-speech sounds. Finally we sought evidence supporting the notion that short-term training is sufficient to drive top-down activity in brain regions that are not normally recruited by sensory induced bottom up processing. 18 non-musicians took part in a 30-channel EEG session that investigated the spatio-temporal dynamics of auditory imagery of "consonant-vowel" (CV) syllables and piano triads. To control for conditioning effects, we split the volunteers into two matched groups comprising the same conditions (visual, auditory or bimodal stimulation) presented in a slightly different serial order. Furthermore the study presents electromagnetic source localization (LORETA) of perception and imagery of CV- and piano stimuli. Our results imply that auditory imagery elicited similar electrophysiological effects at an early stage (N1/P2) as auditory stimulation. However, we found an additional LPC following the N1/P2 for auditory imagery only. Source estimation evinced bilateral engagement of anterior temporal cortex, which was generally stronger for imagery of music relative to imagery of speech. While we did not observe lateralized activity for the imagery of syllables we noted significantly increased rightward activation over the anterior supratemporal plane for musical imagery. Thus, we conclude that short-term top-down training based

  10. Binocular coordination in response to stereoscopic stimuli

    Science.gov (United States)

    Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.

    2009-02-01

    Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.

  11. Brainstem encoding of speech and musical stimuli in congenital amusia: Evidence from Cantonese speakers

    Directory of Open Access Journals (Sweden)

    Fang eLiu

    2015-01-01

    Full Text Available Congenital amusia is a neurodevelopmental disorder of musical processing that also impacts subtle aspects of speech processing. It remains debated at what stage(s) of auditory processing deficits in amusia arise. In this study, we investigated whether amusia originates from impaired subcortical encoding of speech (in quiet and noise) and musical sounds in the brainstem. Fourteen Cantonese-speaking amusics and 14 matched controls passively listened to six Cantonese lexical tones in quiet, two Cantonese tones in noise (signal-to-noise ratios at 0 and 20 dB), and two cello tones in quiet while their frequency-following responses (FFRs) to these tones were recorded. All participants also completed a behavioral lexical tone identification task. The results indicated normal brainstem encoding of pitch in speech (in quiet and noise) and musical stimuli in amusics relative to controls, as measured by FFR pitch strength, pitch error, and stimulus-to-response correlation. There was also no group difference in neural conduction time or FFR amplitudes. Both groups demonstrated better FFRs to speech (in quiet and noise) than to musical stimuli. However, a significant group difference was observed for tone identification, with amusics showing significantly lower accuracy than controls. Analysis of the tone confusion matrices suggested that amusics were more likely than controls to confuse between tones that shared similar acoustic features. Interestingly, this deficit in lexical tone identification was not coupled with brainstem abnormality for either speech or musical stimuli. Together, our results suggest that the amusic brainstem is not functioning abnormally, although higher-order linguistic pitch processing is impaired in amusia. This finding has significant implications for theories of central auditory processing, requiring further investigations into how different stages of auditory processing interact in the human brain.
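
    One of the FFR metrics named above, pitch strength, is commonly quantified as the peak of the normalized autocorrelation of the response within the expected pitch range. The sketch below applies that idea to a synthetic waveform; the 180 Hz "response" and the 80-300 Hz search range are illustrative assumptions, not the study's parameters.

      import numpy as np

      fs = 16000                                  # sample rate (Hz)
      t = np.arange(0, 0.2, 1 / fs)               # 200 ms response window
      rng = np.random.default_rng(1)
      ffr = np.sin(2 * np.pi * 180 * t) + 0.5 * rng.normal(size=t.size)

      x = ffr - ffr.mean()
      ac = np.correlate(x, x, mode="full")[x.size - 1:]   # autocorrelation, lags >= 0
      ac /= ac[0]                                         # normalize to 1 at lag 0
      lo, hi = int(fs / 300), int(fs / 80)                # lags spanning 300 Hz down to 80 Hz
      pitch_strength = ac[lo:hi].max()                    # near 1 = strongly periodic response
      print(round(pitch_strength, 2))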

  12. The unity assumption facilitates cross-modal binding of musical, non-speech stimuli: The role of spectral and amplitude envelope cues.

    Science.gov (United States)

    Chuen, Lorraine; Schutz, Michael

    2016-07-01

    An observer's inference that multimodal signals originate from a common underlying source facilitates cross-modal binding. This 'unity assumption' causes asynchronous auditory and visual speech streams to seem simultaneous (Vatakis & Spence, Perception & Psychophysics, 69(5), 744-756, 2007). Subsequent tests with non-speech stimuli such as musical and impact events found no evidence for the unity assumption, suggesting the effect is speech-specific (Vatakis & Spence, Acta Psychologica, 127(1), 12-23, 2008). However, the role of amplitude envelope (the change in a sound's energy over time) was not previously appreciated within this paradigm. Here, we explore whether previous findings suggesting speech-specificity of the unity assumption were confounded by similarities in the amplitude envelopes of the contrasted auditory stimuli. Experiment 1 used natural events with clearly differentiated envelopes: single notes played on either a cello (bowing motion) or a marimba (striking motion). Participants performed an unspeeded temporal order judgment task, viewing audiovisually matched (e.g., marimba audio with marimba video) and mismatched (e.g., cello audio with marimba video) versions of the stimuli at various stimulus onset asynchronies and indicating which modality was presented first. As predicted, participants were less sensitive to temporal order in the matched conditions, demonstrating that the unity assumption can facilitate the perception of synchrony outside of speech stimuli. Results from Experiments 2 and 3 revealed that when spectral information was removed from the original auditory stimuli, amplitude envelope alone could not facilitate the influence of audiovisual unity. We propose that both amplitude envelope and spectral acoustic cues affect the percept of audiovisual unity, working in concert to help an observer determine when to integrate across modalities.
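
    Sensitivity to temporal order in this paradigm is typically quantified by fitting a psychometric function to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs); the just noticeable difference (JND) falls out of the fitted slope, with a larger JND indicating lower sensitivity, as in the matched conditions. The Python sketch below fits a cumulative Gaussian to hypothetical data; the SOAs and response proportions are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Hypothetical TOJ data: SOA in ms (negative = auditory leads) and
        # the proportion of "visual first" responses at each SOA.
        soa = np.array([-200.0, -100.0, -50.0, 0.0, 50.0, 100.0, 200.0])
        p_vis_first = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

        def cgauss(x, pss, sigma):
            """Cumulative Gaussian psychometric function."""
            return norm.cdf(x, loc=pss, scale=sigma)

        (pss, sigma), _ = curve_fit(cgauss, soa, p_vis_first, p0=[0.0, 50.0])
        jnd = sigma * norm.ppf(0.75)  # SOA shift from 50% to 75% responses
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")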

  13. Syllabic (~2-5 Hz) and fluctuation (~1-10 Hz) ranges in speech and auditory processing

    Science.gov (United States)

    Edwards, Erik; Chang, Edward F.

    2013-01-01

    Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819
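
    The AM tones used to probe a temporal modulation transfer function follow a simple construction: a carrier multiplied by a slow sinusoidal envelope, y(t) = (1 + m sin(2 pi fm t)) sin(2 pi fc t). A short Python sketch of this construction follows; the carrier frequency, modulation depth, and duration are arbitrary choices for illustration, not parameters from the review.

        import numpy as np

        def am_tone(fc, fm, m, dur, fs=44100):
            """Sinusoidally amplitude-modulated tone:
            y(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), scaled to +/-1."""
            t = np.arange(int(dur * fs)) / fs
            y = (1.0 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)
            return y / (1.0 + m)

        # Modulation rates spanning the fluctuation range (~1-10 Hz),
        # including the syllabic range (~2-5 Hz) where sensitivity peaks.
        for fm in (1, 2, 4, 8, 10):
            y = am_tone(fc=1000.0, fm=fm, m=0.5, dur=1.0)
            print(f"fm = {fm:2d} Hz -> {len(y)} samples")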

  14. Development of a tactile stimulator with simultaneous visual and auditory stimulation using E-Prime software.

    Science.gov (United States)

    Kim, Hyung-Sik; Yeon, Hong-Won; Choi, Mi-Hyun; Kim, Ji-Hye; Choi, Jin-Seung; Park, Jang-Yeon; Jun, Jae-Hoon; Yi, Jeong-Han; Tack, Gye-Rae; Chung, Soon-Cheol

    2013-01-01

    In this study, a tactile stimulator was developed that can stimulate the visual and auditory senses simultaneously using the E-Prime software. The study aimed to address problems with systematic stimulation control and other issues found in previously developed tactile stimulators. The new system consists of three units: a control unit, a drive unit, and a vibrator. It is small and lightweight, has a simple structure and low power consumption, supports a maximum of 35 stimulation channels, and delivers various combinations of visual and auditory stimulation without delay, thereby correcting the problems of earlier systems. The system was designed to stimulate any part of the body, including the fingers. Because the stimulator uses E-Prime, software widely used in the study of the visual and auditory senses, it is expected to be highly practical, supporting diverse stimulus combinations such as tactile-visual, tactile-auditory, visual-auditory, and tactile-visual-auditory stimulation.
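
    The abstract does not specify the control protocol between the host PC and the control unit, so any code can only be a guess. As a purely hypothetical sketch of host-side triggering for such a multi-channel stimulator, one might send a channel-select byte and a duration code over a serial link (pyserial assumed; the byte format, port name, and baud rate are all invented for illustration):

        import serial  # pyserial; the control protocol below is hypothetical

        def pulse_channel(port, channel, duration_ms):
            """Hypothetical trigger: send a one-byte channel select (1-35)
            followed by a one-byte duration code to the control unit."""
            if not 1 <= channel <= 35:
                raise ValueError("the stimulator described has 35 channels")
            con = serial.Serial(port, baudrate=115200, timeout=1)
            try:
                con.write(bytes([channel, min(duration_ms // 10, 255)]))
            finally:
                con.close()

        # Example call (port name is an assumption):
        # pulse_channel("COM3", channel=7, duration_ms=100)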

  15. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Pegah Afra, Michael Funke, Fumisuke Matsuo; Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations; it has developmental, induced, and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system, as well as with temporal lobe pathology in the presence of intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as after intake of hallucinogenic or psychedelic drugs. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, imaging and neurophysiologic evidence points to early cross-modal interactions between the auditory and visual sensory pathways, and synesthesia may be a window of opportunity to study these interactions. Here we review the existing literature on acquired and induced auditory-visual synesthesia and discuss possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross-modal

  16. Primary Auditory Cortex is Required for Anticipatory Motor Response.

    Science.gov (United States)

    Li, Jingcheng; Liao, Xiang; Zhang, Jianxiong; Wang, Meng; Yang, Nian; Zhang, Jun; Lv, Guanghui; Li, Haohong; Lu, Jian; Ding, Ran; Li, Xingyi; Guang, Yu; Yang, Zhiqi; Qin, Han; Jin, Wenjun; Zhang, Kuan; He, Chao; Jia, Hongbo; Zeng, Shaoqun; Hu, Zhian; Nelken, Israel; Chen, Xiaowei

    2017-06-01

    The ability of the brain to predict future events based on the pattern of recent sensory experience is critical for guiding an animal's behavior. Neocortical circuits for the ongoing processing of sensory stimuli have been extensively studied, but their contributions to the anticipation of upcoming sensory stimuli remain less well understood. We therefore used in vivo cellular imaging and fiber photometry