WorldWideScience

Sample records for auditory temporal discriminations

  1. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians

    Directory of Open Access Journals (Sweden)

    Kumar, Prawin

    2015-12-01

Full Text Available Introduction Enhanced auditory perception in musicians is likely to result from auditory perceptual learning during several years of training and practice. Many studies have focused on biological processing of auditory stimuli among musicians. However, there is a lack of literature on temporal resolution and active auditory discrimination skills in vocal musicians. Objective The aim of the present study is to assess temporal resolution and active auditory discrimination skill in vocal musicians. Method The study participants included 15 vocal musicians with a minimum professional experience of 5 years of music exposure, within the age range of 20 to 30 years old, as the experimental group, while 15 age-matched non-musicians served as the control group. We used duration discrimination using pure tones, pulse-train duration discrimination, and gap detection threshold tasks to assess temporal processing skills in both groups. Similarly, we assessed active auditory discrimination skill in both groups using Differential Limen of Frequency (DLF). All tasks were done using MATLAB software installed on a personal computer at 40 dB SL with a maximum likelihood procedure. The collected data were analyzed using SPSS (version 17.0). Result Descriptive statistics showed better thresholds for vocal musicians compared with non-musicians for all tasks. Further, an independent t-test showed that vocal musicians performed significantly better than non-musicians on duration discrimination using pure tones, pulse-train duration discrimination, gap detection threshold, and differential limen of frequency. Conclusion The present study showed enhanced temporal resolution ability and better (lower) active discrimination thresholds in vocal musicians in comparison to non-musicians.

  2. Temporal Resolution and Active Auditory Discrimination Skill in Vocal Musicians.

    Science.gov (United States)

    Kumar, Prawin; Sanju, Himanshu Kumar; Nikhil, J

    2016-10-01

Introduction  Enhanced auditory perception in musicians is likely to result from auditory perceptual learning during several years of training and practice. Many studies have focused on biological processing of auditory stimuli among musicians. However, there is a lack of literature on temporal resolution and active auditory discrimination skills in vocal musicians. Objective  The aim of the present study is to assess temporal resolution and active auditory discrimination skill in vocal musicians. Method  The study participants included 15 vocal musicians with a minimum professional experience of 5 years of music exposure, within the age range of 20 to 30 years old, as the experimental group, while 15 age-matched non-musicians served as the control group. We used duration discrimination using pure tones, pulse-train duration discrimination, and gap detection threshold tasks to assess temporal processing skills in both groups. Similarly, we assessed active auditory discrimination skill in both groups using Differential Limen of Frequency (DLF). All tasks were done using MATLAB software installed on a personal computer at 40 dB SL with a maximum likelihood procedure. The collected data were analyzed using SPSS (version 17.0). Result  Descriptive statistics showed better thresholds for vocal musicians compared with non-musicians for all tasks. Further, an independent t-test showed that vocal musicians performed significantly better than non-musicians on duration discrimination using pure tones, pulse-train duration discrimination, gap detection threshold, and differential limen of frequency. Conclusion  The present study showed enhanced temporal resolution ability and better (lower) active discrimination thresholds in vocal musicians in comparison to non-musicians.
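
    The gap detection threshold task described in these two records hinges on presenting noise bursts with and without a brief silent interval. As a rough illustration (not the study's actual MATLAB code), the sketch below builds such a stimulus; the marker duration, level, and sampling rate are assumptions chosen for readability.

```python
import numpy as np

def gap_stimulus(gap_ms, marker_ms=300.0, fs=44100, rng=None):
    """Broadband noise burst with a silent gap of `gap_ms` in the middle.

    A listener's gap detection threshold is the shortest `gap_ms` that can
    still be distinguished from an uninterrupted burst (gap_ms = 0).
    Parameter values (marker duration, sampling rate) are illustrative only.
    """
    rng = rng or np.random.default_rng()
    marker = int(marker_ms / 1000 * fs)      # leading/trailing noise markers
    gap = int(gap_ms / 1000 * fs)            # silent interval between them
    noise = lambda n: rng.normal(0.0, 0.2, n)
    return np.concatenate([noise(marker), np.zeros(gap), noise(marker)])

# Example: a 5-ms gap target and a no-gap standard for a two-interval trial
target = gap_stimulus(5.0)
standard = gap_stimulus(0.0)
```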

  3. Inactivation of the left auditory cortex impairs temporal discrimination in the rat

    Czech Academy of Sciences Publication Activity Database

    Rybalko, Natalia; Šuta, Daniel; Popelář, Jiří; Syka, Josef

    2010-01-01

Vol. 209, No. 1 (2010), pp. 123-130. ISSN 0166-4328. R&D Projects: GA ČR GA309/07/1336; GA MŠk(CZ) LC554. Institutional research plan: CEZ:AV0Z50390512. Keywords: auditory cortex * temporal discrimination * hemispheric lateralization. Subject RIV: FH - Neurology. Impact factor: 3.393, year: 2010

  4. Modulation of auditory evoked responses to spectral and temporal changes by behavioral discrimination training

    Directory of Open Access Journals (Sweden)

    Okamoto Hidehiko

    2009-12-01

Full Text Available Abstract Background Due to auditory experience, musicians have better auditory expertise than non-musicians. An increased neocortical activity during auditory oddball stimulation was observed in different studies for musicians and for non-musicians after discrimination training. This suggests a modification of synaptic strength among simultaneously active neurons due to the training. We used amplitude-modulated (AM) tones presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians in order to see if behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses like N1 and mismatch negativity (MMN) triggered by sound changes were recorded by a whole-head magnetoencephalographic system (MEG). We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex. Results The results showed that, in addition to an improvement of the behavioral discrimination performance, discrimination training of carrier frequency changes significantly modulates the MMN and N1 response amplitudes after the training. This process was accompanied by an attention switch to the deviant stimulus after the training procedure, identified by the occurrence of a P3a component. In contrast, the training in discrimination of modulation frequency was not sufficient to improve the behavioral discrimination performance and to alter the cortical response (MMN) to the modulation frequency change. The N1 amplitude, however, showed a significant increase after and one week after the training. Similar to the training in carrier frequency discrimination, a long lasting
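
    The stimuli in this paradigm are amplitude-modulated tones whose carrier frequency or modulation frequency is changed in the deviants. A minimal sketch of how such tones could be synthesized is given below; the specific frequencies, duration, and modulation depth are illustrative assumptions, not values from the study.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, dur_s=0.5, depth=1.0, fs=44100):
    """Sinusoidally amplitude-modulated tone:
    sin(2*pi*fc*t) * (1 + depth*sin(2*pi*fm*t)) / 2."""
    t = np.arange(int(dur_s * fs)) / fs
    envelope = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / 2.0
    return np.sin(2 * np.pi * carrier_hz * t) * envelope

# Standard vs. two kinds of deviant: a carrier-frequency change or a
# modulation-frequency change (all frequency values here are assumptions).
standard        = am_tone(carrier_hz=500.0, mod_hz=40.0)
carrier_deviant = am_tone(carrier_hz=550.0, mod_hz=40.0)
mod_deviant     = am_tone(carrier_hz=500.0, mod_hz=47.0)
```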

  5. Temporal plasticity in auditory cortex improves neural discrimination of speech sounds.

    Science.gov (United States)

    Engineer, Crystal T; Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P

    Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of the MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e., rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.

  7. Auditory Discrimination and Auditory Memory as Predictors of Academic Success.

    Science.gov (United States)

    Warnock, Mairi; Boss, Marvin W.

    1987-01-01

    Eighty fourth-graders enrolled in an English/French bilingual program in Canada were administered an auditory skills battery of six tests to measure auditory discrimination and short-term auditory memory. It was concluded that a relationship exists between certain auditory perceptual abilities and school achievement independent of cognitive…

  8. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  9. Representation of complex vocalizations in the Lusitanian toadfish auditory system: evidence of fine temporal, frequency and amplitude discrimination

    Science.gov (United States)

    Vasconcelos, Raquel O.; Fonseca, Paulo J.; Amorim, M. Clara P.; Ladich, Friedrich

    2011-01-01

    Many fishes rely on their auditory skills to interpret crucial information about predators and prey, and to communicate intraspecifically. Few studies, however, have examined how complex natural sounds are perceived in fishes. We investigated the representation of conspecific mating and agonistic calls in the auditory system of the Lusitanian toadfish Halobatrachus didactylus, and analysed auditory responses to heterospecific signals from ecologically relevant species: a sympatric vocal fish (meagre Argyrosomus regius) and a potential predator (dolphin Tursiops truncatus). Using auditory evoked potential (AEP) recordings, we showed that both sexes can resolve fine features of conspecific calls. The toadfish auditory system was most sensitive to frequencies well represented in the conspecific vocalizations (namely the mating boatwhistle), and revealed a fine representation of duration and pulsed structure of agonistic and mating calls. Stimuli and corresponding AEP amplitudes were highly correlated, indicating an accurate encoding of amplitude modulation. Moreover, Lusitanian toadfish were able to detect T. truncatus foraging sounds and A. regius calls, although at higher amplitudes. We provide strong evidence that the auditory system of a vocal fish, lacking accessory hearing structures, is capable of resolving fine features of complex vocalizations that are probably important for intraspecific communication and other relevant stimuli from the auditory scene. PMID:20861044

  10. Auditory memory for temporal characteristics of sound.

    Science.gov (United States)

    Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike

    2008-05-01

    This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.
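
    To make the delayed non-matching-to-sample logic concrete, the sketch below assembles one trial from sinusoidally amplitude-modulated (SAM) noise bursts: repeated sample bursts, a retention delay, then a test burst, with the correct answer defined by whether the SAM rates match. Burst duration, modulation depth, delay length, and the number of samples are assumptions for illustration only, not the values used with the starlings.

```python
import numpy as np

def sam_noise(rate_hz, dur_s=0.4, depth=1.0, fs=44100, rng=None):
    """Noise burst whose amplitude is sinusoidally modulated at `rate_hz`."""
    rng = rng or np.random.default_rng()
    t = np.arange(int(dur_s * fs)) / fs
    carrier = rng.normal(0.0, 0.2, t.size)
    return carrier * (1.0 + depth * np.sin(2 * np.pi * rate_hz * t)) / 2.0

def dnms_trial(sample_rate_hz, test_rate_hz, n_samples=3, delay_s=2.0, fs=44100):
    """Build one delayed non-matching-to-sample trial: repeated sample bursts,
    a silent retention delay, then a single test burst. The correct response
    is 'different' whenever the test SAM rate differs from the sample rate.
    All timing values here are illustrative assumptions."""
    silence = np.zeros(int(delay_s * fs))
    samples = [sam_noise(sample_rate_hz, fs=fs) for _ in range(n_samples)]
    waveform = np.concatenate(samples + [silence, sam_noise(test_rate_hz, fs=fs)])
    correct = "different" if test_rate_hz != sample_rate_hz else "same"
    return waveform, correct
```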

  11. Temporal-order judgment of visual and auditory stimuli: Modulations in situations with and without stimulus discrimination

    Directory of Open Access Journals (Sweden)

    Elisabeth eHendrich

    2012-08-01

Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm to investigate processing times of information in different modalities. There are many studies on how temporal order decisions can be influenced by stimulus characteristics. However, so far it has not been investigated whether the addition of a choice reaction time task has an influence on temporal-order judgment. Moreover, it is not known when during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations have an influence on temporal-order judgment and to determine thereby the time of processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does have an influence on the temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal order decision in the dual-task paradigm is made after perceptual processing of the stimuli.

  12. Effects of Temporal Sequencing and Auditory Discrimination on Children's Memory Patterns for Tones, Numbers, and Nonsense Words

    Science.gov (United States)

    Gromko, Joyce Eastlund; Hansen, Dee; Tortora, Anne Halloran; Higgins, Daniel; Boccia, Eric

    2009-01-01

    The purpose of this study was to determine whether children's recall of tones, numbers, and words was supported by a common temporal sequencing mechanism; whether children's patterns of memory for tones, numbers, and nonsense words were the same despite differences in symbol systems; and whether children's recall of tones, numbers, and nonsense…

  13. Auditory Discrimination Learning: Role of Working Memory.

    Science.gov (United States)

    Zhang, Yu-Xuan; Moore, David R; Guiraud, Jeanne; Molloy, Katharine; Yan, Ting-Ting; Amitay, Sygal

    2016-01-01

    Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretic framework for interactions between cognitive and sensory plasticity during perceptual experience.

  14. Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    Science.gov (United States)

    Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean

    2015-01-01

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…

  15. Spatiotemporal Relationships among Audiovisual Stimuli Modulate Auditory Facilitation of Visual Target Discrimination.

    Science.gov (United States)

    Li, Qi; Yang, Huamin; Sun, Fang; Wu, Jinglong

    2015-03-01

    Sensory information is multimodal; through audiovisual interaction, task-irrelevant auditory stimuli tend to speed response times and increase visual perception accuracy. However, mechanisms underlying these performance enhancements have remained unclear. We hypothesize that task-irrelevant auditory stimuli might provide reliable temporal and spatial cues for visual target discrimination and behavioral response enhancement. Using signal detection theory, the present study investigated the effects of spatiotemporal relationships on auditory facilitation of visual target discrimination. Three experiments were conducted where an auditory stimulus maintained reliable temporal and/or spatial relationships with visual target stimuli. Results showed that perception sensitivity (d') to visual target stimuli was enhanced only when a task-irrelevant auditory stimulus maintained reliable spatiotemporal relationships with a visual target stimulus. When only reliable spatial or temporal information was contained, perception sensitivity was not enhanced. These results suggest that reliable spatiotemporal relationships between visual and auditory signals are required for audiovisual integration during a visual discrimination task, most likely due to a spread of attention. These results also indicate that auditory facilitation of visual target discrimination follows from late-stage cognitive processes rather than early stage sensory processes. © 2015 SAGE Publications.
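
    Perception sensitivity d' in signal detection theory is computed from hit and false-alarm rates as the difference of their z-transforms. A small self-contained sketch follows; the log-linear correction for extreme rates is an assumption for numerical safety, not a detail reported in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate).

    A small log-linear correction keeps z() finite when a raw rate is 0 or 1;
    the choice of correction is an assumption, not taken from the study.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hr) - norm.ppf(far)

# Example: 42 hits / 8 misses vs. 12 false alarms / 38 correct rejections
print(round(d_prime(42, 8, 12, 38), 2))
```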

  16. MODELING SPECTRAL AND TEMPORAL MASKING IN THE HUMAN AUDITORY SYSTEM

    DEFF Research Database (Denmark)

    Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.

    2007-01-01

An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)] but includes … The model was tested in conditions of tone-in-noise masking, intensity discrimination, spectral masking with tones and narrowband noises, forward masking with (on- and off-frequency) noise- and pure-tone maskers, and amplitude modulation detection using different noise carrier bandwidths. One of the key properties …

  17. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    Science.gov (United States)

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the
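
    A toy simulation can make the pooling idea concrete: Poisson responses of a few model neurons to two songs are classified with a leave-one-out nearest-template rule, once for a single neuron and once for the summed (pooled) responses. The firing-rate profiles, trial counts, and classifier below are assumptions for illustration, not the authors' biologically realistic models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_trials, n_neurons = 50, 40, 4

# Assumed firing-rate profiles (spikes/bin) of each model neuron to two songs.
rates = {song: rng.uniform(0.5, 3.0, size=(n_neurons, n_bins)) for song in ("A", "B")}

def trials(song):
    """Poisson spike-count responses, shape (n_trials, n_neurons, n_bins)."""
    return rng.poisson(rates[song], size=(n_trials, n_neurons, n_bins))

resp = {song: trials(song) for song in ("A", "B")}

def accuracy(pool):
    """Leave-one-out nearest-template classification on pooled responses.
    `pool` selects which neurons are summed before classification."""
    correct = 0
    for song in ("A", "B"):
        pooled = {s: resp[s][:, pool, :].sum(axis=1) for s in ("A", "B")}
        for i in range(n_trials):
            templates = {s: np.delete(pooled[s], i, axis=0).mean(axis=0) if s == song
                         else pooled[s].mean(axis=0) for s in ("A", "B")}
            trial = pooled[song][i]
            guess = min(("A", "B"), key=lambda s: np.sum((trial - templates[s]) ** 2))
            correct += guess == song
    return correct / (2 * n_trials)

print("single neuron:", accuracy([0]), " pooled:", accuracy(list(range(n_neurons))))
```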

  18. Early auditory enrichment with music enhances auditory discrimination learning and alters NR2B protein expression in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2009-01-03

Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In our present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in the NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.

  19. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli

    Directory of Open Access Journals (Sweden)

    Sanju, Himanshu Kumar

    2015-12-01

Full Text Available Introduction Mismatch Negativity is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method Seventeen normal-hearing individuals participated in the study. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded mismatch negativity (MMN) with a pair of stimuli (pure tones), using /1000 Hz/ and /1010 Hz/, with /1000 Hz/ as the frequent stimulus and /1010 Hz/ as the infrequent stimulus. Similarly, we used /1000 Hz/ and /1100 Hz/, with /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus, to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. The study included 17 subjects with informed consent. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result Results revealed that MMN was present in only 64% of the individuals in both conditions. Further, Multivariate Analysis of Variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion The present study showed similar pre-attentive skills for both conditions: fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli at a higher (endogenous) level of the auditory system.

  20. Comparison of Pre-Attentive Auditory Discrimination at Gross and Fine Difference between Auditory Stimuli.

    Science.gov (United States)

    Sanju, Himanshu Kumar; Kumar, Prawin

    2016-10-01

Introduction  Mismatch Negativity is a negative component of the event-related potential (ERP) elicited by any discriminable change in auditory stimulation. Objective  The present study aimed to assess pre-attentive auditory discrimination skill with fine and gross differences between auditory stimuli. Method  Seventeen normal-hearing individuals participated in the study. To assess pre-attentive auditory discrimination skill with a fine difference between auditory stimuli, we recorded mismatch negativity (MMN) with a pair of stimuli (pure tones), using /1000 Hz/ and /1010 Hz/, with /1000 Hz/ as the frequent stimulus and /1010 Hz/ as the infrequent stimulus. Similarly, we used /1000 Hz/ and /1100 Hz/, with /1000 Hz/ as the frequent stimulus and /1100 Hz/ as the infrequent stimulus, to assess pre-attentive auditory discrimination skill with a gross difference between auditory stimuli. The study included 17 subjects with informed consent. We analyzed the MMN for onset latency, offset latency, peak latency, peak amplitude, and area under the curve. Result  Results revealed that MMN was present in only 64% of the individuals in both conditions. Further, Multivariate Analysis of Variance (MANOVA) showed no significant difference in any measure of MMN (onset latency, offset latency, peak latency, peak amplitude, and area under the curve) between the two conditions. Conclusion  The present study showed similar pre-attentive skills for both conditions: fine (1000 Hz and 1010 Hz) and gross (1000 Hz and 1100 Hz) differences in auditory stimuli at a higher (endogenous) level of the auditory system.
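
    The passive MMN paradigm used in these two records boils down to an oddball sequence of pure tones with a rare frequency deviant. A minimal sketch of such a sequence for the fine (1010 Hz) and gross (1100 Hz) contrasts follows; the deviant probability, tone duration, and stimulus count are assumptions, as the abstract does not report them.

```python
import numpy as np

def pure_tone(freq_hz, dur_s=0.1, fs=44100):
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def oddball_sequence(standard_hz, deviant_hz, n_stimuli=500, p_deviant=0.1, rng=None):
    """Random oddball sequence of pure tones for an MMN recording.
    Deviant probability, tone duration, and count are illustrative only."""
    rng = rng or np.random.default_rng()
    is_deviant = rng.random(n_stimuli) < p_deviant
    freqs = np.where(is_deviant, deviant_hz, standard_hz)
    return [pure_tone(f) for f in freqs], is_deviant

# Fine contrast (1000 vs. 1010 Hz) and gross contrast (1000 vs. 1100 Hz)
fine_tones, fine_labels = oddball_sequence(1000.0, 1010.0)
gross_tones, gross_labels = oddball_sequence(1000.0, 1100.0)
```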

  1. Auditory capture of visual motion: effects on perception and discrimination.

    Science.gov (United States)

    McCourt, Mark E; Leone, Lynnette M

    2016-09-28

    We asked whether the perceived direction of visual motion and contrast thresholds for motion discrimination are influenced by the concurrent motion of an auditory sound source. Visual motion stimuli were counterphasing Gabor patches, whose net motion energy was manipulated by adjusting the contrast of the leftward-moving and rightward-moving components. The presentation of these visual stimuli was paired with the simultaneous presentation of auditory stimuli, whose apparent motion in 3D auditory space (rightward, leftward, static, no sound) was manipulated using interaural time and intensity differences, and Doppler cues. In experiment 1, observers judged whether the Gabor visual stimulus appeared to move rightward or leftward. In experiment 2, contrast discrimination thresholds for detecting the interval containing unequal (rightward or leftward) visual motion energy were obtained under the same auditory conditions. Experiment 1 showed that the perceived direction of ambiguous visual motion is powerfully influenced by concurrent auditory motion, such that auditory motion 'captured' ambiguous visual motion. Experiment 2 showed that this interaction occurs at a sensory stage of processing as visual contrast discrimination thresholds (a criterion-free measure of sensitivity) were significantly elevated when paired with congruent auditory motion. These results suggest that auditory and visual motion signals are integrated and combined into a supramodal (audiovisual) representation of motion.
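
    The visual stimulus logic, a counterphasing Gabor whose net motion energy is set by the contrasts of the leftward-moving and rightward-moving components, can be sketched as the sum of two oppositely drifting sinusoids under a Gaussian window. The spatial and temporal frequencies and window size below are assumptions, not the study's parameters.

```python
import numpy as np

def counterphase_gabor(c_left, c_right, sf=2.0, tf=4.0, size=1.0,
                       fs_x=100, fs_t=60, dur_s=1.0):
    """Space-time luminance of a 1-D Gabor built from leftward- and
    rightward-drifting components; unequal contrasts (c_left vs. c_right)
    bias the net motion energy. All parameter values are illustrative."""
    x = np.linspace(-size, size, int(2 * size * fs_x))       # degrees
    t = np.arange(int(dur_s * fs_t)) / fs_t                  # seconds
    X, T = np.meshgrid(x, t)                                 # (time, space)
    envelope = np.exp(-X ** 2 / (2 * (size / 3) ** 2))       # Gaussian window
    left = c_left * np.sin(2 * np.pi * (sf * X + tf * T))    # drifts leftward
    right = c_right * np.sin(2 * np.pi * (sf * X - tf * T))  # drifts rightward
    return envelope * (left + right)

balanced = counterphase_gabor(0.25, 0.25)  # pure counterphase, ambiguous motion
biased   = counterphase_gabor(0.30, 0.20)  # net rightward motion energy
```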

  2. Longitudinal changes in auditory discrimination in normal children and children with language-learning problems.

    Science.gov (United States)

    Elliott, L L; Hammer, M A

    1988-11-01

    Two groups of children--one progressing normally in school and the other exhibiting language-learning problems--were tested in each of 3 years on a set of fine-grained auditory discrimination tasks that required listening for small acoustic differences. Children's ages ranged from 6 to 9 years; there were 21 children per group. The children with language-learning problems, despite having normal intelligence and normal pure-tone sensitivity, showed poorer auditory discrimination than normal children for temporally based acoustic differences. This effect continued across the 3 years. Children with language-learning problems also exhibited poorer receptive vocabulary and language performance as well as more deviations from standard Midwest articulation than children making normal progress in school. All children had hearing within the normal range, but at some frequencies there was a significant association of pure-tone sensitivity with performance on the auditory discrimination, receptive language, and speech production tasks.

  3. Auditory Discrimination as a Condition for E-Learning Based Speech Therapy: A Proposal for an Auditory Discrimination Test (ADT) for Adult Dysarthric Speakers

    Science.gov (United States)

    Beijer, L. J.; Rietveld, A. C. M.; van Stiphout, A. J. L.

    2011-01-01

    Background: Web based speech training for dysarthric speakers, such as E-learning based Speech Therapy (EST), puts considerable demands on auditory discrimination abilities. Aims: To discuss the development and the evaluation of an auditory discrimination test (ADT) for the assessment of auditory speech discrimination skills in Dutch adult…

  4. Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…

  5. Auditory temporal resolution threshold in elderly individuals.

    Science.gov (United States)

    Queiroz, Daniela Soares de; Momensohn-Santos, Teresa Maria; Branco-Barreiro, Fátima Cristina Alves

    2010-01-01

The Random Gap Detection Test (RGDT) evaluates temporal resolution threshold. There are doubts as to whether performance in this task remains unchanged with the aging process. At the same time, there is a concern about how much the difficulties of communication experienced by elderly individuals are related to the deterioration of temporal resolution. To determine auditory temporal resolution threshold in elderly individuals with normal peripheral hearing or symmetric mild sensorineural hearing loss, and to correlate findings with gender, age, audiometric findings and scores obtained in the Self-Assessment of Communication (SAC) questionnaire. 63 elderly individuals, aged between 60 and 80 years (53 women and 10 men), were submitted to the RGDT and the SAC. Statistical analysis of the relationship between gender and the RGDT indicated that the performance of elderly females was statistically poorer when compared to elderly males. Age and audiometric configuration did not correlate with performance in the RGDT and in the SAC. The results indicate that in the SAC both genders presented no significant complaints about communication difficulties regardless of the outcome obtained in the RGDT or audiometric configuration. The average temporal resolution threshold for women was 104.81 ms. Considering gender, females did not present correlations between age and audiometric configuration, not only when considering the RGDT results but also when analyzing the SAC results.

  6. Temporal expectation weights visual signals over auditory signals.

    Science.gov (United States)

    Menceloglu, Melisa; Grabowecky, Marcia; Suzuki, Satoru

    2017-04-01

    Temporal expectation is a process by which people use temporally structured sensory information to explicitly or implicitly predict the onset and/or the duration of future events. Because timing plays a critical role in crossmodal interactions, we investigated how temporal expectation influenced auditory-visual interaction, using an auditory-visual crossmodal congruity effect as a measure of crossmodal interaction. For auditory identification, an incongruent visual stimulus produced stronger interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. In contrast, for visual identification, an incongruent auditory stimulus produced weaker interference when the crossmodal stimulus was presented with an expected rather than an unexpected timing. The fact that temporal expectation made visual distractors more potent and visual targets less susceptible to auditory interference suggests that temporal expectation increases the perceptual weight of visual signals.

  7. Mind the gap: temporal discrimination and dystonia.

    Science.gov (United States)

    Sadnicka, A; Daum, C; Cordivari, C; Bhatia, K P; Rothwell, J C; Manohar, S; Edwards, M J

    2017-06-01

One of the most widely studied perceptual measures of sensory dysfunction in dystonia is the temporal discrimination threshold (TDT) (the shortest interval at which subjects can perceive that there are two stimuli rather than one). However, the elevated thresholds described may be due to a number of potential mechanisms, as current paradigms test not only temporal discrimination but also extraneous sensory and decision-making parameters. In this study two paradigms designed to better quantify temporal processing are presented and a decision-making model is used to assess the influence of decision strategy. 22 patients with cervical dystonia and 22 age-matched controls completed two tasks: (i) temporal resolution (a randomized, automated version of existing TDT paradigms) and (ii) interval discrimination (rating the length of two consecutive intervals). In the temporal resolution task, patients had delayed (P = 0.021) and more variable (P = 0.013) response times but equivalent discrimination thresholds. Modelling these effects suggested this was due to an increased perceptual decision boundary in dystonia, with patients requiring greater evidence before committing to decisions (P = 0.020). Patient performance on the interval discrimination task was normal. Our work suggests that previously observed abnormalities in TDT may not be due to a selective sensory deficit of temporal processing, as decision-making itself is abnormal in cervical dystonia. © 2017 EAN.

  8. Hand proximity facilitates spatial discrimination of auditory tones

    Science.gov (United States)

    Tseng, Philip; Yu, Jiaxin; Tzeng, Ovid J. L.; Hung, Daisy L.; Juan, Chi-Hung

    2014-01-01

The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination (Experiment 1: left or right side), pitch discrimination (Experiment 2: high, med, or low tone), and spatial-plus-pitch (Experiment 3: left or right; high, med, or low) discrimination task. In Experiment 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time (RT). No effect of hand proximity was found in Experiments 2 or 3, where a choice RT task requiring pitch discrimination was used. Together, these results suggest that the perceptual and attentional effect of hand proximity is not limited to one specific modality, but is applicable to the entire "space" near the hands, including stimuli of a different modality (at least visual and auditory) within that space. While these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. (2006), we also discuss the possibility of a dual mechanism hypothesis to reconcile findings from the multimodal and magno/parvocellular account. PMID:24966839

  9. Auditory Phoneme Discrimination in Illiterates: Mismatch Negativity--A Question of Literacy?

    Science.gov (United States)

    Schaadt, Gesa; Pannekamp, Ann; van der Meer, Elke

    2013-01-01

    These days, illiteracy is still a major problem. There is empirical evidence that auditory phoneme discrimination is one of the factors contributing to written language acquisition. The current study investigated auditory phoneme discrimination in participants who did not acquire written language sufficiently. Auditory phoneme discrimination was…

  10. Evaluation of central auditory discrimination abilities in older adults.

    Directory of Open Access Journals (Sweden)

    Claudia eFreigang

    2011-05-01

Full Text Available The present study focuses on auditory discrimination abilities in older adults aged 65-89 years. We applied the 'Leipzig Inventory for Patient Psychoacoustic' (LIPP), a psychoacoustic test battery specifically designed to identify deficits in central auditory processing. These tests quantify the just noticeable differences (JND) for the three basic acoustic parameters (i.e., frequency, intensity, and signal duration). Three different test modes (monaural, dichotic signal/noise [s/n] and interaural) were used; the stimulus level was 35 dB sensation level. The tests are designed as a three-alternative forced-choice procedure with a maximum-likelihood procedure estimating the p = 0.5 correct-response value. These procedures have proven to be highly efficient and provide a reliable outcome. The measurements yielded significant age-dependent deteriorations in the ability to discriminate single acoustic features, pointing to progressive impairments in central auditory processing. The degree of deterioration was correlated to the different acoustic features and to the test modes. Most prominently, interaural frequency and signal duration discrimination at low test frequencies was elevated, which indicates a deterioration of time- and phase-dependent processing at brainstem and cortical levels. LIPP proves to be an effective tool to identify basic pathophysiological mechanisms and the source of a specific impairment in auditory processing of the elderly.
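
    The maximum-likelihood procedure mentioned here can be sketched as follows: after each three-alternative forced-choice trial, the likelihood of every candidate threshold is updated and the next stimulus is placed at the current maximum-likelihood estimate. The psychometric function shape, slope, lapse rate, and candidate grid below are assumptions; this is a generic sketch, not the LIPP implementation.

```python
import numpy as np

def ml_3afc_track(respond, candidates, n_trials=40, slope=1.0, guess=1/3, lapse=0.02):
    """Maximum-likelihood adaptive track for a 3AFC discrimination task.

    `respond(delta)` runs one trial at signal difference `delta` and returns
    True for a correct response. After each trial the log-likelihood of every
    candidate threshold is updated and the next trial is placed at the current
    maximum-likelihood estimate. Psychometric shape and grid are assumptions.
    """
    def p_correct(delta, thresh):
        # Logistic psychometric function rising from the 1/3 guess rate.
        p = 1.0 / (1.0 + np.exp(-slope * (delta - thresh)))
        return guess + (1.0 - guess - lapse) * p

    log_like = np.zeros(len(candidates))
    delta = candidates[-1]                    # start with an easy stimulus
    for _ in range(n_trials):
        correct = respond(delta)
        p = p_correct(delta, candidates)
        log_like += np.log(p if correct else 1.0 - p)
        delta = candidates[int(np.argmax(log_like))]   # next trial at ML estimate
    return delta                              # ML threshold estimate

# Example with a simulated listener whose true threshold is 5 (arbitrary units)
rng = np.random.default_rng(1)
simulated = lambda d: rng.random() < (1/3 + (2/3 - 0.02) / (1 + np.exp(-(d - 5.0))))
print(ml_3afc_track(simulated, candidates=np.linspace(0.5, 20, 40)))
```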

  11. Hand proximity facilitates spatial discrimination of auditory tones

    Directory of Open Access Journals (Sweden)

    Philip eTseng

    2014-06-01

Full Text Available The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination (Exp 1: left or right side), pitch discrimination (Exp 2: high, med, or low tone), and spatial-plus-pitch (Exp 3: left or right; high, med, or low) discrimination task. In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.

  12. Auditory temporal-order thresholds show no gender differences

    NARCIS (Netherlands)

van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.

    2007-01-01

    Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in

  13. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko eOkamoto

    2012-05-01

Full Text Available Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  14. Effects of transient auditory deprivation during critical periods on the development of auditory temporal processing.

    Science.gov (United States)

    Kim, Bong Jik; Kim, Jungyoon; Park, Il-Yong; Jung, Jae Yun; Suh, Myung-Whan; Oh, Seung-Ha

    2018-01-01

The central auditory pathway matures through sensory experiences, and it is known that sensory experiences during periods called critical periods exert an important influence on brain development. The present study aimed to investigate whether temporary auditory deprivation during critical periods (CPs) could have a detrimental effect on the development of auditory temporal processing. Twelve neonatal rats were randomly assigned to control and study groups; the study group experienced temporary (18-20 days) auditory deprivation during CPs (early deprivation study group). Outcome measures included changes in auditory brainstem response (ABR), gap prepulse inhibition of the acoustic startle reflex (GPIAS), and gap detection threshold (GDT). To further delineate the specific role of CPs in the outcome measures above, the same paradigm was applied in adult rats (late deprivation group) and the findings were compared with those of the neonatal rats. Soon after the restoration of hearing, early deprivation study animals showed a significantly lower GPIAS at intermediate gap durations and a larger GDT than early deprivation controls, but these differences became insignificant after subsequent auditory inputs. Additionally, the ABR results showed significantly delayed latencies of waves IV and V, and interpeak latencies of waves I-III and I-V, in the study group. The late deprivation group did not exhibit any deterioration in temporal processing following sensory deprivation. Taken together, the present results suggest that transient auditory deprivation during CPs might cause reversible disruptions in the development of temporal processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Stability of auditory discrimination and novelty processing in physiological aging.

    Science.gov (United States)

    Raggi, Alberto; Tasca, Domenica; Rundo, Francesco; Ferri, Raffaele

    2013-01-01

Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with an auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by a stability of auditory discrimination and novelty processing. This is important for the arrangement of normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  16. Stability of Auditory Discrimination and Novelty Processing in Physiological Aging

    Directory of Open Access Journals (Sweden)

    Alberto Raggi

    2013-01-01

Full Text Available Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine if N100, MMN and P3a parameters are stable in healthy aged subjects, compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments and their ERPs were obtained with an auditory stimulation at two different interstimulus intervals, during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by a stability of auditory discrimination and novelty processing. This is important for the arrangement of normative data for the detection of subtle preclinical changes due to abnormal brain aging.

  17. Temporal Organization of Sound Information in Auditory Memory.

    Science.gov (United States)

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.
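
    The key manipulation, locally reversing a noise waveform within chunks of a given temporal scale, is simple to express in code. The sketch below is a plain illustration of that manipulation; the sampling rate and the handling of a trailing partial chunk are assumptions, not details from the study.

```python
import numpy as np

def locally_reverse(signal, scale_ms, fs=44100):
    """Reverse a waveform inside consecutive chunks of `scale_ms` milliseconds,
    leaving the order of the chunks themselves intact. With a chunk as long as
    the whole signal this is full reversal; very short chunks leave the sound
    nearly unchanged. Handling of the trailing remainder is an assumption.
    """
    chunk = max(1, int(scale_ms / 1000 * fs))
    pieces = [signal[i:i + chunk][::-1] for i in range(0, len(signal), chunk)]
    return np.concatenate(pieces)

rng = np.random.default_rng(2)
noise = rng.normal(0.0, 0.2, 44100)             # 1 s of white noise
reversed_200ms = locally_reverse(noise, 200.0)  # reversal at the 200-ms scale
```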

  18. Temporal Organization of Sound Information in Auditory Memory

    Directory of Open Access Journals (Sweden)

    Kun Song

    2017-06-01

    Full Text Available Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation.

  19. Auditory temporal processing skills in musicians with dyslexia.

    Science.gov (United States)

    Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha

    2014-08-01

    The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.

  20. Temporal factors affecting somatosensory-auditory interactions in speech processing

    Directory of Open Access Journals (Sweden)

    Takayuki eIto

    2014-11-01

Full Text Available Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has been shown also to influence speech perceptual processing (Ito et al., 2009). In the present study we addressed further the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined the changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to individual unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing is dependent on the specific temporal order of sensory inputs in speech production.

  1. Evaluation of auditory processing and phonemic discrimination in children with normal and disordered phonological development.

    Science.gov (United States)

    Attoni, Tiago Mendonça; Quintas, Victor Gandra; Mota, Helena Bolli

    2010-01-01

Auditory processing and phonemic discrimination are essential for communication. Retrospective. To evaluate auditory processing and phonemic discrimination in children with normal and disordered phonological development. An evaluation of 46 children was carried out: 22 had phonological disorders and 24 had normally developing speech. Diotic, monotic and dichotic tests were applied to assess auditory processing, along with a test to evaluate phonemic discrimination abilities. Cross-sectional, contemporary. The values of normally developing children were within the normal range in all auditory processing tests; these children attained maximum phonemic discrimination test scores. Children with phonological disorders performed worse in the latter, and presented disordered auditory processing. Auditory processing and phonemic discrimination in children with phonological disorders are altered.

  2. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
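
    The between-area comparison above rests on a signal-to-noise ratio tied to the response at the amplitude-modulation frequency. As a hedged illustration only, and not the authors' exact metric, the sketch below estimates such an AM-response SNR from the power spectrum of a simulated local field potential; the function name and the neighbouring-bin noise estimate are assumptions.

    ```python
    import numpy as np

    def am_response_snr(lfp, fs, mod_freq, n_neighbors=5):
        """Illustrative AM-following-response SNR: spectral power at the
        modulation frequency divided by the mean power of neighbouring bins."""
        spectrum = np.abs(np.fft.rfft(lfp * np.hanning(len(lfp)))) ** 2
        freqs = np.fft.rfftfreq(len(lfp), d=1.0 / fs)
        target = np.argmin(np.abs(freqs - mod_freq))
        neighbors = np.r_[target - n_neighbors:target, target + 1:target + n_neighbors + 1]
        return spectrum[target] / spectrum[neighbors].mean()

    # Toy example: a 16 Hz AM-following response buried in noise.
    fs, dur, mf = 1000.0, 2.0, 16.0
    t = np.arange(0, dur, 1.0 / fs)
    lfp = 0.5 * np.sin(2 * np.pi * mf * t) + np.random.randn(t.size)
    print(am_response_snr(lfp, fs, mf))
    ```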

  3. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    Science.gov (United States)

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.

  4. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
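
    The abstract reports reliable phase tracking in theta and low-gamma bands. One common way to quantify phase tracking across trials is inter-trial phase coherence; the sketch below assumes that measure and is purely illustrative of the idea, not the authors' MEG analysis pipeline.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def itpc(trials, fs, band):
        """Inter-trial phase coherence: band-pass each trial, take the analytic
        phase, and average the unit phase vectors across trials.
        `trials` has shape (n_trials, n_samples)."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
        phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
        return np.abs(np.mean(np.exp(1j * phases), axis=0))  # shape (n_samples,)

    # Toy example: 50 trials containing a phase-locked 6 Hz (theta) component.
    fs = 250.0
    t = np.arange(0, 2.0, 1.0 / fs)
    trials = 0.8 * np.sin(2 * np.pi * 6 * t) + np.random.randn(50, t.size)
    print(itpc(trials, fs, (4.0, 8.0)).mean())
    ```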

  5. Temporal Organization of Sound Information in Auditory Memory

    OpenAIRE

    Song, Kun; Luo, Huan

    2017-01-01

    Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed ...

  6. Non-verbal auditory cognition in patients with temporal epilepsy before and after anterior temporal lobectomy

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-11-01

    Full Text Available For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL) - i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole and the most anterior part of the temporal gyri - is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds. These tests evaluated auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention, and in identification and short-term memorisation of environmental sounds, but were not impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.

  7. Auditory adaptation improves tactile frequency perception

    NARCIS (Netherlands)

    Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.

    2017-01-01

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals

  8. Improving the reading skills of Jordanian students with auditory discrimination problems

    OpenAIRE

    Afaf Abdullah Mukdadi; Abdul-Monim Batiha; Jose Luis Ortega Martin

    2016-01-01

    Background: Some of the developmental problems facing students with difficulties in learning are those related to auditory perception which, in turn, can negatively affect the individual’s learning process. Aim: Evaluating a training program prepared to develop the auditory discrimination skills of students who suffer from auditory discrimination problems. Design: A quasi-experimental research design was used in this study. The study sample was divided into two equal groups: experimen...

  9. Auditory temporal processing in healthy aging: a magnetoencephalographic study

    Directory of Open Access Journals (Sweden)

    Manemann Elisabeth

    2009-04-01

    Full Text Available Abstract Background Impaired speech perception is one of the major sequelae of aging. In addition to peripheral hearing loss, central deficits of auditory processing are supposed to contribute to the deterioration of speech perception in older individuals. To test the hypothesis that auditory temporal processing is compromised in aging, auditory evoked magnetic fields were recorded during stimulation with sequences of 4 rapidly recurring speech sounds in 28 healthy individuals aged 20 – 78 years. Results The decrement of the N1m amplitude during rapid auditory stimulation was not significantly different between older and younger adults. The amplitudes of the middle-latency P1m wave and of the long-latency N1m, however, were significantly larger in older than in younger participants. Conclusion The results of the present study do not provide evidence for the hypothesis that auditory temporal processing, as measured by the decrement (short-term habituation of the major auditory evoked component, the N1m wave, is impaired in aging. The differences between these magnetoencephalographic findings and previously published behavioral data might be explained by differences in the experimental setting between the present study and previous behavioral studies, in terms of speech rate, attention, and masking noise. Significantly larger amplitudes of the P1m and N1m waves suggest that the cortical processing of individual sounds differs between younger and older individuals. This result adds to the growing evidence that brain functions, such as sensory processing, motor control and cognitive processing, can change during healthy aging, presumably due to experience-dependent neuroplastic mechanisms.

  10. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Recurrent coupling improves discrimination of temporal spike patterns

    Directory of Open Access Journals (Sweden)

    Chun-Wei Yuan

    2012-05-01

    Full Text Available Despite the ubiquitous presence of recurrent synaptic connections in sensory neuronal systems, their general functional purpose is not well understood. A recent conceptual advance has been achieved by theories of reservoir computing, in which recurrent networks have been proposed to generate short-term memory as well as to improve neuronal representation of the sensory input for subsequent computations. Here, we present a numerical study on the distinct effects of inhibitory and excitatory recurrence in a canonical linear classification task. It is found that both types of coupling improve the ability to discriminate temporal spike patterns as compared to a purely feed-forward system, although in different ways. For a large class of inhibitory networks, the network's performance is optimal as long as a fraction of roughly 50% of neurons per stimulus is active in the resulting population code. Thereby the contribution of inactive neurons to the neural code is found to be even more informative than that of the active neurons, generating an inherent robustness of classification performance against temporal jitter of the input spikes. Excitatory couplings are found to not only produce a short-term memory buffer but also to improve linear separability of the population patterns by evoking more irregular firing as compared to the purely inhibitory case. As the excitatory connectivity becomes more sparse, firing becomes more variable and pattern separability improves. We argue that the proposed paradigm is particularly well-suited as a conceptual framework for processing of sensory information in the auditory pathway.
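
    The study evaluates networks in a canonical linear classification task on temporal spike patterns. The following sketch shows only the linear readout step under simplified assumptions: binned population spike patterns for two stimulus classes are flattened and fed to a linear classifier. The toy rate templates and Poisson variability are hypothetical and stand in for the recurrent network simulations described above.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    def spike_pattern(template_rates, jitter=0.2):
        """Draw a binned spike-count pattern (neurons x time bins) around a
        class-specific rate template, with Poisson variability."""
        rates = np.clip(template_rates + jitter * rng.standard_normal(template_rates.shape), 0, None)
        return rng.poisson(rates)

    # Two stimulus classes, 40 neurons x 10 time bins each (toy templates).
    templates = [rng.uniform(0, 3, size=(40, 10)) for _ in range(2)]
    X = np.array([spike_pattern(templates[label]).ravel()
                  for label in (0, 1) for _ in range(100)])
    y = np.repeat([0, 1], 100)

    # Linear readout of the flattened spatio-temporal population pattern.
    clf = LinearSVC(dual=False)
    print(cross_val_score(clf, X, y, cv=5).mean())
    ```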

  12. Middle components of the auditory evoked response in bilateral temporal lobe lesions. Report on a patient with auditory agnosia

    DEFF Research Database (Denmark)

    Parving, A; Salomon, G; Elberling, Claus

    1980-01-01

    An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements...... that the middle components cannot be generated exclusively, if at all, in the primary auditory cortex, located in the temporal lobe. Furthermore, the responses are found to be of neurogenic origin according to the methodological procedure applied....

  13. Altered auditory and multisensory temporal processing in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Leslie D Kwakye

    2011-01-01

    Full Text Available Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli have also frequently been reported. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically-developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies.

  14. Temporal integration in duration and number discrimination.

    Science.gov (United States)

    Meck, W H; Church, R M; Gibbon, J

    1985-10-01

    Temporal integration in duration and number discrimination by rats was investigated with the use of a psychophysical choice procedure. A response on one lever ("short" response) following a 1-s white-noise signal was followed by food reinforcement, and a response on the other lever ("long" response) following a 2-s white-noise signal was also followed by food reinforcement. Either response following a signal of one of five intermediate durations was unreinforced. This led to a psychophysical function in which the probability of a long response was related to signal duration in an ogival manner. On 2 test days, a white-noise signal with 5, 6, 7, 8, or 10 segments of either 0.5-s on and 0.5-s off or 1-s on and 1-s off was presented, and a choice response following these signals was unreinforced. The probability of a long response was the same function of a segmented signal and a continuous signal if each segment was considered equivalent to 200 ms. A quantitative fit of a scalar estimation theory suggested that the latencies to initiate temporal integration and to terminate the process are both about 200 ms, and that the same internal accumulation process can be used for counting and timing.
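
    The abstract describes an ogival psychophysical function and a scalar estimation account in which each signal segment contributes roughly 200 ms to an internal accumulation. A minimal formal sketch of that reading is given below; the symbols (tau, gamma, d_1/2) are illustrative notation, not the authors' model specification.

    ```latex
    % Hedged sketch (illustrative notation): each segment of a segmented signal
    % contributes roughly tau ~ 200 ms to an internally accumulated duration.
    \[
      d_{\mathrm{eff}} = n\,\tau, \qquad \tau \approx 200~\mathrm{ms}.
    \]
    % Ogival choice rule with scalar (Weber-like) variability gamma:
    \[
      P(\text{``long''} \mid d_{\mathrm{eff}})
        = \Phi\!\left(\frac{\ln d_{\mathrm{eff}} - \ln d_{1/2}}{\gamma}\right),
    \]
    % where Phi is the standard normal CDF and d_{1/2} is the point of subjective
    % equality between the 1-s and 2-s anchor durations. A continuous signal of
    % duration d enters the same rule with d_eff = d.
    ```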

  15. Maturation of Visual and Auditory Temporal Processing in School-Aged Children

    Science.gov (United States)

    Dawes, Piers; Bishop, Dorothy V. M.

    2008-01-01

    Purpose: To examine development of sensitivity to auditory and visual temporal processes in children and the association with standardized measures of auditory processing and communication. Methods: Normative data on tests of visual and auditory processing were collected on 18 adults and 98 children aged 6-10 years of age. Auditory processes…

  16. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine Pecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.

  17. Neural correlates of auditory temporal predictions during sensorimotor synchronization.

    Science.gov (United States)

    Pecenka, Nadine; Engel, Annerose; Keller, Peter E

    2013-01-01

    Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
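
    In both versions of this study, prediction ability is indexed by cross-correlations between tap timing and pacing events. One common operationalization in the tapping literature is the ratio of the lag-0 to the lag-1 cross-correlation between inter-tap intervals and pacing inter-onset intervals (prediction versus tracking); treating that as the index here is an assumption, so the sketch below is an illustration rather than the authors' computation.

    ```python
    import numpy as np

    def prediction_index(inter_tap_intervals, inter_onset_intervals):
        """Illustrative prediction/tracking index: correlation of tap intervals
        with the *current* pacing interval (lag 0) divided by the correlation
        with the *previous* pacing interval (lag 1)."""
        iti = np.asarray(inter_tap_intervals, dtype=float)
        ioi = np.asarray(inter_onset_intervals, dtype=float)
        lag0 = np.corrcoef(iti[1:], ioi[1:])[0, 1]
        lag1 = np.corrcoef(iti[1:], ioi[:-1])[0, 1]
        return lag0 / lag1

    # Toy sequence with a gradual tempo change and taps that partly anticipate it.
    ioi = np.linspace(600, 450, 60) + np.random.randn(60) * 5        # pacing intervals (ms)
    iti = 0.7 * ioi + 0.3 * np.roll(ioi, 1) + np.random.randn(60) * 10
    print(prediction_index(iti, ioi))
    ```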

  18. Spectral-temporal EEG dynamics of speech discrimination processing in infants during sleep.

    Science.gov (United States)

    Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine

    2017-03-22

    Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, for example the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants who have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower-frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language

  19. Effect of passive smoking on auditory temporal resolution in children.

    Science.gov (United States)

    Durante, Alessandra Spada; Massa, Beatriz; Pucci, Beatriz; Gudayol, Nicolly; Gameiro, Marcella; Lopes, Cristiane

    2017-06-01

    To determine the effect of passive smoking on auditory temporal resolution in primary school children, based on the hypothesis that individuals who are exposed to smoking exhibit impaired performance. Auditory temporal resolution was evaluated using the Gaps In Noise (GIN) test. Exposure to passive smoking was assessed by measuring a nicotine metabolite (cotinine) excreted in the first urine of the day. The study included 90 children with a mean age of 10.2 ± 0.1 years from a public school in São Paulo. Participants were divided into two groups: a study group, comprising 45 children exposed to passive smoking (cotinine > 5 ng/mL); and a control group, comprising 45 children who were not exposed to passive smoking. All participants had normal audiometry and immittance test results. Statistically significant group differences were found: children exposed to passive smoking had poorer performance both in terms of thresholds and percentage of correct responses on the auditory temporal resolution assessment. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Encoding of Auditory Temporal Gestalt in the Human Brain.

    Science.gov (United States)

    Notter, Michael P; Hanke, Michael; Murray, Micah M; Geiser, Eveline

    2018-01-20

    The perception of an acoustic rhythm is invariant to the absolute temporal intervals constituting a sound sequence. It is unknown where in the brain temporal Gestalt, the percept emerging from the relative temporal proximity between acoustic events, is encoded. Two different relative temporal patterns, each induced by three experimental conditions with different absolute temporal patterns as sensory basis, were presented to participants. A linear support vector machine classifier was trained to differentiate activation patterns in functional magnetic resonance imaging data corresponding to the two different percepts. Across the sensory constituents the classifier decoded which percept was perceived. A searchlight analysis localized activation patterns specific to the temporal Gestalt bilaterally to the temporoparietal junction, including the planum temporale and supramarginal gyrus, and unilaterally to the right inferior frontal gyrus (pars opercularis). We show that auditory areas not only process absolute temporal intervals, but also integrate them into percepts of Gestalt, and that encoding of these percepts persists in high-level associative areas. The findings extend existing knowledge regarding the processing of absolute temporal patterns to the processing of relative temporal patterns relevant to the sequential binding of perceptual elements into Gestalt. © The Author 2018. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
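
    The decoding step above uses a linear support vector machine on fMRI activation patterns. The sketch below illustrates a cross-validated linear SVM classification of voxel patterns from a single, hypothetical searchlight sphere, leaving whole runs out; the data shapes, run structure and effect size are invented for illustration, and the full searchlight loop over spheres is omitted.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score, GroupKFold

    rng = np.random.default_rng(1)

    # Hypothetical data: 120 trials x 50 voxels (one searchlight sphere),
    # two rhythm percepts, collected in 6 runs.
    n_trials, n_voxels, n_runs = 120, 50, 6
    percept = np.tile([0, 1], n_trials // 2)
    runs = np.repeat(np.arange(n_runs), n_trials // n_runs)
    patterns = rng.standard_normal((n_trials, n_voxels))
    patterns[percept == 1, :5] += 0.4            # weak percept-specific signal

    # Cross-validated decoding accuracy, leaving whole runs out.
    clf = LinearSVC(dual=False)
    acc = cross_val_score(clf, patterns, percept, groups=runs, cv=GroupKFold(n_splits=n_runs))
    print(acc.mean())
    ```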

  1. Selective Increase of Auditory Cortico-Striatal Coherence during Auditory-Cued Go/NoGo Discrimination Learning

    Science.gov (United States)

    Schulz, Andreas L.; Woldeit, Marie L.; Gonçalves, Ana I.; Saldeitis, Katja; Ohl, Frank W.

    2016-01-01

    Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still weakly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed. PMID:26793085

  2. Selective increase of auditory cortico-striatal coherence during auditory-cued Go/NoGo discrimination learning.

    Directory of Open Access Journals (Sweden)

    Andreas L. Schulz

    2016-01-01

    Full Text Available Goal directed behavior and associated learning processes are tightly linked to neuronal activity in the ventral striatum. Mechanisms that integrate task relevant sensory information into striatal processing during decision making and learning are implicitly assumed in current reinforcement models, yet they are still weakly understood. To identify the functional activation of cortico-striatal subpopulations of connections during auditory discrimination learning, we trained Mongolian gerbils in a two-way active avoidance task in a shuttlebox to discriminate between falling and rising frequency modulated tones with identical spectral properties. We assessed functional coupling by analyzing the field-field coherence between the auditory cortex and the ventral striatum of animals performing the task. During the course of training, we observed a selective increase of functional coupling during Go-stimulus presentations. These results suggest that the auditory cortex functionally interacts with the ventral striatum during auditory learning and that the strengthening of these functional connections is selectively goal-directed.
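
    Functional coupling here is quantified as field-field coherence between auditory cortex and ventral striatum. As a generic illustration, and not the study's exact estimator or epoching, magnitude-squared coherence between two simulated LFP channels can be computed with Welch's method:

    ```python
    import numpy as np
    from scipy.signal import coherence

    rng = np.random.default_rng(2)

    # Toy LFPs: a shared 8 Hz component plus independent noise in each channel.
    fs = 500.0
    t = np.arange(0, 10.0, 1.0 / fs)
    shared = np.sin(2 * np.pi * 8 * t)
    lfp_auditory_cortex = shared + rng.standard_normal(t.size)
    lfp_ventral_striatum = 0.8 * shared + rng.standard_normal(t.size)

    # Magnitude-squared coherence, Welch-averaged over ~2 s windows.
    freqs, coh = coherence(lfp_auditory_cortex, lfp_ventral_striatum, fs=fs, nperseg=1024)
    print(freqs[np.argmax(coh)], coh.max())
    ```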

  3. Anatomical pathways for auditory memory II: information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Science.gov (United States)

    Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C

    2015-01-01

    Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those in vision. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  4. Stimulus-specific effects of noradrenaline in auditory cortex: implications for the discrimination of communication sounds.

    Science.gov (United States)

    Gaucher, Quentin; Edeline, Jean-Marc

    2015-02-15

    Many studies have described the action of Noradrenaline (NA) on the properties of cortical receptive fields, but none has assessed how NA affects the discrimination abilities of cortical cells between natural stimuli. In the present study, we compared the consequences of NA topical application on spectro-temporal receptive fields (STRFs) and responses to communication sounds in the primary auditory cortex. NA application reduced the STRFs (an effect replicated by the alpha1 agonist Phenylephrine) but did not change, on average, the responses to communication sounds. For cells exhibiting increased evoked responses during NA application, the discrimination abilities were enhanced as quantified by Mutual Information. The changes induced by NA on parameters extracted from the STRFs and from responses to communication sounds were not related. The alterations exerted by neuromodulators on neuronal selectivity have been the topic of a vast literature in the visual, somatosensory, auditory and olfactory cortices. However, very few studies have investigated to what extent the effects observed when testing these functional properties with artificial stimuli can be transferred to responses evoked by natural stimuli. Here, we tested the effect of noradrenaline (NA) application on the responses to pure tones and communication sounds in the guinea-pig primary auditory cortex. When pure tones were used to assess the spectro-temporal receptive field (STRF) of cortical cells, NA triggered a transient reduction of the STRFs in both the spectral and the temporal domain, an effect replicated by the α1 agonist phenylephrine whereas α2 and β agonists induced STRF expansion. When tested with communication sounds, NA application did not produce significant effects on the firing rate and spike timing reliability, despite the fact that α1, α2 and β agonists by themselves had significant effects on these measures. However, the cells whose evoked responses were increased by NA

  5. Profile of auditory temporal processing in older listeners.

    Science.gov (United States)

    Gordon-Salant, S; Fitzgibbons, P J

    1999-04-01

    This investigation examined age-related performance differences on a range of speech and nonspeech measures involving temporal manipulation of acoustic signals and variation of stimulus complexity. The goal was to identify a subset of temporally mediated measures that effectively distinguishes the performance patterns of younger and older listeners, with and without hearing loss. The nonspeech measures included duration discrimination for simple tones and gaps, duration discrimination for tones and gaps embedded within complex sequences, and discrimination of temporal order. The speech measures were undistorted speech, time-compressed speech, reverberant speech, and combined time-compressed + reverberant speech. All speech measures were presented both in quiet and in noise. Strong age effects were observed for the nonspeech measures, particularly in the more complex stimulus conditions. Additionally, age effects were observed for all time-compressed speech conditions and some reverberant speech conditions, in both quiet and noise. Effects of hearing loss, in contrast, were observed for the speech measures only. Discriminant function analysis derived a formula, based on a subset of these measures, for classifying individuals according to temporal performance consistent with age and hearing loss categories. The most important measures for accomplishing this goal involved conditions featuring temporal manipulations of complex speech and nonspeech signals.
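
    The classification formula mentioned above comes from a discriminant function analysis over a subset of temporal measures. The sketch below shows the general shape of such an analysis using linear discriminant analysis on hypothetical features and group labels; the feature set, group coding and effect sizes are placeholders, not the study's data.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)

    # Hypothetical feature matrix: rows are listeners, columns are temporal
    # measures (e.g. gap-duration discrimination, temporal-order thresholds,
    # time-compressed speech scores); groups code age / hearing-loss category.
    n_per_group = 30
    groups = np.repeat([0, 1, 2], n_per_group)       # e.g. younger, older-NH, older-HL
    features = rng.standard_normal((3 * n_per_group, 4)) + groups[:, None] * 0.8

    lda = LinearDiscriminantAnalysis()
    print(cross_val_score(lda, features, groups, cv=5).mean())
    lda.fit(features, groups)
    print(lda.coef_)   # the weights play the role of the derived classification formula
    ```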

  6. Temporal auditory processing and phonological awareness in reading and writing disorders: preliminary data.

    Science.gov (United States)

    Soares, Aparecido José Couto; Sanches, Seisse Gabriela Gandolfi; Alves, Débora Cristina; Carvallo, Renata Mota Mamede; Cárnio, Maria Silvia

    2013-01-01

    To verify whether there is an association between temporal auditory tests and phonological awareness in individuals with reading and writing disorders. The subjects were 16 children, aged between 7 and 12 years, who had reading and writing disorders confirmed after specific assessment. All participants underwent phonological awareness assessment using the CONFIAS test. In order to assess auditory temporal processing, duration and frequency pattern tests were used. The descriptive analysis indicated low performance in syllabic and phonemic activities of phonological awareness as well as in the temporal auditory tests. Fisher's test indicated an association between disorders in auditory temporal processing and phonological awareness (p>0.001), suggesting that disorders in temporal processing contribute to low performance in phonological awareness tasks. There was an association between performance in temporal auditory tests and in phonological awareness. These data support including auditory temporal assessment among the procedures used in the evaluation of individuals with reading and writing disorders.

  7. Brain-generated estradiol drives long-term optimization of auditory coding to enhance the discrimination of communication signals.

    Science.gov (United States)

    Tremere, Liisa A; Pinaud, Raphael

    2011-03-02

    Auditory processing and hearing-related pathologies are heavily influenced by steroid hormones in a variety of vertebrate species, including humans. The hormone estradiol has been recently shown to directly modulate the gain of central auditory neurons, in real time, by controlling the strength of inhibitory transmission via a nongenomic mechanism. The functional relevance of this modulation, however, remains unknown. Here we show that estradiol generated in the songbird homolog of the mammalian auditory association cortex, rapidly enhances the effectiveness of the neural coding of complex, learned acoustic signals in awake zebra finches. Specifically, estradiol increases mutual information rates, coding efficiency, and the neural discrimination of songs. These effects are mediated by estradiol's modulation of both the rate and temporal coding of auditory signals. Interference with the local action or production of estradiol in the auditory forebrain of freely behaving animals disrupts behavioral responses to songs, but not to other behaviorally relevant communication signals. Our findings directly show that estradiol is a key regulator of auditory function in the adult vertebrate brain.
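
    The abstract reports that estradiol increases mutual information rates and neural discrimination of songs. As a generic, hedged illustration of the underlying quantity, and not the authors' estimator or bias-correction procedure, the sketch below computes a plug-in estimate of the mutual information between stimulus identity and a binned spike-count response from a toy joint count table.

    ```python
    import numpy as np

    def mutual_information(joint_counts):
        """Plug-in estimate of I(stimulus; response) in bits from a joint
        count table (rows: stimuli, columns: binned response values)."""
        p = joint_counts / joint_counts.sum()
        ps = p.sum(axis=1, keepdims=True)   # stimulus marginal
        pr = p.sum(axis=0, keepdims=True)   # response marginal
        nz = p > 0
        return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pr)[nz])))

    # Toy joint table: 3 songs x 4 spike-count bins.
    counts = np.array([[30,  8,  2,  0],
                       [ 6, 25,  7,  2],
                       [ 1,  6, 12, 21]])
    print(mutual_information(counts))
    ```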

  8. Anatomical pathways for auditory memory II: Information from rostral superior temporal gyrus to dorsolateral temporal pole and medial temporal cortex.

    Directory of Open Access Journals (Sweden)

    Monica Munoz-Lopez

    2015-05-01

    Full Text Available Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 minutes. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 seconds. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. It is possible, therefore, that the anatomical pathways differ. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex, and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.

  9. Auditory event-related potentials in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Tomé, David; Sampaio, Mafalda; Mendes-Ribeiro, José; Barbosa, Fernando; Marques-Teixeira, João

    2014-12-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of idiopathic epilepsy, with onset from age 3 to 14 years. Although the prognosis for children with BECTS is excellent, some studies have revealed neuropsychological deficits in many domains, including language. Auditory event-related potentials (AERPs) reflect activation of different neuronal populations and are suggested to contribute to the evaluation of auditory discrimination (N1), attention allocation and phonological categorization (N2), and echoic memory (mismatch negativity--MMN). The scarce existing literature on this topic motivated the present study, which aims to investigate and document AERP changes in a group of children with BECTS. AERPs were recorded, during the day, to pure and vocal tones in a conventional auditory oddball paradigm in five children with BECTS (aged 8-12; mean=10 years; male=5) and in six gender- and age-matched controls. Results revealed higher AERP amplitudes in the group of children with BECTS, with a slight latency delay that was more pronounced at fronto-central electrodes. Children with BECTS may have abnormal central auditory processing, reflected by electrophysiological measures such as AERPs. Moreover, AERPs seem to be a good tool for detecting and reliably revealing cortical excitability in children with typical BECTS. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    in a temporally coherent manner. Based on this framework, the model was able to quantitatively predict perceptual experiments on stream segregation based on frequency separation and tone repetition rate, and onset and offset synchrony. Through the model framework, the influence of various processing stages......The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation...... on the stream segregation process was analysed. The model analysis showed that auditory frequency selectivity and physiological forward masking play a significant role in stream segregation based on frequency separation and tone rate. Secondly, the model analysis suggested that neural adaptation...

  11. Auditory temporal integration and the power function model.

    Science.gov (United States)

    Gerken, G M; Bhat, V K; Hutchison-Clutter, M

    1990-08-01

    The auditory temporal integration function was studied with the objective of improving both its quantitative description and the specification of its principal independent variable, stimulus duration. In Sec. I, temporal integration data from 20 studies were subjected to uniform analyses using standardized definitions of duration and two models of temporal integration. Analyses revealed that these data were best described by a power function model used in conjunction with a definition of duration, termed assigned duration, that de-emphasized the rise/fall portions of the stimuli. There was a strong effect of stimulus frequency and, in general, the slope of the temporal integration function was less than 10 dB per decade of duration; i.e., a power function exponent less than 1.0. In Sec. II, an experimental study was performed to further evaluate the models and definitions. Detection thresholds were measured in 11 normal-hearing human subjects using a total of 24 single-burst and multiple-burst acoustic stimuli of 3.125 kHz. The issues addressed are: the quantitative description of the temporal integration function; the definition of stimulus duration; the similarity of the integration processes for single-burst and multiple-burst stimuli; and the contribution of rise/fall time to the integration process. A power function in conjunction with the assigned duration definition was again most effective in describing the data. Single- and multiple-burst stimuli both seemed to be integrated by the same central mechanism, with data for each type of stimulus being described by a power function exponent of approximately 0.6 at 3.125 kHz. It was concluded that the contribution of the rise/fall portions of the stimuli can be factored out from the rest of the temporal integration process. In Sec. III, the conclusions that emerged from the review of published work and the present experimental work suggested that auditory temporal integration is best described by a power function
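
    A compact restatement of the power-function model favoured above, in notation of my own choosing: threshold intensity falls as a power of assigned duration, so the slope in dB per decade equals ten times the exponent.

    ```latex
    % Power-function model of auditory temporal integration (notation mine):
    % detection threshold intensity falls as a power k of assigned duration d.
    \[
      I_{\mathrm{thr}}(d) = C\, d^{-k}, \qquad 0 < k < 1 .
    \]
    % Expressed in decibels, the function is linear in log duration:
    \[
      10 \log_{10} I_{\mathrm{thr}}(d) = 10 \log_{10} C - 10\,k \log_{10} d ,
    \]
    % so threshold decreases by 10k dB per decade of duration; the reported
    % exponent of about 0.6 at 3.125 kHz corresponds to roughly 6 dB per decade.
    ```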

  12. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks.

    Science.gov (United States)

    Harinen, Kirsi; Rinne, Teemu

    2013-08-15

    We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Acoustic cue selection and discrimination under degradation: differential contributions of the inferior parietal and posterior temporal cortices.

    Science.gov (United States)

    Scharinger, Mathias; Henry, Molly J; Obleser, Jonas

    2015-02-01

    Auditory categorization is a vital skill for perceiving the acoustic environment. Categorization depends on the discriminability of the sensory input as well as on the ability of the listener to adaptively make use of the relevant features of the sound. Previous studies on categorization have focused either on speech sounds when studying discriminability or on visual stimuli when assessing optimal cue utilization. Here, by contrast, we examined neural sensitivity to stimulus discriminability and optimal cue utilization when categorizing novel, non-speech auditory stimuli not affected by long-term familiarity. In a functional magnetic resonance imaging (fMRI) experiment, listeners categorized sounds from two category distributions, differing along two acoustic dimensions: spectral shape and duration. By introducing spectral degradation after the first half of the experiment, we manipulated both stimulus discriminability and the relative informativeness of acoustic cues. Degradation caused an overall decrease in discriminability based on spectral shape, and therefore enhanced the informativeness of duration. A relative increase in duration-cue utilization was accompanied by increased activity in left parietal cortex. Further, discriminability modulated right planum temporale activity to a higher degree when stimuli were spectrally degraded than when they were not. These findings provide support for separable contributions of parietal and posterior temporal areas to perceptual categorization. The parietal cortex seems to support the selective utilization of informative stimulus cues, while the posterior superior temporal cortex as a primarily auditory brain area supports discriminability particularly under acoustic degradation. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Atypical central auditory speech-sound discrimination in children who stutter as indexed by the mismatch negativity

    NARCIS (Netherlands)

    Jansson-Verkasalo, E.; Eggers, K.; Järvenpää, A.; Suominen, K.; Van Den Bergh, B.R.H.; de Nil, L.; Kujala, T.

    2014-01-01

    Purpose Recent theoretical conceptualizations suggest that disfluencies in stuttering may arise from several factors, one of them being atypical auditory processing. The main purpose of the present study was to investigate whether speech sound encoding and central auditory discrimination, are

  15. Tactile and proprioceptive temporal discrimination are impaired in functional tremor.

    Directory of Open Access Journals (Sweden)

    Michele Tinazzi

    Full Text Available In order to obtain further information on the pathophysiology of functional tremor, we assessed tactile discrimination threshold and proprioceptive temporal discrimination motor threshold values in 11 patients with functional tremor, 11 age- and sex-matched patients with essential tremor and 13 healthy controls. Tactile discrimination threshold in both the right and left side was significantly higher in patients with functional tremor than in the other groups. Proprioceptive temporal discrimination threshold for both right and left side was significantly higher in patients with functional and essential tremor than in healthy controls. No significant correlation between discrimination thresholds and duration or severity of tremor was found. Temporal processing of tactile and proprioceptive stimuli is impaired in patients with functional tremor. The mechanisms underlying this impaired somatosensory processing and possible ways to apply these findings clinically merit further research.

  16. Visual Speech Fills in Both Discrimination and Identification of Non-Intact Auditory Speech in Children

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Herve

    2018-01-01

    To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. baez) coupled to non-intact (excised onsets) auditory speech (signified…

  17. Dopamine modulates memory consolidation of discrimination learning in the auditory cortex.

    Science.gov (United States)

    Schicknick, Horst; Reichenbach, Nicole; Smalla, Karl-Heinz; Scheich, Henning; Gundelfinger, Eckart D; Tischmeyer, Wolfgang

    2012-03-01

    In Mongolian gerbils, the auditory cortex is critical for discriminating rising vs. falling frequency-modulated tones. Based on our previous studies, we hypothesized that dopaminergic inputs to the auditory cortex during and shortly after acquisition of the discrimination strategy control long-term memory formation. To test this hypothesis, we studied frequency-modulated tone discrimination learning of gerbils in a shuttle box GO/NO-GO procedure following differential treatments. (i) Pre-exposure of gerbils to the frequency-modulated tones at 1 day before the first discrimination training session severely impaired the accuracy of the discrimination acquired in that session during the initial trials of a second training session, performed 1 day later. (ii) Local injection of the D1/D5 dopamine receptor antagonist SCH-23390 into the auditory cortex after task acquisition caused a discrimination deficit of similar extent and time course as with pre-exposure. This effect was dependent on the dose and time point of injection. (iii) Injection of the D1/D5 dopamine receptor agonist SKF-38393 into the auditory cortex after retraining caused a further discrimination improvement at the beginning of subsequent sessions. All three treatments, which supposedly interfered with dopamine signalling during conditioning and/or retraining, had a substantial impact on the dynamics of the discrimination performance particularly at the beginning of subsequent training sessions. These findings suggest that auditory-cortical dopamine activity after acquisition of a discrimination of complex sounds and after retrieval of weak frequency-modulated tone discrimination memory further improves memory consolidation, i.e. the correct association of two sounds with their respective GO/NO-GO meaning, in support of future memory recall. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  18. Specialized prefrontal auditory fields: organization of primate prefrontal-temporal pathways

    Directory of Open Access Journals (Sweden)

    Maria Medalla

    2014-04-01

    Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.

  19. Temporal discrimination, a cervical dystonia endophenotype: penetrance and functional correlates.

    Science.gov (United States)

    Kimmich, Okka; Molloy, Anna; Whelan, Robert; Williams, Laura; Bradley, David; Balsters, Joshua; Molloy, Fiona; Lynch, Tim; Healy, Daniel G; Walsh, Cathal; O'Riordan, Seán; Reilly, Richard B; Hutchinson, Michael

    2014-05-01

    The pathogenesis of adult-onset primary dystonia remains poorly understood. There is variable age-related and gender-related expression of the phenotype, the commonest of which is cervical dystonia. Endophenotypes may provide insight into underlying genetic and pathophysiological mechanisms of dystonia. The temporal discrimination threshold (TDT), the shortest time interval at which two separate stimuli can be detected as being asynchronous, is abnormal both in patients with cervical dystonia and in their unaffected first-degree relatives. Functional magnetic resonance imaging (fMRI) studies have shown that putaminal activation positively correlates with the ease of temporal discrimination between two stimuli in healthy individuals. We hypothesized that abnormal temporal discrimination would exhibit age-related and gender-related penetrance similar to that of cervical dystonia and that unaffected relatives with an abnormal TDT would have reduced putaminal activation during a temporal discrimination task. TDTs were examined in a group of 192 healthy controls and in 158 unaffected first-degree relatives of 84 patients with cervical dystonia. In 24 unaffected first-degree relatives, fMRI scanning was performed during a temporal discrimination task. The prevalence of abnormal TDTs in unaffected female relatives reached 50% after age 48 years, whereas in male relatives penetrance of the endophenotype was reduced. On fMRI, relatives who had abnormal TDTs, compared with relatives who had normal TDTs, had significantly less activation in the putamina and in the middle frontal and precentral gyri. Only the degree of reduction of putaminal activity correlated significantly with worsening of temporal discrimination. These findings further support abnormal temporal discrimination as an endophenotype of cervical dystonia involving disordered basal ganglia circuits. © 2014 International Parkinson and Movement Disorder Society.

  20. Temporal encoding in auditory evoked neuromagnetic fields: stochastic resonance.

    Science.gov (United States)

    Stufflebeam, S M; Poeppel, D; Roberts, T P

    2000-12-18

    Recent investigations have demonstrated that temporal patterns of sensory neural activity detected by magnetoencephalography (MEG) reflect features of the stimulus. In this study, neuromagnetic activity was investigated using an event detection algorithm based on the correlation coefficient. The results of the technique are compared with widely used methods of analysis in two experimental conditions and are shown to identify features in the single-trial MEG response that are not apparent in the response obtained by averaging across repeated trials. As an example of the technique, the physiologic jitter in latency associated with the M100 of auditory evoked fields was reproducibly measured. Specifically, higher intensity sounds were associated with increased reliability. The technique was also applied to the noise-enhanced evoked auditory response, producing an objective demonstration of a cortical manifestation of the phenomenon of stochastic resonance: the paradoxical enhancement of the measured signal-to-noise ratio (SNR) induced by optimal addition of noise to the system input.
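
    The correlation-based event detection described above can be illustrated by sliding an averaged-response template across each single trial and taking the lag with the highest Pearson correlation as that trial's latency. The following Python sketch works on simulated data and is not the authors' implementation; the epoch length, search window, and noise level are all assumptions.

```python
# Sketch: estimating single-trial M100-like latency jitter by sliding a
# template (the across-trial average) over each trial and taking the lag
# with the maximum Pearson correlation. Simulated data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                      # sampling rate (Hz)
t = np.arange(0, 0.4, 1 / fs)  # 400 ms epoch

def evoked(latency_s):
    """Toy M100-like deflection centered at `latency_s`."""
    return -np.exp(-((t - latency_s) ** 2) / (2 * 0.015 ** 2))

# 50 simulated trials with ~10 ms latency jitter plus noise
true_lat = 0.100 + rng.normal(0, 0.010, size=50)
trials = np.array([evoked(lat) + rng.normal(0, 0.5, size=t.size) for lat in true_lat])

template = trials.mean(axis=0)            # grand average as template
win = slice(50, 200)                      # 50-200 ms window used for the template
tmpl = template[win]

est_lat = []
for trial in trials:
    best_lag, best_r = 0, -np.inf
    for lag in range(-30, 31):            # test shifts of +/-30 ms
        seg = trial[win.start + lag: win.stop + lag]
        r = np.corrcoef(seg, tmpl)[0, 1]
        if r > best_r:
            best_lag, best_r = lag, r
    est_lat.append(0.100 + best_lag / fs)  # template deflection peaks near 100 ms

print("estimated latency SD (ms):", 1000 * np.std(est_lat))
```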

  1. Dissociation of Detection and Discrimination of Pure Tones following Bilateral Lesions of Auditory Cortex

    Science.gov (United States)

    Dykstra, Andrew R.; Koh, Christine K.; Braida, Louis D.; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed. PMID:22957087

  2. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Dykstra, Andrew R; Koh, Christine K; Braida, Louis D; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

  3. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    Full Text Available It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

  4. Response properties of neurons in the cat's putamen during auditory discrimination.

    Science.gov (United States)

    Zhao, Zhenling; Sato, Yu; Qin, Ling

    2015-10-01

    The striatum integrates diverse convergent input and plays a critical role in goal-directed behaviors. To date, the auditory functions of the striatum have been less studied. Recently, it was demonstrated that auditory cortico-striatal projections influence behavioral performance during a frequency discrimination task. To reveal the functions of striatal neurons in auditory discrimination, we recorded single-unit spike activity in the putamen (dorsal striatum) of free-moving cats while they performed a Go/No-go task to discriminate sounds with different modulation rates (12.5 Hz vs. 50 Hz) or envelopes (damped vs. ramped). We found that putamen neurons can be broadly divided into four groups according to their contributions to sound discrimination. First, 40% of neurons showed vigorous responses synchronized to the sound envelope and could precisely discriminate different sounds. Second, 18% of neurons showed a strong preference for ramped over damped sounds, but no preference for modulation rate; they could only discriminate changes in the sound envelope. Third, 27% of neurons rapidly adapted to the sound stimuli and had no ability to discriminate sounds. Fourth, 15% of neurons discriminated the sounds in a manner dependent on reward prediction. Compared with the passive listening condition, the activity of putamen neurons was significantly enhanced by engagement in the auditory tasks, but was not modulated by the cat's behavioral choice. The coexistence of multiple types of neurons suggests that the putamen is involved in the transformation from auditory representation to stimulus-reward association. Copyright © 2015 Elsevier B.V. All rights reserved.
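
    Synchronization of spikes to the sound envelope, which defines the first group of putamen neurons above, is often quantified with vector strength relative to the modulation period. The sketch below illustrates the measure on simulated spike trains; the 0.5 classification cut-off is arbitrary and not taken from the study.

```python
# Sketch: quantifying how well a neuron's spikes synchronize to an
# amplitude-modulation envelope using vector strength.
# Spike trains are simulated; the classification threshold is arbitrary.
import numpy as np

def vector_strength(spike_times_s, mod_rate_hz):
    """Vector strength of spike times relative to one modulation cycle.
    1 = perfect phase locking, 0 = no locking."""
    phases = 2 * np.pi * mod_rate_hz * np.asarray(spike_times_s)
    return np.hypot(np.cos(phases).mean(), np.sin(phases).mean())

rng = np.random.default_rng(1)
dur, rate = 2.0, 12.5                       # 2 s of a 12.5 Hz AM sound

# A "synchronized" neuron: one spike per cycle, jittered by a few ms
cycles = np.arange(0, dur * rate)
locked = cycles / rate + rng.normal(0, 0.004, size=cycles.size)

# A "non-synchronized" neuron: spikes spread uniformly over the stimulus
unlocked = np.sort(rng.uniform(0, dur, size=60))

for name, spikes in [("locked", locked), ("unlocked", unlocked)]:
    vs = vector_strength(spikes, rate)
    label = "envelope-synchronized" if vs > 0.5 else "not synchronized"
    print(f"{name}: vector strength = {vs:.2f} -> {label}")
```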

  5. Single-trial multisensory memories affect later auditory and visual object discrimination.

    Science.gov (United States)

    Thelen, Antonia; Talsma, Durk; Murray, Micah M

    2015-05-01

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory modality. The possibility of this generalization and the equivalence of effects when memory discrimination was being performed in the visual vs. auditory modality were the focus of this study. First, we demonstrate that visual object discrimination is affected by the context of prior multisensory encounters, replicating and extending previous findings by controlling for the probability of multisensory contexts during initial as well as repeated object presentations. Second, we provide the first evidence that single-trial multisensory memories impact subsequent auditory object discrimination. Auditory object discrimination was enhanced when initial presentations entailed semantically congruent multisensory pairs and was impaired after semantically incongruent multisensory encounters, compared to sounds that had been encountered only in a unisensory manner. Third, the impact of single-trial multisensory memories upon unisensory object discrimination was greater when the task was performed in the auditory vs. visual modality. Fourth, there was no evidence for a correlation between effects of past multisensory experiences on visual and auditory processing, suggestive of largely independent object processing mechanisms between modalities. We discuss these findings in terms of the conceptual short-term memory (CSTM) model and predictive coding. Our results suggest differential recruitment and modulation of conceptual memory networks according to the sensory task at hand.

  6. Relations between perceptual measures of temporal processing, auditory-evoked brainstem responses and speech intelligibility in noise

    DEFF Research Database (Denmark)

    Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten

    2011-01-01

    This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to 1 kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normal-hearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection, and amplitude modulation (AM) detection. In addition, auditory brainstem responses (ABRs) to clicks and broadband rising chirps were recorded. Furthermore, speech reception thresholds (SRTs) were determined for Danish sentences in speech-shaped noise. The main findings were: (1) SRTs were neither correlated with hearing sensitivity...

  7. Intracranial auditory detection and discrimination potentials as substrates of echoic memory in children.

    Science.gov (United States)

    Liasis, A; Towell, A; Boyd, S

    1999-03-01

    In children, intracranial responses to auditory detection and discrimination processes have not been reported. We, therefore, recorded intracranial event-related potentials (ERPs) to both standard and deviant tones and/or syllables in 4 children undergoing pre-surgical evaluation for epilepsy. ERPs to detection (mean latency = 63 ms) and discrimination (mean latency = 334 ms) were highly localized to areas surrounding the Sylvian fissure (SF). These potentials reflect activation of different neuronal populations and are suggested to contribute to the scalp recorded auditory N1 and mismatch negativity (MMN).

  8. Present and past: Can writing abilities in school children be associated with their auditory discrimination capacities in infancy?

    Science.gov (United States)

    Schaadt, Gesa; Männel, Claudia; van der Meer, Elke; Pannekamp, Ann; Oberecker, Regine; Friederici, Angela D

    2015-12-01

    Literacy acquisition is highly associated with auditory processing abilities, such as auditory discrimination. The event-related potential Mismatch Response (MMR) is an indicator of cortical auditory discrimination abilities and has been found to be reduced in individuals with reading and writing impairments and also in infants at risk for these impairments. The goal of the present study was to analyze the relationship between auditory speech discrimination in infancy and writing abilities at school age within subjects, and to determine when auditory speech discrimination differences, relevant for later writing abilities, start to develop. We analyzed the MMR registered in response to natural syllables in German children with and without writing problems at two points during development: at school age and in infancy (at ages 1 month and 5 months). We observed MMR-related auditory discrimination differences between infants with and without later writing problems, starting to develop at age 5 months, an age when infants begin to establish language-specific phoneme representations. At school age, these children with and without writing problems also showed auditory discrimination differences, reflected in the MMR, confirming a relationship between writing and auditory speech processing skills. Thus, writing problems at school age are at least partly grounded in auditory discrimination problems developing already during the first months of life. Copyright © 2015 Elsevier Ltd. All rights reserved.
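
    The MMR referred to above is typically obtained as the difference between the averaged responses to deviant and standard syllables. The following sketch shows that difference-wave computation on simulated EEG epochs; the sampling rate, epoch window, and amplitudes are illustrative assumptions, not infant data.

```python
# Sketch: a mismatch response (MMR) as the deviant-minus-standard difference
# wave, averaged over epochs. EEG epochs here are simulated (not infant data).
import numpy as np

rng = np.random.default_rng(2)
fs = 250
t = np.arange(-0.1, 0.6, 1 / fs)              # -100 to 600 ms epochs

def epoch(amplitude_uv):
    """Toy auditory response peaking near 300 ms plus noise."""
    wave = amplitude_uv * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return wave + rng.normal(0, 2.0, size=t.size)

standards = np.array([epoch(3.0) for _ in range(400)])
deviants = np.array([epoch(5.0) for _ in range(80)])   # larger response

mmr = deviants.mean(axis=0) - standards.mean(axis=0)   # difference wave
window = (t >= 0.2) & (t <= 0.4)                       # 200-400 ms
print("mean MMR amplitude, 200-400 ms (µV):", round(mmr[window].mean(), 2))
```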

  9. Auditory neurophysiologic responses and discrimination deficits in children with learning problems.

    Science.gov (United States)

    Kraus, N; McGee, T J; Carrell, T D; Zecker, S G; Nicol, T G; Koch, D B

    1996-08-16

    Children with learning problems often cannot discriminate rapid acoustic changes that occur in speech. In this study of normal children and children with learning problems, impaired behavioral discrimination of a rapid speech change (/dɑ/ versus /gɑ/) was correlated with diminished magnitude of an electrophysiologic measure that is not dependent on attention or a voluntary response. The ability of children with learning problems to discriminate another rapid speech change (/bɑ/ versus /wɑ/) also was reflected in the neurophysiology. These results indicate that some children's discrimination deficits originate in the auditory pathway before conscious perception and have implications for differential diagnosis and targeted therapeutic strategies for children with learning disabilities and attention disorders.

  10. Assessment of spectral and temporal resolution in cochlear implant users using psychoacoustic discrimination and speech cue categorization

    Science.gov (United States)

    Winn, Matthew B.; Won, Jong Ho; Moon, Il Joon

    2016-01-01

    Objectives This study was conducted to measure auditory perception by cochlear implant users in the spectral and temporal domains, using tests of either categorization (using speech-based cues) or discrimination (using conventional psychoacoustic tests). We hypothesized that traditional nonlinguistic tests assessing spectral and temporal auditory resolution would correspond to speech-based measures assessing specific aspects of phonetic categorization assumed to depend on spectral and temporal auditory resolution. We further hypothesized that speech-based categorization performance would ultimately be a superior predictor of speech recognition performance, because of the fundamental nature of speech recognition as categorization. Design Nineteen CI listeners and 10 listeners with normal hearing (NH) participated in a suite of tasks that included spectral ripple discrimination (SRD), temporal modulation detection (TMD), and syllable categorization, which was split into a spectral-cue-based task (targeting the /ba/-/da/ contrast) and a timing-cue-based task (targeting the /b/-/p/ and /d/-/t/ contrasts). Speech sounds were manipulated in order to contain specific spectral or temporal modulations (formant transitions or voice onset time, respectively) that could be categorized. Categorization responses were quantified using logistic regression in order to assess perceptual sensitivity to acoustic phonetic cues. Word recognition testing was also conducted for CI listeners. Results CI users were generally less successful at utilizing both spectral and temporal cues for categorization compared to listeners with normal hearing. For the CI listener group, SRD was significantly correlated with the categorization of formant transitions; both were correlated with better word recognition. TMD using 100 Hz and 10 Hz modulated noise was not correlated with the CI subjects’ categorization of VOT, nor with word recognition. Word recognition was correlated more closely with
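
    Quantifying categorization responses with logistic regression, as described in the Design above, amounts to fitting a sigmoid to binary responses along an acoustic continuum and reading off its slope as an index of perceptual sensitivity to the cue. The sketch below fits such a function by maximum likelihood to simulated responses along a hypothetical VOT continuum; it is not the study's analysis code.

```python
# Sketch: quantifying sensitivity to a phonetic cue (e.g., voice onset time)
# by fitting a logistic regression to binary categorization responses and
# reading off the slope. Responses are simulated, not study data.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
vot_ms = np.repeat(np.linspace(0, 60, 7), 20)      # 7-step VOT continuum, 20 trials each
true_p = 1 / (1 + np.exp(-(vot_ms - 30) * 0.25))   # simulated listener with a clear boundary
resp = rng.binomial(1, true_p)                     # 1 = voiceless ("pa"), 0 = voiced ("ba")

def neg_log_lik(params):
    b0, b1 = params
    p = 1 / (1 + np.exp(-(b0 + b1 * vot_ms)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(resp * np.log(p) + (1 - resp) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.1])
b0, b1 = fit.x
print(f"slope (sensitivity to the VOT cue): {b1:.3f} per ms")
print(f"category boundary: {-b0 / b1:.1f} ms VOT")
```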

  11. Influence of memory, attention, IQ and age on auditory temporal processing tests: preliminary study.

    Science.gov (United States)

    Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane

    2014-01-01

    To investigate correlations between children's performance on auditory temporal tests (Frequency Pattern and Gaps in Noise, GIN) and measures of IQ, attention, memory, and age. Fifteen typically developing children between 7 and 12 years of age with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), a memory test (Digit Span), attention tests (auditory and visual modality), and an intelligence test (Raven's Progressive Matrices) were applied. A significant positive correlation, considered good, was found between the Frequency Pattern test and age. Auditory temporal skills seem to be influenced by different factors: while performance in temporal ordering appears to be influenced by maturational processes, performance in temporal resolution was not influenced by any of the aspects investigated.

  12. Activations in temporal areas using visual and auditory naming stimuli: A language fMRI study in temporal lobe epilepsy.

    Science.gov (United States)

    Gonzálvez, Gloria G; Trimmel, Karin; Haag, Anja; van Graan, Louis A; Koepp, Matthias J; Thompson, Pamela J; Duncan, John S

    2016-12-01

    Verbal fluency functional MRI (fMRI) is used for predicting language deficits after anterior temporal lobe resection (ATLR) for temporal lobe epilepsy (TLE), but primarily engages frontal lobe areas. In this observational study we investigated fMRI paradigms using visual and auditory stimuli, which predominately involve language areas resected during ATLR. Twenty-three controls and 33 patients (20 left (LTLE), 13 right (RTLE)) were assessed using three fMRI paradigms: verbal fluency, auditory naming with a contrast of auditory reversed speech; picture naming with a contrast of scrambled pictures and blurred faces. Group analysis showed bilateral temporal activations for auditory naming and picture naming. Correcting for auditory and visual input (by subtracting activations resulting from auditory reversed speech and blurred pictures/scrambled faces respectively) resulted in left-lateralised activations for patients and controls, which was more pronounced for LTLE compared to RTLE patients. Individual subject activations at a threshold of T>2.5, extent >10 voxels, showed that verbal fluency activated predominantly the left inferior frontal gyrus (IFG) in 90% of LTLE, 92% of RTLE, and 65% of controls, compared to right IFG activations in only 15% of LTLE and RTLE and 26% of controls. Middle temporal (MTG) or superior temporal gyrus (STG) activations were seen on the left in 30% of LTLE, 23% of RTLE, and 52% of controls, and on the right in 15% of LTLE, 15% of RTLE, and 35% of controls. Auditory naming activated temporal areas more frequently than did verbal fluency (LTLE: 93%/73%; RTLE: 92%/58%; controls: 82%/70% (left/right)). Controlling for auditory input resulted in predominantly left-sided temporal activations. Picture naming resulted in temporal lobe activations less frequently than did auditory naming (LTLE 65%/55%; RTLE 53%/46%; controls 52%/35% (left/right)). Controlling for visual input had left-lateralising effects. Auditory and picture naming activated
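
    Individual-subject lateralization in paradigms of this kind is often summarized with a laterality index, LI = (L - R) / (L + R), computed from suprathreshold voxel counts or summed statistics in left and right regions of interest. The sketch below shows that arithmetic on hypothetical voxel counts; the study itself reports the proportions of patients showing activation, so this is an illustration of the general measure rather than the reported analysis.

```python
# Sketch: a laterality index (LI) from suprathreshold voxel counts in
# left/right regions of interest. Counts are hypothetical; LI > 0.2 is a
# commonly used (but arbitrary) cut-off for "left-lateralized".
def laterality_index(left_voxels: int, right_voxels: int) -> float:
    total = left_voxels + right_voxels
    return 0.0 if total == 0 else (left_voxels - right_voxels) / total

examples = {
    "verbal fluency, IFG":      (220, 35),
    "auditory naming, MTG/STG": (180, 120),
    "picture naming, MTG/STG":  (90, 80),
}
for task, (l, r) in examples.items():
    li = laterality_index(l, r)
    verdict = "left" if li > 0.2 else ("right" if li < -0.2 else "bilateral")
    print(f"{task}: LI = {li:+.2f} ({verdict})")
```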

  13. Somatosensory temporal discrimination is prolonged during migraine attacks.

    Science.gov (United States)

    Boran, H Evren; Cengiz, Bülent; Bolay, Hayrunnisa

    2016-01-01

    Symptoms and signs of sensory disturbances are characteristic features of a migraine headache. Somatosensory temporal discrimination measures the temporal threshold at which two separate somaesthetic stimuli are perceived as clearly distinct. This study aimed to evaluate somaesthetic perception in migraine patients by measuring somatosensory temporal discrimination thresholds. The study included 12 migraine patients without aura and 12 volunteers without headache. Somatosensory temporal discrimination threshold (STDT) values were measured in the face (V3) and hands (C7) during a lateralized headache attack and the headache-free interictal period. Disease duration, pain intensity, phonophobia, photophobia, nausea, vomiting, and brush allodynia were also recorded during the migraine attack. STDT values were within normal limits and did not differ between the control group and the interictal period in migraine patients. Compared with the headache-free period, STDT values during the attack were significantly prolonged in the contralateral hand (C7) (155.7 ± 84.2 vs 40.6 ± 16.1 ms), the contralateral face (V3) (65.5 ± 35.4 vs 37.6 ± 22.2 ms [P = .006]), and the ipsilateral face (V3) (104.1 ± 44.5 vs 37.5 ± 21.4 ms). STDT values of the contralateral hand and ipsilateral face were significantly increased compared with those of the ipsilateral hand and contralateral face (155.7 ± 84.2 ms vs 88.6 ± 51.3 ms [P = .001]; 104.1 ± 44.5 ms vs 65.5 ± 35.4 ms [P = .001]). No allodynia was detected in the areas that were tested for somatosensory temporal discrimination. Visual analog scale scores were correlated with the somatosensory temporal discrimination thresholds of the contralateral hand (r = 0.602, P = .038), whereas no correlation was detected between the somatosensory temporal discrimination thresholds and disease duration, brush allodynia in the forehead, phonophobia, photophobia, nausea, or vomiting. The study demonstrates for the first time that somatosensory temporal discrimination is prolonged during migraine attacks.

  14. Auditory temporal resolution and integration - stages of analyzing time-varying sounds

    DEFF Research Database (Denmark)

    Pedersen, Benjamin

    2007-01-01

    Specifically, the auditory tasks of the described experiments may be considered as falling into two categories: (1) temporal integration, where listeners have to judge the overall loudness of relatively long (compared to the temporal resolution of the auditory system) sounds fluctuating in level, and (2) temporal pattern recognition, where listeners have to identify properties of the actual patterns of level changes. Typically, temporal processing is modeled by some sort of temporal summation or integration device. The results of the present experiments are to a large extent incompatible with this modeling scheme: effects such as attention seem to play an important role in loudness integration, and further, it will be demonstrated that the auditory system can rely on temporal cues at a much finer level of detail than predicted by existing models (temporal details in the time range of 60 µs can...

  15. Speech sound discrimination training improves auditory cortex responses in a rat model of autism

    Directory of Open Access Journals (Sweden)

    Crystal T Engineer

    2014-08-01

    Full Text Available Children with autism often have language impairments and degraded cortical responses to speech. Extensive behavioral interventions can improve language outcomes and cortical responses. Prenatal exposure to the antiepileptic drug valproic acid (VPA) increases the risk for autism and language impairment. Prenatal exposure to VPA also causes weaker and delayed auditory cortex responses in rats. In this study, we document speech sound discrimination ability in VPA-exposed rats and the effect of extensive speech training on auditory cortex responses. VPA-exposed rats were significantly impaired at consonant, but not vowel, discrimination. Extensive speech training resulted in both stronger and faster anterior auditory field responses compared to untrained VPA-exposed rats, and restored responses to control levels. This neural response improvement generalized to non-trained sounds. The rodent VPA model of autism may be used to improve the understanding of speech processing in autism and contribute to improving language outcomes.

  16. Single-trial multisensory memories affect later auditory and visual object discrimination

    OpenAIRE

    Thelen Antonia; Talsma Durk; Murray Micah M.

    2015-01-01

    Multisensory memory traces established via single-trial exposures can impact subsequent visual object recognition. This impact appears to depend on the meaningfulness of the initial multisensory pairing, implying that multisensory exposures establish distinct object representations that are accessible during later unisensory processing. Multisensory contexts may be particularly effective in influencing auditory discrimination, given the purportedly inferior recognition memory in this sensory ...

  17. Auditory Pattern Recognition and Brief Tone Discrimination of Children with Reading Disorders

    Science.gov (United States)

    Walker, Marianna M.; Givens, Gregg D.; Cranford, Jerry L.; Holbert, Don; Walker, Letitia

    2006-01-01

    Auditory pattern recognition skills in children with reading disorders were investigated using perceptual tests involving discrimination of frequency and duration tonal patterns. A behavioral test battery involving recognition of the pattern of presentation of tone triads was used in which individual components differed in either frequency or…

  18. A Further Evaluation of Picture Prompts during Auditory-Visual Conditional Discrimination Training

    Science.gov (United States)

    Carp, Charlotte L.; Peterson, Sean P.; Arkel, Amber J.; Petursdottir, Anna I.; Ingvarsson, Einar T.

    2012-01-01

    This study was a systematic replication and extension of Fisher, Kodak, and Moore (2007), in which a picture prompt embedded into a least-to-most prompting sequence facilitated acquisition of auditory-visual conditional discriminations. Participants were 4 children who had been diagnosed with autism; 2 had limited prior receptive skills, and 2 had…

  19. Temporal Discrimination: Mechanisms and Relevance to Adult-Onset Dystonia

    Directory of Open Access Journals (Sweden)

    Antonella Conte

    2017-11-01

    Full Text Available Temporal discrimination is the ability to determine that two sequential sensory stimuli are separated in time. For any individual, the temporal discrimination threshold (TDT) is the minimum interval at which paired sequential stimuli are perceived as being asynchronous; this can be assessed, with high test–retest and inter-rater reliability, using a simple psychophysical test. Temporal discrimination is disordered in a number of basal ganglia diseases including adult-onset dystonia, of which the two most common phenotypes are cervical dystonia and blepharospasm. The causes of adult-onset focal dystonia are unknown; genetic, epigenetic, and environmental factors are relevant. Abnormal TDTs in adult-onset dystonia are associated with structural and neurophysiological changes considered to reflect defective inhibitory interneuronal processing within a network which includes the superior colliculus, basal ganglia, and primary somatosensory cortex. It is hypothesized that abnormal temporal discrimination is a mediational endophenotype and, when present in unaffected relatives of patients with adult-onset dystonia, indicates non-manifesting gene carriage. Using the mediational endophenotype concept, etiological factors in adult-onset dystonia may be examined, including (i) the role of environmental exposures in disease penetrance and expression; (ii) sexual dimorphism in sex ratios at age of onset; (iii) the pathogenesis of non-motor symptoms of adult-onset dystonia; and (iv) subcortical mechanisms in disease pathogenesis.
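
    One simple way to estimate a TDT from such a psychophysical test is to take the inter-stimulus interval at which the proportion of "asynchronous" judgments crosses 50%, interpolated from the response proportions. The sketch below shows this on made-up data; published TDT protocols differ in detail, so this is an illustration of the idea rather than any specific protocol.

```python
# Sketch: estimating a temporal discrimination threshold (TDT) as the
# inter-stimulus interval (ISI) at which "asynchronous" judgments reach 50%,
# by linear interpolation of response proportions. Data are illustrative.
import numpy as np

isi_ms = np.array([0, 15, 30, 45, 60, 75, 90])
p_async = np.array([0.00, 0.05, 0.20, 0.45, 0.80, 0.95, 1.00])

def tdt_50(isi, p):
    """First ISI at which the interpolated proportion crosses 0.5.
    Assumes the crossing does not occur at the very first point."""
    above = np.where(p >= 0.5)[0][0]
    x0, x1 = isi[above - 1], isi[above]
    y0, y1 = p[above - 1], p[above]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

print(f"estimated TDT: {tdt_50(isi_ms, p_async):.1f} ms")
```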

  20. Intact spectral but abnormal temporal processing of auditory stimuli in autism.

    NARCIS (Netherlands)

    Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.

    2009-01-01

    The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with

  1. Non-primary cortical sources of auditory temporal processing

    OpenAIRE

    Darestani Farahani, Ehsan; Wouters, Jan; van Wieringen, Astrid

    2017-01-01

    Auditory information is transmitted to the higher brain centers through the primary and the non-primary auditory pathways. The primary pathway goes from the brainstem, to the midbrain, and then to the thalamus before terminating at the primary auditory cortex. In a parallel pathway, the non-primary pathway initiates at the cochlear nuclei and connects to the reticular formation, a region of the brainstem with interconnected nuclei. These fibers project through reticular formation into the tha...

  2. Observations on auditory learning in amplitude- and frequency-modulation rate discrimination

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.

    2010-01-01

    Because amplitude- and frequency-modulated sounds can be the basis for the synthesis of many complex sounds, they can be good candidates in the design of training systems aiming at improving the acquisition of perceptual skills that can benefit from information provided via the auditory channel. One of the key issues when designing such training systems is the assessment of transfer of learning. In this study we present data on the learning of an auditory task involving sinusoidal amplitude- and frequency-modulated tones. Modulation rate discrimination thresholds were measured during pre-training, training, and post-training stages. During training, listeners were divided into two groups: one group trained on amplitude-modulation rate discrimination and the other group trained on frequency-modulation rate discrimination. Results will be discussed in terms of their implications for training applications by addressing the transfer of learning across carrier frequency, modulation rate, and modulation type.

  3. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2017-05-01

    A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
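
    Decoding the attended frequency from BOLD patterns, as described above, can be approximated with a simple template-matching classifier: templates are learned from the single-frequency (stimulus-driven) runs and each attention trial is assigned to the template it correlates with most strongly. The sketch below uses simulated voxel patterns; the authors' actual decoding algorithm may differ.

```python
# Sketch: decoding which frequency a listener attends to from voxel patterns,
# using templates learned from single-frequency (stimulus-driven) runs and a
# correlation-based nearest-template rule. Data are simulated.
import numpy as np

rng = np.random.default_rng(4)
n_vox, freqs = 200, [250, 1000, 4000]               # three candidate frequencies

# A fixed "true" pattern per frequency stands in for tonotopic selectivity
true_patterns = {f: rng.normal(0, 1, n_vox) for f in freqs}

def simulate_run(freq, noise=1.0):
    return true_patterns[freq] + rng.normal(0, noise, n_vox)

# Training: average several single-frequency runs into one template per frequency
templates = {f: np.mean([simulate_run(f) for _ in range(8)], axis=0) for f in freqs}

def decode(pattern):
    """Return the template frequency with the highest Pearson correlation."""
    return max(freqs, key=lambda f: np.corrcoef(pattern, templates[f])[0, 1])

# Test: attention trials (weaker, noisier versions of the same patterns)
n_trials, correct = 60, 0
for _ in range(n_trials):
    attended = rng.choice(freqs)
    trial = 0.5 * true_patterns[attended] + rng.normal(0, 1.2, n_vox)
    correct += decode(trial) == attended

print(f"decoding accuracy: {correct / n_trials:.2f} (chance = {1 / len(freqs):.2f})")
```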

  4. Individual differences in the discrimination of novel speech sounds: effects of sex, temporal processing, musical and cognitive abilities.

    Science.gov (United States)

    Kempe, Vera; Thoresen, John C; Kirk, Neil W; Schaeffler, Felix; Brooks, Patricia J

    2012-01-01

    This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.

  5. The role of primary auditory and visual cortices in temporal processing: A tDCS approach.

    Science.gov (United States)

    Mioni, G; Grondin, S; Forgione, M; Fracasso, V; Mapelli, D; Stablum, F

    2016-10-15

    Many studies have shown that visual stimuli are frequently experienced as shorter than equivalent auditory stimuli. These findings suggest that timing is distributed across many brain areas and that "different clocks" might be involved in temporal processing. The aim of this study is to investigate, with the application of tDCS over V1 and A1, the specific role of primary sensory cortices (either visual or auditory) in temporal processing. Forty-eight university students were included in the study. Twenty-four participants were stimulated over A1 and 24 participants were stimulated over V1. Participants performed time bisection tasks, in the visual and the auditory modalities, involving standard durations lasting 300 ms (short) and 900 ms (long). When tDCS was delivered over A1, no effect of stimulation was observed on perceived duration, but we observed higher temporal variability under anodic stimulation compared to sham and higher variability in the visual compared to the auditory modality. When tDCS was delivered over V1, an under-estimation of perceived duration and higher variability were observed in the visual compared to the auditory modality. Our results showed more variability of visual temporal processing under tDCS stimulation. These results suggest a modality-independent role of A1 in temporal processing and a modality-specific role of V1 in the processing of temporal intervals in the visual modality. Copyright © 2016 Elsevier B.V. All rights reserved.
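
    Performance in a time bisection task of this kind is usually summarized by the bisection point (the duration judged "long" on half the trials) and a measure of temporal variability derived from the fitted psychometric function. The sketch below fits a cumulative Gaussian to illustrative response proportions for comparison durations between the 300 ms and 900 ms standards; it is not the analysis used in the study.

```python
# Sketch: summarizing a time bisection task by fitting a cumulative Gaussian
# to the proportion of "long" responses. The bisection point (PSE) indexes
# perceived duration; the spread indexes temporal variability. Data are
# illustrative.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

durations_ms = np.array([300, 400, 500, 600, 700, 800, 900])
p_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.99])

def psychometric(d, pse, sigma):
    return norm.cdf(d, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, durations_ms, p_long, p0=[600, 100])
jnd = sigma * norm.ppf(0.75)   # half the distance between the 25% and 75% points
print(f"bisection point (PSE): {pse:.0f} ms")
print(f"JND: {jnd:.0f} ms, Weber ratio: {jnd / pse:.2f}")
```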

  6. Temporal Information Processing as a Basis for Auditory Comprehension: Clinical Evidence from Aphasic Patients

    Science.gov (United States)

    Oron, Anna; Szymaszek, Aneta; Szelag, Elzbieta

    2015-01-01

    Background: Temporal information processing (TIP) underlies many aspects of cognitive functions like language, motor control, learning, memory, attention, etc. Millisecond timing may be assessed by sequencing abilities, e.g. the perception of event order. It may be measured with auditory temporal-order-threshold (TOT), i.e. a minimum time gap…

  7. The Role of Visual and Auditory Temporal Processing for Chinese Children with Developmental Dyslexia

    Science.gov (United States)

    Chung, Kevin K. H.; McBride-Chang, Catherine; Wong, Simpson W. L.; Cheung, Him; Penney, Trevor B.; Ho, Connie S. -H.

    2008-01-01

    This study examined temporal processing in relation to Chinese reading acquisition and impairment. The performances of 26 Chinese primary school children with developmental dyslexia on tasks of visual and auditory temporal order judgement, rapid naming, visual-orthographic knowledge, morphological, and phonological awareness were compared with…

  8. Observations on auditory learning in amplitude- and frequency-modulation rate discrimination

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.

    2010-01-01

    Because amplitude- and frequency-modulated sounds can be the basis for the synthesis of many complex sounds, they can be good candidates in the design of training systems aiming at improving the acquisition of perceptual skills that can benefit from information provided via the auditory channel. One of the key issues when designing such training systems is the assessment of transfer of learning. In this study we present data on the learning of an auditory task involving sinusoidal amplitude- and frequency-modulated tones. Modulation rate discrimination thresholds were measured during pre-training, training, and post-training stages. During training, listeners were divided into two groups: one group trained on amplitude-modulation rate discrimination and the other group trained on frequency-modulation rate discrimination. Results will be discussed in terms of their implications for training applications by addressing the transfer of learning across carrier frequency, modulation rate, and modulation type.

  9. Cortical Auditory-Evoked Responses in Preterm Neonates: Revisited by Spectral and Temporal Analyses.

    Science.gov (United States)

    Kaminska, A; Delattre, V; Laschet, J; Dubois, J; Labidurie, M; Duval, A; Manresa, A; Magny, J-F; Hovhannisyan, S; Mokhtari, M; Ouss, L; Boissel, A; Hertz-Pannier, L; Sintsov, M; Minlebaev, M; Khazipov, R; Chiron, C

    2017-08-11

    Characteristic preterm EEG patterns of "Delta-brushes" (DBs) have been reported in the temporal cortex following auditory stimuli, but their spatio-temporal dynamics remains elusive. Using 32-electrode EEG recordings and co-registration of electrodes' position to 3D-MRI of age-matched neonates, we explored the cortical auditory-evoked responses (AERs) after 'click' stimuli in 30 healthy neonates aged 30-38 post-menstrual weeks (PMW). (1) We visually identified auditory-evoked DBs within AERs in all the babies between 30 and 33 PMW and a decreasing response rate afterwards. (2) The AERs showed an increase in EEG power from delta to gamma frequency bands over the middle and posterior temporal regions with higher values in quiet sleep and on the right. (3) Time-frequency and averaging analyses showed that the delta component of DBs, which negatively peaked around 550 and 750 ms over the middle and posterior temporal regions, respectively, was superimposed with fast (alpha-gamma) oscillations and corresponded to the late part of the cortical auditory-evoked potential (CAEP), a feature missed when using classical CAEP processing. As evoked DBs rate and AERs delta to alpha frequency power decreased until full term, auditory-evoked DBs are thus associated with the prenatal development of auditory processing and may suggest an early emerging hemispheric specialization. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
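
    The band-wise characterization of auditory-evoked responses described above can be illustrated, in minimal form, by band-pass filtering an evoked epoch and comparing post-stimulus with pre-stimulus power in slow and fast bands. The sketch below uses a simulated epoch and assumed band edges; the study itself used full time-frequency analyses of 32-channel EEG co-registered to MRI, which this does not reproduce.

```python
# Sketch: comparing post-stimulus vs pre-stimulus power in slow (delta) and
# fast (gamma) bands for a single auditory-evoked EEG epoch, using zero-phase
# band-pass filters. The epoch is simulated.
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 500
t = np.arange(-0.5, 1.5, 1 / fs)            # epoch around the 'click' at t = 0
rng = np.random.default_rng(5)

# Toy delta-brush-like response: a slow wave near 0.6 s with superimposed
# fast oscillations, plus background noise
slow = -20 * np.exp(-((t - 0.6) ** 2) / (2 * 0.15 ** 2)) * (t > 0)
fast = 5 * np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.6) ** 2) / (2 * 0.1 ** 2)) * (t > 0)
eeg = slow + fast + rng.normal(0, 2, size=t.size)

def band_power(x, low, high, mask):
    """Mean squared amplitude of x in [low, high] Hz over the masked samples."""
    sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
    return np.mean(sosfiltfilt(sos, x)[mask] ** 2)

pre = t < 0
post = (t >= 0.2) & (t <= 1.0)
for name, (lo, hi) in {"delta (0.5-4 Hz)": (0.5, 4), "gamma (30-80 Hz)": (30, 80)}.items():
    ratio = band_power(eeg, lo, hi, post) / band_power(eeg, lo, hi, pre)
    print(f"{name}: post/pre power ratio = {ratio:.1f}")
```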

  10. Temporal coordination in joint music performance: effects of endogenous rhythms and auditory feedback.

    Science.gov (United States)

    Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline

    2015-02-01

    Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.
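
    Temporal coordination in duet performance is typically quantified from tone-onset asynchronies: each onset produced by one pianist is paired with the partner's corresponding onset, and the signed differences are summarized across the piece. The sketch below computes such summary measures for hypothetical onset times; it is not the authors' analysis.

```python
# Sketch: quantifying duet coordination from tone-onset asynchronies, i.e.
# the signed time differences between corresponding onsets of two performers.
# Onset times (in seconds) are hypothetical.
import numpy as np

pianist_a = np.array([0.000, 0.510, 1.020, 1.530, 2.040, 2.550])
pianist_b = np.array([0.012, 0.498, 1.035, 1.545, 2.031, 2.566])

async_ms = 1000 * (pianist_a - pianist_b)      # signed asynchronies per tone pair
print(f"mean signed asynchrony: {async_ms.mean():+.1f} ms (who tends to lead or lag)")
print(f"mean absolute asynchrony: {np.abs(async_ms).mean():.1f} ms (overall coordination)")
print(f"SD of asynchronies: {async_ms.std(ddof=1):.1f} ms")
```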

  11. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  12. Task-dependent activations of human auditory cortex during pitch discrimination and pitch memory tasks.

    Science.gov (United States)

    Rinne, Teemu; Koistinen, Sonja; Salonen, Oili; Alho, Kimmo

    2009-10-21

    The functional organization of auditory cortex (AC) is still poorly understood. Previous studies suggest segregation of auditory processing streams for spatial and nonspatial information located in the posterior and anterior AC, respectively (Rauschecker and Tian, 2000; Arnott et al., 2004; Lomber and Malhotra, 2008). Furthermore, previous studies have shown that active listening tasks strongly modulate AC activations (Petkov et al., 2004; Fritz et al., 2005; Polley et al., 2006). However, the task dependence of AC activations has not been systematically investigated. In the present study, we applied high-resolution functional magnetic resonance imaging of the AC and adjacent areas to compare activations during pitch discrimination and n-back pitch memory tasks that were varied parametrically in difficulty. We found that anterior AC activations were increased during discrimination but not during memory tasks, while activations in the inferior parietal lobule posterior to the AC were enhanced during memory tasks but not during discrimination. We also found that wide areas of the anterior AC and anterior insula were strongly deactivated during the pitch memory tasks. While these results are consistent with the proposition that the anterior and posterior AC belong to functionally separate auditory processing streams, our results show that this division is present also between tasks using spatially invariant sounds. Together, our results indicate that activations of human AC are strongly dependent on the characteristics of the behavioral task.

  13. Relation between temporal envelope coding, pitch discrimination, and compression estimates in listeners with sensorineural hearing loss

    DEFF Research Database (Denmark)

    Bianchi, Federica; Santurette, Sébastien; Fereczkowski, Michal

    2015-01-01

    Recent physiological studies in animals showed that noise-induced sensorineural hearing loss (SNHL) increased the amplitude of envelope coding in single auditory-nerve fibers. The present study investigated whether SNHL in human listeners was associated with enhanced temporal envelope coding, whether this enhancement affected pitch discrimination performance, and whether loss of compression following SNHL was a potential factor in envelope coding enhancement. Envelope processing was assessed in normal-hearing (NH) and hearing-impaired (HI) listeners in a behavioral amplitude-modulation detection task, and pitch discrimination was assessed as a function of harmonic resolvability. For the unresolved conditions, all five HI listeners performed as well as or better than NH listeners with matching musical experience. Two HI listeners showed lower amplitude-modulation detection thresholds than NH listeners for low modulation rates, and one of these listeners also showed a loss of compression.

  14. Auditory behavior and auditory temporal resolution in children with sleep-disordered breathing.

    Science.gov (United States)

    Leite Filho, Carlos Alberto; Silva, Fábio Ferreira da; Pradella-Hallinan, Márcia; Xavier, Sandra Doria; Miranda, Mônica Carolina; Pereira, Liliane Desgualdo

    2017-06-01

    Intermittent hypoxia caused by obstructive sleep apnea syndrome (OSAS) may lead to damage in brain areas associated to auditory processing. The aim of this study was to compare children with OSAS or primary snoring (PS) to children without sleep-disordered breathing with regard to their performance on the Gaps-in-Noise (GIN) test and the Scale of Auditory Behaviors (SAB) questionnaire. Thirty-seven children (6-12 years old) were submitted to sleep anamnesis and in-lab night-long polysomnography. Three groups were organized according to clinical criteria: OSAS group (13 children), PS group (13 children), and control group (11 children). They were submitted to the GIN test and parents answered SAB questionnaire. The Kruskal-Wallis statistical test was used to compare the groups; p auditory behavior in children. These findings suggest that sleep-disordered breathing may lead to auditory behavior impairment. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Effects of Temporal Congruity Between Auditory and Visual Stimuli Using Rapid Audio-Visual Serial Presentation.

    Science.gov (United States)

    An, Xingwei; Tang, Jiabei; Liu, Shuang; He, Feng; Qi, Hongzhi; Wan, Baikun; Ming, Dong

    2016-10-01

    Combining visual and auditory stimuli in event-related potential (ERP)-based spellers has gained more attention in recent years. Few of these studies consider the differences in ERP components and system efficiency caused by shifts between visual and auditory onsets. Here, we aim to study the effect of temporal congruity of auditory and visual stimulus onsets on a bimodal brain-computer interface (BCI) speller. We designed five combined visual and auditory paradigms with different visual-to-auditory delays (-33 to +100 ms). Eleven participants took part in this study. ERPs were acquired and aligned to visual and auditory stimulus onsets, respectively. ERPs at the Fz, Cz, and PO7 channels were studied through statistical analysis of the different conditions for both visual-aligned and audio-aligned ERPs. Based on the visual-aligned ERPs, classification accuracy was also analyzed to assess the effects of visual-to-auditory delays. The latencies of ERP components depended mainly on visual stimulus onset. Auditory stimulus onsets influenced mainly early component accuracies, whereas visual stimulus onset determined later component accuracies. The latter, however, played a dominant role in overall classification. This study is important for further studies to achieve better explanations and ultimately determine how to optimize bimodal BCI applications.

  16. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated

  17. Lateralization of Auditory rhythm length in temporal lobe lessions

    NARCIS (Netherlands)

    Alpherts, W.C.J.; Vermeulen, J.; Franken, M.L.O.; Hendriks, M.P.H.; Veelen, C.W.M. van; Rijen, P.C. van

    2002-01-01

    In the visual modality, short rhythmic stimuli have been shown to be better processed (sequentially) by the left hemisphere, while longer rhythms appear to be better (holistically) processed by the right hemisphere. This study was set up to see if the same holds in the auditory modality. The rhythm

  18. Fast learning of simple perceptual discriminations reduces brain activation in working memory and in high-level auditory regions.

    Science.gov (United States)

    Daikhin, Luba; Ahissar, Merav

    2015-07-01

    Introducing simple stimulus regularities facilitates learning of both simple and complex tasks. This facilitation may reflect an implicit change in the strategies used to solve the task when successful predictions regarding incoming stimuli can be formed. We studied the modifications in brain activity associated with fast perceptual learning based on regularity detection. We administered a two-tone frequency discrimination task and measured brain activation (fMRI) under two conditions: with and without a repeated reference tone. Although participants could not explicitly tell the difference between these two conditions, the introduced regularity affected both performance and the pattern of brain activation. The "No-Reference" condition induced a larger activation in frontoparietal areas known to be part of the working memory network. However, only the condition with a reference showed fast learning, which was accompanied by a reduction of activity in two regions: the left intraparietal area, involved in stimulus retention, and the posterior superior-temporal area, involved in representing auditory regularities. We propose that this joint reduction reflects a reduction in the need for online storage of the compared tones. We further suggest that this change reflects an implicit strategic shift "backwards" from reliance mainly on working memory networks in the "No-Reference" condition to increased reliance on detected regularities stored in high-level auditory networks.

  19. Speech discrimination difficulties in High-Functioning Autism Spectrum Disorder are likely independent of auditory hypersensitivity.

    Directory of Open Access Journals (Sweden)

    William Andrew Dunlop

    2016-08-01

    Full Text Available Autism Spectrum Disorder (ASD), characterised by impaired communication skills and repetitive behaviours, can also result in differences in sensory perception. Individuals with ASD often perform normally in simple auditory tasks but poorly compared to typically developed (TD) individuals on complex auditory tasks like discriminating speech from complex background noise. A common trait of individuals with ASD is hypersensitivity to auditory stimulation. No studies to our knowledge consider whether hypersensitivity to sounds is related to differences in speech-in-noise discrimination. We provide novel evidence that individuals with high-functioning ASD show poor performance compared to TD individuals in a speech-in-noise discrimination task with an attentionally demanding background noise, but not in a purely energetic noise. Further, we demonstrate in our small sample that speech-hypersensitivity does not appear to predict performance in the speech-in-noise task. The findings support the argument that an attentional deficit, rather than a perceptual deficit, affects the ability of individuals with ASD to discriminate speech from background noise. Finally, we piloted a novel questionnaire that measures difficulty hearing in noisy environments, and sensitivity to non-verbal and verbal sounds. Psychometric analysis using 128 TD participants provided novel evidence for a difference in sensitivity to non-verbal and verbal sounds, and these findings were reinforced by participants with ASD who also completed the questionnaire. The study was limited by a small and high-functioning sample of participants with ASD. Future work could test larger sample sizes and include lower-functioning ASD participants.

  1. Psychophysical Estimates of Frequency Discrimination: More than Just Limitations of Auditory Processing

    Directory of Open Access Journals (Sweden)

    Beate Sabisch

    2013-07-01

    Full Text Available Efficient auditory processing is hypothesized to support language and literacy development. However, behavioral tasks used to assess this hypothesis need to be robust to non-auditory specific individual differences. This study compared frequency discrimination abilities in a heterogeneous sample of adults using two different psychoacoustic task designs, referred to here as: 2I_6A_X and 3I_2AFC designs. The role of individual differences in nonverbal IQ (NVIQ), socioeconomic status (SES), and musical experience in predicting frequency discrimination thresholds on each task was assessed using multiple regression analyses. The 2I_6A_X task was more cognitively demanding and hence more susceptible to differences specifically in SES and musical training. Performance on this task did not, however, relate to nonword repetition ability (a measure of language learning capacity). The 3I_2AFC task, by contrast, was only susceptible to musical training. Moreover, thresholds measured using it predicted some variance in nonword repetition performance. This design thus seems suitable for use in studies addressing questions regarding the role of auditory processing in supporting language and literacy development.
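
    The regression logic described above can be sketched as follows; the predictors (NVIQ, SES, years of musical training) match the abstract, but the data are simulated placeholders and statsmodels is only one possible tool.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 60
nviq = rng.normal(100, 15, n)            # nonverbal IQ
ses = rng.normal(0, 1, n)                # socioeconomic status (standardized)
music_years = rng.integers(0, 15, n)     # years of musical training
# Simulated log-thresholds loosely related to the predictors.
log_threshold = (2.0 - 0.005 * nviq - 0.10 * ses - 0.03 * music_years
                 + rng.normal(0, 0.2, n))

X = sm.add_constant(np.column_stack([nviq, ses, music_years]))
fit = sm.OLS(log_threshold, X).fit()
print(fit.params)        # intercept followed by one slope per predictor
print(f"R^2 = {fit.rsquared:.2f}")
```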

  2. Musical Sophistication and the Effect of Complexity on Auditory Discrimination in Finnish Speakers

    Science.gov (United States)

    Dawson, Caitlin; Aalto, Daniel; Šimko, Juraj; Vainio, Martti; Tervaniemi, Mari

    2017-01-01

    Musical experiences and native language are both known to affect auditory processing. The present work aims to disentangle the influences of native language phonology and musicality on behavioral and subcortical sound feature processing in a population of musically diverse Finnish speakers as well as to investigate the specificity of enhancement from musical training. Finnish speakers are highly sensitive to duration cues since in Finnish, vowel and consonant duration determine word meaning. Using a correlational approach with a set of behavioral sound feature discrimination tasks, brainstem recordings, and a musical sophistication questionnaire, we find no evidence for an association between musical sophistication and more precise duration processing in Finnish speakers either in the auditory brainstem response or in behavioral tasks, but musically sophisticated speakers do show enhanced pitch discrimination compared with Finnish speakers with less musical experience, as well as greater duration modulation in a complex task. These results are consistent with a ceiling effect set for certain sound features which corresponds to the phonology of the native language, leaving an opportunity for music experience-based enhancement of sound features not explicitly encoded in the language (such as pitch, which is not explicitly encoded in Finnish). Finally, the pattern of duration modulation in more musically sophisticated Finnish speakers suggests integrated feature processing for greater efficiency in a real-world musical situation. These results have implications for research into the specificity of plasticity in the auditory system and into the interaction of specific language features with musical experience. PMID: 28450829

  3. Examination of the Relation between an Assessment of Skills and Performance on Auditory-Visual Conditional Discriminations for Children with Autism Spectrum Disorder

    Science.gov (United States)

    Kodak, Tiffany; Clements, Andrea; Paden, Amber R.; LeBlanc, Brittany; Mintz, Joslyn; Toussaint, Karen A.

    2015-01-01

    The current investigation evaluated repertoires that may be related to performance on auditory-to-visual conditional discrimination training with 9 students who had been diagnosed with autism spectrum disorder. The skills included in the assessment were matching, imitation, scanning, an auditory discrimination, and a visual discrimination. The…

  4. Temporal distance and discrimination: an audit study in academia.

    Science.gov (United States)

    Milkman, Katherine L; Akinola, Modupe; Chugh, Dolly

    2012-07-01

    Through a field experiment set in academia (with a sample of 6,548 professors), we found that decisions about distant-future events were more likely to generate discrimination against women and minorities (relative to Caucasian males) than were decisions about near-future events. In our study, faculty members received e-mails from fictional prospective doctoral students seeking to schedule a meeting either that day or in 1 week; students' names signaled their race (Caucasian, African American, Hispanic, Indian, or Chinese) and gender. When the requests were to meet in 1 week, Caucasian males were granted access to faculty members 26% more often than were women and minorities; also, compared with women and minorities, Caucasian males received more and faster responses. However, these patterns were essentially eliminated when prospective students requested a meeting that same day. Our identification of a temporal discrimination effect is consistent with the predictions of construal-level theory and implies that subtle contextual shifts can alter patterns of race- and gender-based discrimination.

  5. Opposite Distortions in Interval Timing Perception for Visual and Auditory Stimuli with Temporal Modulations.

    Science.gov (United States)

    Yuasa, Kenichi; Yotsumoto, Yuko

    2015-01-01

    When an object is presented visually and moves or flickers, the perception of its duration tends to be overestimated. Such an overestimation is called time dilation. Perceived time can also be distorted when a stimulus is presented aurally as an auditory flutter, but the mechanisms and their relationship to visual processing remain unclear. In the present study, we measured interval timing perception while modulating the temporal characteristics of visual and auditory stimuli, and investigated whether the interval times of visually and aurally presented objects shared a common mechanism. In these experiments, participants compared the durations of flickering or fluttering stimuli to standard stimuli, which were presented continuously. Perceived durations for auditory flutters were underestimated, while perceived durations of visual flickers were overestimated. When auditory flutters and visual flickers were presented simultaneously, these distortion effects were cancelled out. When auditory flutters were presented with a constantly presented visual stimulus, the interval timing perception of the visual stimulus was affected by the auditory flutters. These results indicate that interval timing perception is governed by independent mechanisms for visual and auditory processing, and that there are some interactions between the two processing systems.

  6. Temporal processing and long-latency auditory evoked potential in stutterers

    Directory of Open Access Journals (Sweden)

    Raquel Prestes

    Full Text Available Introduction: Stuttering is a speech fluency disorder and may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. Objective: To characterize the temporal processing and long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. Methods: The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n = 20) and non-stutterers (n = 21), compared according to age, education, and sex. All subjects were submitted to the duration pattern tests, random gap detection test, and long-latency auditory evoked potential. Results: Individuals who stutter showed poorer performance on Duration Pattern and Random Gap Detection tests when compared with fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of N2 and P3 components; stutterers had higher latency values. Conclusion: Stutterers have poor performance in temporal processing and higher latency values for N2 and P3 components.

  7. Spectral and temporal auditory processing in the superior colliculus of aged rats.

    Science.gov (United States)

    Costa, Margarida; Lepore, Franco; Guillemot, Jean-Paul

    2017-09-01

    Presbyacusis reflects dysfunctions present along the central auditory pathway. Given that the topographic representation of the auditory directional spatial map deteriorates in the superior colliculus of aged animals, are spectral and temporal auditory processes also altered with aging in the rat's superior colliculus? Extracellular single-unit recordings were conducted in the superior colliculus of anesthetized Sprague-Dawley adult (10 months) and aged (22 months) rats. In the spectral domain, level thresholds in aged rats were significantly increased when superior colliculus auditory neurons were stimulated with pure tones or Gaussian noise bursts. The sharpness of the frequency response tuning curve at 10 dB SPL above threshold was also significantly broader among the aged rats. Furthermore, in the temporal domain, the minimal silent gap thresholds to Gaussian noises were significantly longer in aged rats. Hence, these results highlight that spectral and temporal auditory processing in the superior colliculus is impaired during aging. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Prior auditory information shapes visual category-selectivity in ventral occipito-temporal cortex.

    Science.gov (United States)

    Adam, Ruth; Noppeney, Uta

    2010-10-01

    Objects in our natural environment generate signals in multiple sensory modalities. This fMRI study investigated the influence of prior task-irrelevant auditory information on visually-evoked category-selective activations in the ventral occipito-temporal cortex. Subjects categorized pictures as landmarks or animal faces, while ignoring the preceding congruent or incongruent sound. Behaviorally, subjects responded slower to incongruent than congruent stimuli. At the neural level, the lateral and medial prefrontal cortices showed increased activations for incongruent relative to congruent stimuli consistent with their role in response selection. In contrast, the parahippocampal gyri combined visual and auditory information additively: activation was greater for visual landmarks than animal faces and landmark-related sounds than animal vocalizations resulting in increased parahippocampal selectivity for congruent audiovisual landmarks. Effective connectivity analyses showed that this amplification of visual landmark-selectivity was mediated by increased negative coupling of the parahippocampal gyrus with the superior temporal sulcus for congruent stimuli. Thus, task-irrelevant auditory information influences visual object categorization at two stages. In the ventral occipito-temporal cortex auditory and visual category information are combined additively to sharpen visual category-selective responses. In the left inferior frontal sulcus, as indexed by a significant incongruency effect, visual and auditory category information are integrated interactively for response selection. Copyright 2010 Elsevier Inc. All rights reserved.

  9. Temporally selective processing of communication signals by auditory midbrain neurons

    DEFF Research Database (Denmark)

    Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B

    2011-01-01

    Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble...... click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies...

  10. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase...

  11. Auditory Temporal Processing and Working Memory: Two Independent Deficits for Dyslexia

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…

  12. Adaptation to Delayed Speech Feedback Induces Temporal Recalibration between Vocal Sensory and Auditory Modalities

    Directory of Open Access Journals (Sweden)

    Kosuke Yamamoto

    2011-10-01

    Full Text Available We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. Participants read some sentences with specific delay times of DAF (0, 30, 75, 120 ms) for three minutes to induce ‘Lag Adaptation’. After the adaptation, they judged the simultaneity between the motor sensation and the vocal sound given as feedback while producing a simple voiced sound but not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.

  13. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  14. Pure word deafness with auditory object agnosia after bilateral lesion of the superior temporal sulcus.

    Science.gov (United States)

    Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies

    2015-12-01

    Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Temporal constraints on apparent motion in auditory space.

    Science.gov (United States)

    Lakatos, S

    1993-08-01

    The hypothesis that the extent of spatial separation between successive sound events directly affects the perception of time intervals between these events was tested using an apparent motion paradigm. Subjects listened to four-tone pitch patterns whose individual tones were sounded alternately at one of two loudspeaker positions, and they adjusted the alternation rate until they could no longer distinguish the four-tone ordering of the pattern. Four horizontal and two vertical loudspeaker separations were tested. Results indicate a direct relation between horizontal separation and the critical stimulus onset asynchrony (SOA) between successive tones within a pattern. At the critical SOA, subjects reported hearing not a four-tone pattern, but two pairs of two-note groups overlapping in time. The findings are discussed in the context of auditory spatial processing mechanisms and possible sensory-specific representational constraints.

  16. Local field potential correlates of auditory working memory in primate dorsal temporal pole.

    Science.gov (United States)

    Bigelow, James; Ng, Chi-Wing; Poremba, Amy

    2016-06-01

    Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special

  17. Large cross-sectional study of presbycusis reveals rapid progressive decline in auditory temporal acuity.

    Science.gov (United States)

    Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A

    2016-07-01

    The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. Copyright © 2016 Elsevier Inc. All rights reserved.
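
    The per-decade decline reported above amounts to fitting a linear age trend to cross-sectional thresholds; a toy version, with simulated data chosen only to mimic the reported slope, is sketched below.

```python
import numpy as np

rng = np.random.default_rng(7)
age = rng.uniform(18, 98, 1071)                             # years
tgdt = 3.0 + 0.105 * (age - 18) + rng.normal(0, 2.0, 1071)  # simulated gap thresholds (ms)

slope_per_year, intercept = np.polyfit(age, tgdt, 1)
print(f"estimated decline: {slope_per_year * 10:.2f} ms per decade")
```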

  18. Encoding of natural sounds at multiple spectral and temporal resolutions in the human auditory cortex.

    Directory of Open Access Journals (Sweden)

    Roberta Santoro

    2014-01-01

    Full Text Available Functional neuroimaging research provides detailed observations of the response patterns that natural sounds (e.g., human voices and speech, animal cries, environmental sounds) evoke in the human brain. The computational and representational mechanisms underlying these observations, however, remain largely unknown. Here we combine high spatial resolution (3 and 7 Tesla) functional magnetic resonance imaging (fMRI) with computational modeling to reveal how natural sounds are represented in the human brain. We compare competing models of sound representations and select the model that most accurately predicts fMRI response patterns to natural sounds. Our results show that the cortical encoding of natural sounds entails the formation of multiple representations of sound spectrograms with different degrees of spectral and temporal resolution. The cortex derives these multi-resolution representations through frequency-specific neural processing channels and through the combined analysis of the spectral and temporal modulations in the spectrogram. Furthermore, our findings suggest that a spectral-temporal resolution trade-off may govern the modulation tuning of neuronal populations throughout the auditory cortex. Specifically, our fMRI results suggest that neuronal populations in posterior/dorsal auditory regions preferably encode coarse spectral information with high temporal precision. Vice-versa, neuronal populations in anterior/ventral auditory regions preferably encode fine-grained spectral information with low temporal precision. We propose that such a multi-resolution analysis may be crucially relevant for flexible and behaviorally-relevant sound processing and may constitute one of the computational underpinnings of functional specialization in auditory cortex.
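
    A rough sketch of the spectro-temporal modulation analysis such models build on: take a log-spectrogram and apply a 2-D FFT to obtain power as a function of temporal and spectral modulation. The signal and parameters below are assumptions, and a log-frequency axis would be needed to express spectral modulation in cycles/octave rather than cycles/Hz.

```python
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# A 1 kHz tone with 8 Hz amplitude modulation stands in for a natural sound.
sound = np.sin(2 * np.pi * 1000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))

freqs, times, sxx = signal.spectrogram(sound, fs=fs, nperseg=512, noverlap=384)
log_spec = np.log(sxx + 1e-12)
log_spec -= log_spec.mean()                   # remove the overall level before the FFT

mod_power = np.abs(np.fft.fftshift(np.fft.fft2(log_spec))) ** 2
temporal_mod = np.fft.fftshift(np.fft.fftfreq(times.size, d=times[1] - times[0]))  # Hz
spectral_mod = np.fft.fftshift(np.fft.fftfreq(freqs.size, d=freqs[1] - freqs[0]))  # cycles/Hz
print(mod_power.shape, temporal_mod.max(), spectral_mod.max())
```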

  19. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole
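
    Coherence between a speech envelope and a cortical signal, the core quantity in the analysis above, can be estimated from Welch-style cross-spectra; the sketch below uses a simulated "cortical" trace and scipy's coherence routine, so it illustrates the method rather than the study's MEG pipeline.

```python
import numpy as np
from scipy import signal

fs = 200.0                                     # Hz, downsampled envelope/MEG rate
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(5)
envelope = np.abs(np.sin(2 * np.pi * 0.5 * t)) + 0.1 * rng.standard_normal(t.size)
cortical = 0.6 * envelope + 0.8 * rng.standard_normal(t.size)   # simulated tracking signal

freqs, coh = signal.coherence(envelope, cortical, fs=fs, nperseg=2048)
band = (freqs >= 0.3) & (freqs <= 1.0)         # around the ~0.5 Hz modulations
print(f"mean coherence near 0.5 Hz: {coh[band].mean():.2f}")
```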

  1. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
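
    The phase-synchronization measure discussed above can be approximated by band-passing two channels in the theta range, extracting instantaneous phase with the Hilbert transform, and computing a phase-locking value; the channels below are simulated and the filter settings are assumptions.

```python
import numpy as np
from scipy import signal

fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(8)
theta_drive = np.sin(2 * np.pi * 6 * t)                                   # shared 6 Hz component
frontal = theta_drive + 0.5 * rng.standard_normal(t.size)                 # hypothetical frontal channel
temporal = np.roll(theta_drive, 10) + 0.5 * rng.standard_normal(t.size)   # hypothetical temporal channel

b, a = signal.butter(4, [4, 8], btype="bandpass", fs=fs)

def theta_phase(x):
    """Instantaneous phase of the theta-band component."""
    return np.angle(signal.hilbert(signal.filtfilt(b, a, x)))

plv = np.abs(np.mean(np.exp(1j * (theta_phase(frontal) - theta_phase(temporal)))))
print(f"fronto-temporal theta phase-locking value: {plv:.2f}")
```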

  2. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  3. Assessing the effects of temporal coherence on auditory stream formation through comodulation masking release

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Oxenham, Andrew J.

    2014-01-01

    , based on comodulation masking release (CMR), to assess the conditions under which a loss of temporal coherence across frequency can lead to auditory stream segregation. The measure relies on the assumption that the CMR, produced by flanking bands remote from the masker and target frequency, only occurs...... if the masking and flanking bands form part of the same perceptual stream. The masking and flanking bands consisted of sequences of narrowband noise bursts, and the temporal coherence between the masking and flanking bursts was manipulated in two ways: (a) By introducing a fixed temporal offset between...... the flanking and masking bands that varied from zero to 60 ms and (b) by presenting the flanking and masking bursts at different temporal rates, so that the asynchronies varied from burst to burst. The results showed reduced CMR in all conditions where the flanking and masking bands were temporally incoherent...
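
    A hedged sketch of the stimulus manipulation described above: trains of narrowband noise bursts for a masking band and a flanking band, with a fixed temporal offset applied to the flanker. Center frequencies, bandwidths, and burst timing are illustrative assumptions rather than the study's exact parameters.

```python
import numpy as np
from scipy import signal

fs = 44100
burst_dur, gap_dur = 0.025, 0.025             # 25 ms bursts separated by 25 ms gaps
n_bursts = 10
rng = np.random.default_rng(2)

def narrowband_burst(center_hz, bw_hz=50.0):
    """One burst of band-limited noise centered on center_hz."""
    sos = signal.butter(4, [center_hz - bw_hz, center_hz + bw_hz],
                        btype="bandpass", fs=fs, output="sos")
    return signal.sosfilt(sos, rng.standard_normal(int(burst_dur * fs)))

def burst_train(center_hz, offset_s=0.0):
    """A train of bursts, optionally delayed by a fixed temporal offset."""
    gap = np.zeros(int(gap_dur * fs))
    bursts = [np.concatenate([narrowband_burst(center_hz), gap]) for _ in range(n_bursts)]
    return np.concatenate([np.zeros(int(offset_s * fs))] + bursts)

masker = burst_train(1000.0)                   # on-frequency masking band
flanker = burst_train(2500.0, offset_s=0.030)  # remote flanking band, 30 ms offset
n = min(masker.size, flanker.size)
stimulus = masker[:n] + flanker[:n]            # temporally incoherent bands
print(stimulus.size / fs, "s of stimulus")
```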

  4. Importance of the left auditory areas in chord discrimination in music experts as demonstrated by MEG.

    Science.gov (United States)

    Tervaniemi, Mari; Sannemann, Christian; Noyranen, Maiju; Salonen, Johanna; Pihko, Elina

    2011-08-01

    The brain basis behind musical competence in its various forms is not yet known. To determine the pattern of hemispheric lateralization during sound-change discrimination, we recorded the magnetic counterpart of the electrical mismatch negativity (MMNm) responses in professional musicians, musical participants (with high scores in the musicality tests but without professional training in music) and non-musicians. While watching a silenced video, they were presented with short sounds with frequency and duration deviants and C major chords with C minor chords as deviants. MMNm to chord deviants was stronger in both musicians and musical participants than in non-musicians, particularly in their left hemisphere. No group differences were obtained in the MMNm strength in the right hemisphere in any of the conditions or in the left hemisphere in the case of frequency or duration deviants. Thus, in addition to professional training in music, musical aptitude (combined with lower-level musical training) is also reflected in brain functioning related to sound discrimination. The present magnetoencephalographic evidence therefore indicates that the sound discrimination abilities may be differentially distributed in the brain in musically competent and naïve participants, especially in a musical context established by chord stimuli: the higher forms of musical competence engage both auditory cortices in an integrative manner. © 2011 The Authors. European Journal of Neuroscience © 2011 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  5. Spatial and Temporal High Processing of Visual and Auditory Stimuli in Cervical Dystonia.

    Science.gov (United States)

    Chillemi, Gaetana; Calamuneri, Alessandro; Morgante, Francesca; Terranova, Carmen; Rizzo, Vincenzo; Girlanda, Paolo; Ghilardi, Maria Felice; Quartarone, Angelo

    2017-01-01

    Investigation of spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on the perception of visual and auditory stimuli in the time and space domains. Previous psychophysiological studies have investigated temporal and spatial characteristics of neural processing of sensory stimuli (mainly somatosensorial and visual), whereas the definition of such processing at a higher cognitive level has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction. However, other cortical and subcortical areas, including the cerebellum, may also be involved. We tested 21 subjects with CD and 22 age-matched healthy controls with 4 recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided into three groups according to the head movement pattern type (lateral: Laterocollis, rotation: Torticollis) as well as the presence of tremor (Tremor). We found significant alteration of spatial processing in the Laterocollis subgroup compared to controls, whereas impairment of temporal processing was observed in the Torticollis subgroup compared to controls. Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern types might lead to different dysfunctions at the cognitive level within the dystonic population.

  6. Auditory Temporal Information Processing in Preschool Children at Family Risk for Dyslexia: Relations with Phonological Abilities and Developing Literacy Skills

    Science.gov (United States)

    Boets, Bart; Wouters, Jan; van Wieringen, Astrid; Ghesquiere, Pol

    2006-01-01

    In this project, the hypothesis of an auditory temporal processing deficit in dyslexia was tested by examining auditory processing in relation to phonological skills in two contrasting groups of five-year-old preschool children, a familial high risk and a familial low risk group. Participants were individually matched for gender, age, non-verbal…

  7. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian language, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers as less suitable for a leadership position, and male (but not female) listeners took distance from gay speakers. Together, this research demonstrates that having a gay/lesbian rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  8. Increased discriminability of authenticity from multimodal laughter is driven by auditory information.

    Science.gov (United States)

    Lavan, Nadine; McGettigan, Carolyn

    2017-10-01

    We present an investigation of the perception of authenticity in audiovisual laughter, in which we contrast spontaneous and volitional samples and examine the contributions of unimodal affective information to multimodal percepts. In a pilot study, we demonstrate that listeners perceive spontaneous laughs as more authentic than volitional ones, both in unimodal (audio-only, visual-only) and multimodal contexts (audiovisual). In the main experiment, we show that the discriminability of volitional and spontaneous laughter is enhanced for multimodal laughter. Analyses of relationships between affective ratings and the perception of authenticity show that, while both unimodal percepts significantly predict evaluations of audiovisual laughter, it is auditory affective cues that have the greater influence on multimodal percepts. We discuss differences and potential mismatches in emotion signalling through voices and faces, in the context of spontaneous and volitional behaviour, and highlight issues that should be addressed in future studies of dynamic multimodal emotion processing.

  9. A Rapid Assessment of Instructional Strategies to Teach Auditory-Visual Conditional Discriminations to Children with Autism

    Science.gov (United States)

    Kodak, Tiffany; Clements, Andrea; LeBlanc, Brittany

    2013-01-01

    The purpose of the present investigation was to evaluate a rapid assessment procedure to identify effective instructional strategies to teach auditory-visual conditional discriminations to children diagnosed with autism. We replicated and extended previous rapid skills assessments (Lerman, Vorndran, Addison, & Kuhn, 2004) by evaluating the effects…

  10. Neural correlates of auditory recognition memory in the primate dorsal temporal pole

    OpenAIRE

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2013-01-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. ...

  12. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.

  13. Auditory event-related brain potentials for an early discrimination between normal and pathological brain aging.

    Science.gov (United States)

    Dushanova, Juliana; Christov, Mario

    2013-05-25

    The brain, as a system with gradually decreasing resources, maximizes its chances by reorganizing neural networks to ensure efficient performance. Auditory event-related potentials were recorded in 28 healthy volunteers comprising 14 young and 14 elderly subjects in an auditory discrimination motor task (low-frequency tone - right-hand movement; high-frequency tone - left-hand movement). The amplitudes of the sensory event-related potential components (N1, P2) were more pronounced with increasing age for either tone, and for P2 this effect was most pronounced in the frontal region. The latency relationship of N1 between the groups was tone-dependent, while that of P2 was tone-independent, with a prominent delay in the elderly group over all brain regions. The amplitudes of the cognitive components (N2, P3) diminished with increasing age, and the hemispheric asymmetry of N2 (but not of P3) was reduced with increasing age. Prolonged N2 latency with increasing age was widespread for either tone, while the between-group difference in P3 latency was tone-dependent. High-frequency tone stimulation and movement requirements led to a P3 delay in the elderly group. The amplitude difference of the sensory components between the age groups could be due to generally greater alertness, less expressed habituation, or a decline in the ability to withdraw attentional resources from the stimuli in the elderly group. With aging, a neural circuit reorganization of brain activity affects the cognitive processes. The approach used in this study is useful for early discrimination between normal and pathological brain aging and, hence, for early treatment of cognitive alterations and dementia.
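
    The component measures referred to above reduce to averaging epochs time-locked to tone onset and reading off peak amplitude and latency within conventional windows; the sketch below does this on simulated data, with window boundaries chosen only for illustration.

```python
import numpy as np

fs = 500                                        # Hz
times = np.arange(-0.1, 0.6, 1 / fs)            # epoch from -100 to +600 ms
n_trials = 80
rng = np.random.default_rng(3)

# Simulated single trials: a negativity near 100 ms (N1-like) and a
# positivity near 300 ms (P3-like) buried in noise.
template = (-4 * np.exp(-((times - 0.10) / 0.02) ** 2)
            + 6 * np.exp(-((times - 0.30) / 0.05) ** 2))
epochs = template + rng.normal(0, 10, (n_trials, times.size))

erp = epochs.mean(axis=0)                       # the averaged event-related potential

def peak(window, polarity):
    """Peak amplitude (uV) and latency (ms) within a component window."""
    mask = (times >= window[0]) & (times <= window[1])
    idx = np.argmin(erp[mask]) if polarity == "neg" else np.argmax(erp[mask])
    return erp[mask][idx], times[mask][idx] * 1000

n1_amp, n1_lat = peak((0.08, 0.15), "neg")
p3_amp, p3_lat = peak((0.25, 0.45), "pos")
print(f"N1: {n1_amp:.1f} uV at {n1_lat:.0f} ms; P3: {p3_amp:.1f} uV at {p3_lat:.0f} ms")
```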

  14. Neural correlates of auditory recognition memory in the primate dorsal temporal pole.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2014-02-01

    Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.

  16. The role of the temporal pole in modulating primitive auditory memory.

    Science.gov (United States)

    Liu, Zhiliang; Wang, Qian; You, Yu; Yin, Peng; Ding, Hu; Bao, Xiaohan; Yang, Pengcheng; Lu, Hao; Gao, Yayue; Li, Liang

    2016-04-21

    Primitive auditory memory (PAM), which is recognized as the early point in the chain of the transient auditory memory system, faithfully maintains raw acoustic fine-structure signals for up to 20-30 milliseconds. The neural mechanisms underlying PAM have not been reported in the literature. Previous anatomical, brain-imaging, and neurophysiological studies have suggested that the temporal pole (TP), part of the parahippocampal region in the transitional area between perirhinal cortex and superior/inferior temporal gyri, is involved in auditory memories. This study investigated whether the TP plays a role in mediating/modulating PAM. The longest interaural interval (the interaural-delay threshold) for detecting a break in interaural correlation (BIC) embedded in interaurally correlated wideband noises was used to indicate the temporal preservation of PAM and examined in both healthy listeners and patients receiving unilateral anterior temporal lobectomy (ATL, centered on the TP) for treating their temporal lobe epilepsy (TLE). The results showed that patients with ATL were still able to detect the BIC even when an interaural interval was introduced, regardless of which ear was the leading one. However, in patient participants, the group-mean interaural-delay threshold for detecting the BIC under the contralateral-ear-leading (relative to the side of ATL) condition was significantly shorter than that under the ipsilateral-ear-leading condition. The results suggest that although the TP is not essential for integrating binaural signals and mediating the PAM, it plays a role in top-down modulating the PAM of raw acoustic fine-structure signals from the contralateral ear. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
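
    As a rough illustration of the stimulus logic behind the BIC task above, the sketch below builds interaurally correlated wideband noise with a brief independent segment (the break) and an interaural delay, then tracks the running interaural correlation; the durations and the 5 ms delay are assumed values.

```python
import numpy as np

fs = 48000
dur, break_dur = 1.0, 0.2                       # 1 s noise with a 200 ms break
lead = int(0.005 * fs)                          # 5 ms interaural interval
rng = np.random.default_rng(11)

left = rng.standard_normal(int(dur * fs))
right = left.copy()
start = int(0.4 * fs)
right[start:start + int(break_dur * fs)] = rng.standard_normal(int(break_dur * fs))
right = np.roll(right, lead)                    # right ear lags by 5 ms

# Running interaural correlation (delay-compensated) dips during the break.
win = int(0.05 * fs)
aligned_right = np.roll(right, -lead)
corr = [np.corrcoef(left[i:i + win], aligned_right[i:i + win])[0, 1]
        for i in range(0, left.size - win, win)]
print(np.round(corr, 2))
```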

  17. Auditory stimulus discrimination recorded in dogs, as indicated by mismatch negativity (MMN).

    Science.gov (United States)

    Howell, Tiffani J; Conduit, Russell; Toukhsati, Samia; Bennett, Pauleen

    2012-01-01

    Dog cognition research tends to rely on behavioural response, which can be confounded by obedience or motivation, as the primary means of indexing dog cognitive abilities. A physiological method of measuring dog cognitive processing would be instructive and could complement behavioural response. Electroencephalogram (EEG) has been used in humans to study stimulus processing, which results in waveforms called event-related potentials (ERPs). One ERP component, mismatch negativity (MMN), is a negative deflection approximately 160-200 ms after stimulus onset, which may be related to change detection from echoic sensory memory. We adapted a minimally invasive technique to record MMN in dogs. Dogs were exposed to an auditory oddball paradigm in which deviant tones (10% probability) were pseudo-randomly interspersed throughout an 8 min sequence of standard tones (90% probability). A significant difference in MMN ERP amplitude was observed after the deviant tone in comparison to the standard tone, t(5) = -2.98, p = 0.03. This difference, attributed to discrimination of an unexpected stimulus in a series of expected stimuli, was not observed when both tones occurred 50% of the time, t(1) = -0.82, p > 0.05. Dogs showed no evidence of pain or distress at any point. We believe this is the first illustration of MMN in a group of dogs and anticipate that this technique may provide valuable insights in cognitive tasks such as object discrimination. Copyright © 2011 Elsevier B.V. All rights reserved.
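
    A minimal sketch of the oddball sequence described above (10% deviants pseudo-randomly interspersed among 90% standards) is given below. The rule that two deviants never occur back-to-back is an assumed constraint for illustration, not a detail reported in the record.

        import numpy as np

        def oddball_sequence(n_trials=480, p_deviant=0.10, seed=1):
            """Pseudo-random oddball sequence with no two consecutive deviants."""
            rng = np.random.default_rng(seed)
            n_dev = int(round(n_trials * p_deviant))
            seq = np.array(['std'] * (n_trials - n_dev) + ['dev'] * n_dev)
            while True:
                rng.shuffle(seq)
                if not any(seq[i] == seq[i + 1] == 'dev' for i in range(len(seq) - 1)):
                    return seq

        seq = oddball_sequence()
        print((seq == 'dev').mean())   # ~0.10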

  18. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  19. Prepulse Inhibition of Auditory Cortical Responses in the Caudolateral Superior Temporal Gyrus in Macaca mulatta.

    Science.gov (United States)

    Chen, Zuyue; Parkkonen, Lauri; Wei, Jingkuan; Dong, Jin-Run; Ma, Yuanye; Carlson, Synnöve

    2018-04-01

    Prepulse inhibition (PPI) refers to a decreased response to a startling stimulus when another weaker stimulus precedes it. Most PPI studies have focused on the physiological startle reflex and fewer have reported the PPI of cortical responses. We recorded local field potentials (LFPs) in four monkeys and investigated whether the PPI of auditory cortical responses (alpha, beta, and gamma oscillations and evoked potentials) can be demonstrated in the caudolateral belt of the superior temporal gyrus (STGcb). We also investigated whether the presence of a conspecific, which draws attention away from the auditory stimuli, affects the PPI of auditory cortical responses. The PPI paradigm consisted of Pulse-only and Prepulse + Pulse trials that were presented randomly while the monkey was alone (ALONE) and while another monkey was present in the same room (ACCOMP). The LFPs to the Pulse were significantly suppressed by the Prepulse, thus demonstrating PPI of cortical responses in the STGcb. The PPI-related inhibition of the N1 amplitude of the evoked responses and cortical oscillations to the Pulse was not affected by the presence of a conspecific. In contrast, gamma oscillations and the amplitude of the N1 response to Pulse-only were suppressed in the ACCOMP condition compared to the ALONE condition. These findings demonstrate PPI in the monkey STGcb and suggest that the PPI of auditory cortical responses in the monkey STGcb is a pre-attentive inhibitory process that is independent of attentional modulation.

  20. Visual Speech Alters the Discrimination and Identification of Non-Intact Auditory Speech in Children with Hearing Loss

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F.; McAlpine, Rachel P.; Abdi, Hervé

    2017-01-01

    Objectives Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Methods Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/–B/aa or /–B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/–B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same—as opposed to different—responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /–B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz—as opposed to az— responses in the audiovisual than auditory mode. Results

  1. Visual speech alters the discrimination and identification of non-intact auditory speech in children with hearing loss.

    Science.gov (United States)

    Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé

    2017-03-01

    Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes yet we have little evidence about the role of early auditory experience and visual speech on the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets) as, for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same-as opposed to different-responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz-as opposed to az- responses in the audiovisual than auditory mode. Performance in the audiovisual mode showed more same

  2. Temporal precision and the capacity of auditory-verbal short-term memory.

    Science.gov (United States)

    Gilbert, Rebecca A; Hitch, Graham J; Hartley, Tom

    2017-12-01

    The capacity of serially ordered auditory-verbal short-term memory (AVSTM) is sensitive to the timing of the material to be stored, and both temporal processing and AVSTM capacity are implicated in the development of language. We developed a novel "rehearsal-probe" task to investigate the relationship between temporal precision and the capacity to remember serial order. Participants listened to a sub-span sequence of spoken digits and silently rehearsed the items and their timing during an unfilled retention interval. After an unpredictable delay, a tone prompted report of the item being rehearsed at that moment. An initial experiment showed cyclic distributions of item responses over time, with peaks preserving serial order and broad, overlapping tails. The spread of the response distributions increased with additional memory load and correlated negatively with participants' auditory digit spans. A second study replicated the negative correlation and demonstrated its specificity to AVSTM by controlling for differences in visuo-spatial STM and nonverbal IQ. The results are consistent with the idea that a common resource underpins both the temporal precision and capacity of AVSTM. The rehearsal-probe task may provide a valuable tool for investigating links between temporal processing and AVSTM capacity in the context of speech and language abilities.

  3. Effects of sound intensity on temporal properties of inhibition in the pallid bat auditory cortex

    Directory of Open Access Journals (Sweden)

    Khaleel A Razak

    2013-06-01

    Full Text Available Auditory neurons in bats that use frequency modulated (FM) sweeps for echolocation are selective for the behaviorally-relevant rates and direction of frequency change. Such selectivity arises through spectrotemporal interactions between excitatory and inhibitory components of the receptive field. In the pallid bat auditory system, the relationship between FM sweep direction/rate selectivity and spectral and temporal properties of sideband inhibition has been characterized. Of note is the temporal asymmetry in sideband inhibition, with low-frequency inhibition (LFI) exhibiting faster arrival times compared to high-frequency inhibition (HFI). Using the two-tone inhibition over time stimulus paradigm, this study investigated the interactions between two sound parameters in shaping sideband inhibition: intensity and time. Specifically, the impact of changing relative intensities of the excitatory and inhibitory tones on arrival time of inhibition was studied. Using this stimulation paradigm, single unit data from the auditory cortex of pentobarbital-anesthetized pallid bats show that the threshold for LFI is on average ~8 dB lower than HFI. For equal intensity tones near threshold, LFI is stronger than HFI. When the inhibitory tone intensity is increased further from threshold, the strength asymmetry decreased. The temporal asymmetry in LFI versus HFI arrival time is strongest when the excitatory and inhibitory tones are of equal intensities or if the excitatory tone is louder. As inhibitory tone intensity is increased, temporal asymmetry decreased, suggesting that the relative magnitude of excitatory and inhibitory inputs shape arrival time of inhibition and FM sweep rate and direction selectivity. Given that most FM bats use downward sweeps as echolocation calls, a similar asymmetry in threshold and strength of LFI versus HFI may be a general adaptation to enhance direction selectivity while maintaining sweep-rate selective responses to downward sweeps.

  4. A physiologically inspired model of auditory stream segregation based on a temporal coherence analysis

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2012-01-01

    The ability to perceptually separate acoustic sources and focus one's attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422-438 (2008)] was used as a front end of a model for auditory stream segregation. A temporal coherence analysis [M. Elhilali, C. Ling, C. Micheyl, A. J. Oxenham and S. Shamma, Neuron 61, 317-329 (2009)] was applied at the output of the preprocessing, using the coherence across tonotopic channels to group ... [... dissertation, Institute for Perception Research, Eindhoven, NL (1975)]. The same model also accounts for the perceptual grouping of distant spectral components in the case of synchronous presentation. The most essential components of the front-end and back-end processing in the framework of the presented ...

  5. Effects of tonotopicity, adaptation, modulation tuning, and temporal coherence in “primitive” auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2014-01-01

    The perceptual organization of two-tone sequences into auditory streams was investigated using a modeling framework consisting of an auditory pre-processing front end [Dau et al., J. Acoust. Soc. Am. 102, 2892–2905 (1997)] combined with a temporal coherence-analysis back end [Elhilali et al., Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) Stream segregation as a function of tone repetition time (TRT) and frequency separation (Df) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present study supported the hypothesis that forward masking enhances the ability to perceptually segregate spectrally close tone sequences. Furthermore, the modeling suggested that effects of neural adaptation and processing through modulation-frequency selective filters may enhance the sensitivity to onset ...

  6. The third-stimulus temporal discrimination threshold: focusing on the temporal processing of sensory input within primary somatosensory cortex.

    Science.gov (United States)

    Leodori, Giorgio; Formica, Alessandra; Zhu, Xiaoying; Conte, Antonella; Belvisi, Daniele; Cruccu, Giorgio; Hallett, Mark; Berardelli, Alfredo

    2017-10-01

    The somatosensory temporal discrimination threshold (STDT) has been used in recent years to investigate time processing of sensory information, but little is known about the physiological correlates of somatosensory temporal discrimination. The objective of this study was to investigate whether the time interval required to discriminate between two stimuli varies according to the number of stimuli in the task. We used the third-stimulus temporal discrimination threshold (ThirdDT), defined as the shortest time interval at which an individual distinguishes a third stimulus following a pair of stimuli delivered at the STDT. The STDT and ThirdDT were assessed in 31 healthy subjects. In a subgroup of 10 subjects, we evaluated the effects of the stimuli intensity on the ThirdDT. In a subgroup of 16 subjects, we evaluated the effects of S1 continuous theta-burst stimulation (S1-cTBS) on the STDT and ThirdDT. Results show that ThirdDT is shorter than STDT. We found a positive correlation between STDT and ThirdDT values. As long as the stimulus intensity was within the perceivable and painless range, it did not affect ThirdDT values. S1-cTBS significantly affected both STDT and ThirdDT, although the latter was affected to a greater extent and for a longer period of time. We conclude that the interval needed to discriminate between time-separated tactile stimuli is related to the number of stimuli used in the task. STDT and ThirdDT are encoded in S1, probably by a shared tactile temporal encoding mechanism whose performance rapidly changes during the perception process. ThirdDT is a new method to measure somatosensory temporal discrimination. NEW & NOTEWORTHY To investigate whether the time interval required to discriminate between stimuli varies according to changes in the stimulation pattern, we used the third-stimulus temporal discrimination threshold (ThirdDT). We found that the somatosensory temporal discrimination acuity varies according to the number of stimuli in the
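
    Thresholds of this kind are obtained with ascending series of paired (or triple) stimuli; a simulated version of such a procedure is sketched below. The sigmoidal observer, the step size, and the stopping rule are illustrative assumptions and do not reproduce the authors' protocol.

        import numpy as np

        def report_two(interval_ms, true_tdt_ms=40.0, slope=0.2, rng=None):
            """Simulated observer: p('I felt two stimuli') rises sigmoidally around the true TDT."""
            rng = np.random.default_rng(0) if rng is None else rng
            p = 1.0 / (1.0 + np.exp(-slope * (interval_ms - true_tdt_ms)))
            return rng.random() < p

        def ascending_tdt(step_ms=10.0, needed=3, rng=None):
            """Raise the inter-stimulus interval in fixed steps; take the first interval of a run
            of `needed` consecutive 'two' reports as the threshold (assumed stopping rule)."""
            rng = np.random.default_rng(1) if rng is None else rng
            interval, consecutive = 0.0, 0
            while consecutive < needed:
                interval += step_ms
                consecutive = consecutive + 1 if report_two(interval, rng=rng) else 0
            return interval - (needed - 1) * step_ms

        print(ascending_tdt(), 'ms')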

  7. Temporal Dynamics in Auditory Perceptual Learning: Impact of Sequencing and Incidental Learning

    Science.gov (United States)

    Church, Barbara A.; Mercado, Eduardo, III; Wisniewski, Matthew G.; Liu, Estella H.

    2013-01-01

    Training can improve perceptual sensitivities. We examined whether the temporal dynamics and the incidental versus intentional nature of training are important. Within the context of a birdsong rate discrimination task, we examined whether the sequencing of pretesting exposure to the stimuli mattered. Easy-to-hard (progressive) sequencing of…

  8. The effects of postnatal phthalate exposure on the development of auditory temporal processing in rats.

    Science.gov (United States)

    Kim, Bong Jik; Kim, Jungyoon; Keoboutdy, Vanhnansy; Kwon, Ho-Jang; Oh, Seung-Ha; Jung, Jae Yun; Park, Il Yong; Paik, Ki Chung

    2017-06-01

    The central auditory pathway is known to continue its development during the postnatal critical periods and is shaped by experience and sensory inputs. Phthalate, a known neurotoxic material, has been reported to be associated with attention deficits in children, impacting many infant neurobehaviors. The objective of this study was to investigate the potential effects of neonatal phthalate exposure on the development of auditory temporal processing. Neonatal Sprague-Dawley rats were randomly assigned into two groups: the phthalate group (n = 6) and the control group (n = 6). Phthalate was given once per day from postnatal day 8 (P8) to P28. Upon completion, at P28, the Auditory Brainstem Response (ABR) and Gap Prepulse Inhibition of Acoustic Startle response (GPIAS) at each gap duration (2, 5, 10, 20, 50 and 80 ms) were measured, and gap detection threshold (GDT) was calculated. These outcomes were compared between the two groups. Hearing thresholds by ABR showed no significant differences at all frequencies between the two groups. Regarding GPIAS, no significant difference was observed, except at a gap duration of 20 ms (p = 0.037). The mean GDT of the phthalate group (44.0 ms) was higher than that of the control group (20.0 ms), but without statistical significance (p = 0.065). Moreover, the phthalate group tended to demonstrate a more scattered distribution of GDT than the control group. Neonatal phthalate exposure may disrupt the development of auditory temporal processing in rats. Copyright © 2017 Elsevier B.V. All rights reserved.
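
    As a worked illustration of how GPIAS and a gap detection threshold can be summarized, the fragment below computes percent inhibition at each gap duration and reads off the shortest gap exceeding a criterion. The startle amplitudes and the 20% criterion are made-up values, not data from the study.

        import numpy as np

        gap_ms        = np.array([2, 5, 10, 20, 50, 80])
        startle_gap   = np.array([95, 90, 78, 60, 52, 50], float)   # with a gap prepulse (made up)
        startle_nogap = 100.0                                       # no-gap baseline (made up)

        inhibition = 100.0 * (1.0 - startle_gap / startle_nogap)    # % GPIAS at each gap duration

        criterion = 20.0                                            # assumed %-inhibition criterion
        detected  = gap_ms[inhibition >= criterion]
        gdt = detected.min() if detected.size else np.nan           # gap detection threshold
        print(dict(zip(gap_ms.tolist(), inhibition)), 'GDT =', gdt, 'ms')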

  9. Temporal and identity prediction in visual-auditory events: Electrophysiological evidence from stimulus omissions.

    Science.gov (United States)

    van Laarhoven, Thijs; Stekelenburg, Jeroen J; Vroomen, Jean

    2017-04-15

    A rare omission of a sound that is predictable by anticipatory visual information induces an early negative omission response (oN1) in the EEG during the period of silence where the sound was expected. It was previously suggested that the oN1 was primarily driven by the identity of the anticipated sound. Here, we examined the role of temporal prediction in conjunction with identity prediction of the anticipated sound in the evocation of the auditory oN1. With incongruent audiovisual stimuli (a video of a handclap that is consistently combined with the sound of a car horn) we demonstrate in Experiment 1 that a natural match in identity between the visual and auditory stimulus is not required for inducing the oN1, and that the perceptual system can adapt predictions to unnatural stimulus events. In Experiment 2 we varied either the auditory onset (relative to the visual onset) or the identity of the sound across trials in order to hamper temporal and identity predictions. Relative to the natural stimulus with correct auditory timing and matching audiovisual identity, the oN1 was abolished when either the timing or the identity of the sound could not be predicted reliably from the video. Our study demonstrates the flexibility of the perceptual system in predictive processing (Experiment 1) and also shows that precise predictions of timing and content are both essential elements for inducing an oN1 (Experiment 2). Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Differential sensory cortical involvement in auditory and visual sensorimotor temporal recalibration: Evidence from transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Aytemür, Ali; Almeida, Nathalia; Lee, Kwang-Hyuk

    2017-02-01

    Adaptation to delayed sensory feedback following an action produces a subjective time compression between the action and the feedback (temporal recalibration effect, TRE). TRE is important for sensory delay compensation to maintain a relationship between causally related events. It is unclear whether TRE is a sensory modality-specific phenomenon. In 3 experiments employing a sensorimotor synchronization task, we investigated this question using cathodal transcranial direct-current stimulation (tDCS). We found that cathodal tDCS over the visual cortex, and to a lesser extent over the auditory cortex, produced decreased visual TRE. However, neither auditory nor visual cortex tDCS produced any measurable effect on auditory TRE. Our study revealed the different nature of TRE in the auditory and visual domains. Visual-motor TRE, which is more variable than auditory TRE, is a sensory modality-specific phenomenon, modulated by the auditory cortex. The robustness of auditory-motor TRE, unaffected by tDCS, suggests the dominance of the auditory system in temporal processing, by providing a frame of reference in the realignment of sensorimotor timing signals. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds since it can enhance ... models of complex modulation processing in the human auditory system.

  12. Auditory-somatosensory temporal sensitivity improves when the somatosensory event is caused by voluntary body movement

    Directory of Open Access Journals (Sweden)

    Norimichi Kitagawa

    2016-12-01

    Full Text Available When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that the performance of temporal order judgment (TOJ) in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M=42.5 ms ±3.8 s.e.m) than that when their finger was passively stimulated (passive condition: M=66.8 ms ±6.3 s.e.m). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant’s finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst with regular intervals (predictable condition), the TOJ performance was not improved from that in the passive condition. These results suggest that the improvement in sensitivity to temporal disparity between somatosensory and auditory events caused by the voluntary action cannot be attributed to sensory-based prediction and kinesthetic cues. Rather, the prediction from the efference copy of the motor command would be crucial for improving the temporal sensitivity.
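
    The just noticeable difference used above is typically derived from a psychometric function fitted to the proportion of 'sound-first' responses across onset asynchronies. A minimal sketch of such a fit follows; the response proportions, the cumulative-Gaussian model, and the 75% criterion are common conventions assumed here, not the authors' exact analysis.

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.stats import norm

        # Illustrative TOJ data: negative SOA means the sound led the touch (made-up numbers)
        soa_ms        = np.array([-150, -100, -50, -20, 0, 20, 50, 100, 150], float)
        p_sound_first = np.array([0.97, 0.92, 0.78, 0.62, 0.50, 0.36, 0.20, 0.08, 0.04])

        def psychometric(soa, pss, sigma):
            # Probability of a 'sound-first' report as a cumulative Gaussian of SOA
            return norm.cdf(-(soa - pss) / sigma)

        (pss, sigma), _ = curve_fit(psychometric, soa_ms, p_sound_first, p0=(0.0, 50.0))
        jnd = sigma * norm.ppf(0.75)     # 75%-correct convention
        print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")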

  13. Infants Discriminate Voicing and Place of Articulation with Reduced Spectral and Temporal Modulation Cues

    Science.gov (United States)

    Cabrera, Laurianne; Lorenzi, Christian; Bertoncini, Josiane

    2015-01-01

    Purpose: This study assessed the role of spectro-temporal modulation cues in the discrimination of 2 phonetic contrasts (voicing and place) for young infants. Method: A visual-habituation procedure was used to assess the ability of French-learning 6-month-old infants with normal hearing to discriminate voiced versus unvoiced (/aba/-/apa/) and…

  14. Auditory discrimination of voice-onset time and its relationship with reading ability.

    Science.gov (United States)

    Arciuli, Joanne; Rankine, Tracey; Monaghan, Padraic

    2010-05-01

    The perception of voice-onset time (VOT) during dichotic listening provides unique insight regarding auditory discrimination processes and, as such, an opportunity to learn more about individual differences in reading ability. We analysed the responses elicited by four VOT conditions: short-long pairs (SL), where a syllable with a short VOT was presented to the left ear and a syllable with a long VOT was presented to the right ear, as well as long-short (LS), short-short (SS), and long-long (LL) pairs. Stimuli were presented in three attention conditions, where participants were instructed to attend to either the left or right ear, or received no instruction. By around 9.5 years of age children perform similarly to adults in terms of the size and relative magnitude of the right ear advantage (REA) elicited by each of the four VOT conditions. Overall, SL pairs elicited the largest REA and LS pairs elicited a left ear advantage (LEA), reflecting stimulus-driven bottom-up processes. However, children were less able to modulate their responses according to attention condition, reflecting a lack of top-down control. Effective direction of attention to one ear or the other was related to measures of reading accuracy and comprehension, indicating that reading skill is associated with top-down control of bottom-up perceptual processes.
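
    The size of an ear advantage in dichotic listening is commonly summarized with a laterality index over correct reports from each ear; a small sketch follows. The formula is a standard convention and the counts are invented, neither is taken from this paper.

        def laterality_index(right_correct, left_correct):
            """+100 = complete right-ear advantage, -100 = complete left-ear advantage, 0 = none."""
            total = right_correct + left_correct
            return 100.0 * (right_correct - left_correct) / total if total else 0.0

        # Illustrative counts for a short-long (SL) VOT condition
        print(laterality_index(right_correct=34, left_correct=22))   # about +21, i.e. an REA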

  15. Visual form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    OpenAIRE

    Rick Van Der Zwan; Anna Brooks; Duncan Blair; Coralia Machatch; Graeme Hacker

    2011-01-01

    Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ...

  16. The role of temporal structure in the investigation of sensory memory, auditory scene analysis, and speech perception: a healthy-aging perspective.

    Science.gov (United States)

    Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David

    2015-02-01

    Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Evidence for a neurophysiologic auditory deficit in children with benign epilepsy with centro-temporal spikes.

    Science.gov (United States)

    Liasis, A; Bamiou, D E; Boyd, S; Towell, A

    2006-07-01

    Benign focal epilepsy in childhood with centro-temporal spikes (BECTS) is one of the most common forms of epilepsy. Recent studies have questioned the benign nature of BECTS, as they have revealed neuropsychological deficits in many domains including language. The aim of this study was to investigate whether the epileptic discharges during the night have long-term effects on auditory processing, as reflected in electrophysiological measures, during the day, which could underlie the language deficits. In order to address these questions we recorded baseline electroencephalograms (EEG), sleep EEG and auditory event related potentials in 12 children with BECTS and in age- and gender-matched controls. In the children with BECTS, 5 had unilateral and 3 had bilateral spikes. In the 5 patients with unilateral spikes present during sleep, an asymmetry of the auditory event related component (P85-120) was observed contralateral to the side of epileptiform activity compared to the normal symmetrical vertex distribution that was noted in all controls and in the 3 children with bilateral spikes. In all patients the peak to peak amplitude of this event related potential component was statistically greater compared to the controls. Analysis of subtraction waveforms (deviant - standard) revealed no evidence of a mismatch negativity component in any of the children with BECTS. We propose that the abnormality of P85-120 and the absence of mismatch negativity during wake recordings in this group may arise in response to the long-term effects of spikes occurring during sleep, resulting in disruption of the evolution and maintenance of echoic memory traces. These results may indicate that patients with BECTS have abnormal processing of auditory information at a sensory level ipsilateral to the hemisphere evoking spikes during sleep.

  18. A system for the assessment and training of temporal-order discrimination

    Czech Academy of Sciences Publication Activity Database

    Mates, Jiří; von Steinbüchel, N.; Wittman, M.; Treutwein, B.

    2001-01-01

    Vol. 64, No. 2 (2001), pp. 125-131, ISSN 0169-2607. R&D Projects: GA ČR GA406/96/1314. Institutional research plan: CEZ:AV0Z5011922. Keywords: temporal-order judgement * training of temporal-order discrimination * computer-aided measurement. Subject RIV: ED - Physiology. Impact factor: 0.559, year: 2001

  19. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Science.gov (United States)

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.
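
    The reported r2 of 0.99 between Off-P50m peak latency and click-train ISI corresponds to a simple linear fit of the kind sketched below; the latency values here are invented for illustration, not the study's measurements.

        import numpy as np
        from scipy.stats import linregress

        isi_ms     = np.array([2.5, 5.0, 10.0, 20.0, 40.0])    # click-train inter-stimulus intervals
        latency_ms = np.array([52.0, 54.5, 59.0, 68.0, 88.0])  # illustrative Off-P50m peak latencies

        fit = linregress(isi_ms, latency_ms)
        print(f"slope = {fit.slope:.2f} ms per ms of ISI, r^2 = {fit.rvalue ** 2:.3f}")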

  20. Echoic memory: investigation of its temporal resolution by auditory offset cortical responses.

    Directory of Open Access Journals (Sweden)

    Makoto Nishihara

    Full Text Available Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms.

  1. Echoic Memory: Investigation of Its Temporal Resolution by Auditory Offset Cortical Responses

    Science.gov (United States)

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks for 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train, which was the longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r2 for the peak latency and ISI was as high as 0.99, which suggested that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latency of all pairs, except for that between 200 and 400 Hz, was significantly different, indicating the very high temporal resolution of sensory storage at approximately 5 ms. PMID:25170608

  2. An FMRI study of the neural systems involved in visually cued auditory top-down spatial and temporal attention.

    Directory of Open Access Journals (Sweden)

    Chunlin Li

    Full Text Available Top-down attention to spatial and temporal cues has been thoroughly studied in the visual domain. However, because the neural systems that are important for auditory top-down temporal attention (i.e., attention based on time interval cues) remain undefined, the differences in brain activity between directed attention to auditory spatial location (compared with time intervals) are unclear. Using fMRI (functional magnetic resonance imaging), we measured the activations caused by cue-target paradigms by inducing the visual cueing of attention to an auditory target within a spatial or temporal domain. Imaging results showed that the dorsal frontoparietal network (dFPN), which consists of the bilateral intraparietal sulcus and the frontal eye field, responded to spatial orienting of attention, but activity was absent in the bilateral frontal eye field (FEF) during temporal orienting of attention. Furthermore, the fMRI results indicated that activity in the right ventrolateral prefrontal cortex (VLPFC) was significantly stronger during spatial orienting of attention than during temporal orienting of attention, while the DLPFC showed no significant differences between the two processes. We conclude that the bilateral dFPN and the right VLPFC contribute to auditory spatial orienting of attention. Furthermore, specific activations related to temporal cognition were confirmed within the superior occipital gyrus, tegmentum, motor area, thalamus and putamen.

  3. Temporal correlation between auditory neurons and the hippocampal theta rhythm induced by novel stimulations in awake guinea pigs.

    Science.gov (United States)

    Liberman, Tamara; Velluti, Ricardo A; Pedemonte, Marisa

    2009-11-17

    The hippocampal theta rhythm is associated with the processing of sensory systems such as touch, smell, vision and hearing, as well as with motor activity, the modulation of autonomic processes such as cardiac rhythm, and learning and memory processes. The discovery of temporal correlation (phase locking) between the theta rhythm and both visual and auditory neuronal activity has led us to postulate the participation of such rhythm in the temporal processing of sensory information. In addition, changes in attention can modify both the theta rhythm and the auditory and visual sensory activity. The present report tested the hypothesis that the temporal correlation between auditory neuronal discharges in the inferior colliculus central nucleus (ICc) and the hippocampal theta rhythm could be enhanced by changes in sensory stimulation. We presented chronically implanted guinea pigs with auditory stimuli that varied over time, and recorded the auditory response during wakefulness. It was observed that the stimulation shifts were capable of producing the temporal phase correlations between the theta rhythm and the ICc unit firing, and they differed depending on the stimulus change performed. Such correlations disappeared approximately 6 s after the change presentation. Furthermore, the power of the hippocampal theta rhythm increased in half of the cases presented with a stimulation change. Based on these data, we propose that the degree of correlation between the unitary activity and the hippocampal theta rhythm varies with--and therefore may signal--stimulus novelty.
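
    Phase locking between unit firing and the hippocampal theta rhythm, as described above, is often quantified with a vector-strength measure of spike phases; a minimal sketch follows. The theta band limits, the Hilbert-phase estimate, and the toy data are assumptions for illustration, not the authors' analysis.

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        def spike_theta_locking(lfp, spike_idx, fs, band=(4.0, 10.0)):
            """Vector strength of spike times relative to theta phase (0 = no locking, 1 = perfect)."""
            b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
            phase = np.angle(hilbert(filtfilt(b, a, lfp)))
            return np.abs(np.mean(np.exp(1j * phase[np.asarray(spike_idx, int)])))

        fs = 1000
        t = np.arange(0, 10, 1 / fs)
        lfp = np.sin(2 * np.pi * 6 * t) + 0.3 * np.random.default_rng(0).standard_normal(t.size)
        spikes = np.where(np.sin(2 * np.pi * 6 * t) > 0.95)[0]   # toy spikes near theta peaks
        print(spike_theta_locking(lfp, spikes, fs))              # close to 1 for these toy data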

  4. Sporadic adult onset primary torsion dystonia is a genetic disorder by the temporal discrimination test.

    LENUS (Irish Health Repository)

    Kimmich, Okka

    2012-02-01

    Adult-onset primary torsion dystonia is an autosomal dominant disorder with markedly reduced penetrance; patients with sporadic adult-onset primary torsion dystonia are much more prevalent than familial. The temporal discrimination threshold is the shortest time interval at which two stimuli are detected to be asynchronous and has been shown to be abnormal in adult-onset primary torsion dystonia. The aim was to determine the frequency of abnormal temporal discrimination thresholds in patients with sporadic adult-onset primary torsion dystonia and their first-degree relatives. We hypothesized that abnormal temporal discrimination thresholds in first relatives would be compatible with an autosomal dominant endophenotype. Temporal discrimination thresholds were examined in 61 control subjects (39 subjects <50 years of age; 22 subjects >50 years of age), 32 patients with sporadic adult-onset primary torsion dystonia (cervical dystonia n = 30, spasmodic dysphonia n = 1 and Meige's syndrome n = 1) and 73 unaffected first-degree relatives (36 siblings, 36 offspring and one parent) using visual and tactile stimuli. Z-scores were calculated for all subjects; a Z > 2.5 was considered abnormal. Abnormal temporal discrimination thresholds were found in 1/61 (2%) control subjects, 27/32 (84%) patients with adult-onset primary torsion dystonia and 32/73 (44%) unaffected relatives [siblings (20/36; 56%), offspring (11/36; 31%) and one parent]. When two or more relatives were tested in any one family, 22 of 24 families had at least one first-degree relative with an abnormal temporal discrimination threshold. The frequency of abnormal temporal discrimination thresholds in first-degree relatives of patients with sporadic adult-onset primary torsion dystonia is compatible with an autosomal dominant disorder and supports the hypothesis that apparently sporadic adult-onset primary torsion dystonia is genetic in origin.
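
    The Z > 2.5 criterion used above amounts to comparing each threshold against the control mean and standard deviation; a small sketch is given below. The control values are invented for illustration, not the study's data.

        import numpy as np

        control_tdt = np.array([22, 25, 30, 28, 35, 27, 24, 31, 29, 26], float)  # ms, invented
        mu, sd = control_tdt.mean(), control_tdt.std(ddof=1)

        def is_abnormal(tdt_ms, z_cut=2.5):
            """Abnormal temporal discrimination threshold if its Z-score relative to controls exceeds z_cut."""
            return (tdt_ms - mu) / sd > z_cut

        print(is_abnormal(55.0), is_abnormal(30.0))   # True, False for these made-up numbers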

  5. Quantifying auditory temporal stability in a large database of recorded music.

    Directory of Open Access Journals (Sweden)

    Robert J Ellis

    Full Text Available "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy" in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training, exercise (e.g., jogging, or entertainment (e.g., continuous dance mixes. Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature", none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files. A publically accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.

  6. Navigated transcranial magnetic stimulation of the primary somatosensory cortex impairs perceptual processing of tactile temporal discrimination.

    Science.gov (United States)

    Hannula, Henri; Neuvonen, Tuomas; Savolainen, Petri; Tukiainen, Taru; Salonen, Oili; Carlson, Synnöve; Pertovaara, Antti

    2008-05-30

    Previous studies indicate that transcranial magnetic stimulation (TMS) with biphasic pulses applied approximately over the primary somatosensory cortex (S1) suppresses performance in vibrotactile temporal discrimination tasks; these previous results, however, do not allow separating perceptual influence from memory or decision-making. Moreover, earlier studies using external landmarks for directing biphasic TMS pulses to the cortex do not reveal whether the changes in vibrotactile task performance were due to action on S1 or an adjacent area. In the present study, we determined whether the S1 area representing a cutaneous test site is critical for perceptual processing of tactile temporal discrimination. Electrical test pulses were applied to the thenar skin of the hand and the subjects attempted to discriminate single from twin pulses. During discrimination task, monophasic TMS pulses or sham TMS pulses were directed anatomically accurately to the S1 area representing the thenar using magnetic resonance image-guided navigation. The subject's capacity to temporal discrimination was impaired with a decrease in the delay between the TMS pulse and the cutaneous test pulse from 50 to 0 ms. The result indicates that S1 area representing a cutaneous test site is involved in perceptual processing of tactile temporal discrimination.

  7. Discriminability limits in spatio-temporal stereo block matching.

    Science.gov (United States)

    Jain, Ankit K; Nguyen, Truong Q

    2014-05-01

    Disparity estimation is a fundamental task in stereo imaging and is a well-studied problem. Recently, methods have been adapted to the video domain where motion is used as a matching criterion to help disambiguate spatially similar candidates. In this paper, we analyze the validity of the underlying assumptions of spatio-temporal disparity estimation, and determine the extent to which motion aids the matching process. By analyzing the error signal for spatio-temporal block matching under the sum of squared differences criterion and treating motion as a stochastic process, we determine the probability of a false match as a function of image features, motion distribution, image noise, and number of frames in the spatio-temporal patch. This performance quantification provides insight into when spatio-temporal matching is most beneficial in terms of the scene and motion, and can be used as a guide to select parameters for stereo matching algorithms. We validate our results through simulation and experiments on stereo video.
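
    A brute-force version of spatio-temporal block matching under the sum-of-squared-differences criterion, as analyzed above, can be written compactly; the sketch below searches disparities along a scanline for a single pixel. The block size, frame count, and search range are arbitrary illustration values, not those of the paper.

        import numpy as np

        def ssd_disparity(left, right, row, col, block=5, frames=3, max_disp=32):
            """Spatio-temporal SSD block matching: the cost is summed over a block x block
            window and over several consecutive frames; returns the minimum-cost disparity.
            `left` and `right` are (T, H, W) grayscale videos."""
            h = block // 2
            t0 = min(frames, left.shape[0])
            patch_l = left[:t0, row - h:row + h + 1, col - h:col + h + 1].astype(float)
            costs = []
            for d in range(max_disp + 1):
                c = col - d
                if c - h < 0:
                    costs.append(np.inf)
                    continue
                patch_r = right[:t0, row - h:row + h + 1, c - h:c + h + 1].astype(float)
                costs.append(np.sum((patch_l - patch_r) ** 2))
            return int(np.argmin(costs))

        rng = np.random.default_rng(0)
        right_v = rng.random((3, 64, 96))
        left_v = np.roll(right_v, 7, axis=2)                     # true disparity of 7 pixels
        print(ssd_disparity(left_v, right_v, row=32, col=60))    # 7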

  8. Functional asymmetry in primary auditory cortex for processing musical sounds: temporal pattern analysis of fMRI time series.

    Science.gov (United States)

    Izumi, Shuji; Itoh, Kosuke; Matsuzawa, Hitoshi; Takahashi, Sugata; Kwee, Ingrid L; Nakada, Tsutomu

    2011-07-13

    Hemispheric differences in the temporal processing of musical sounds within the primary auditory cortex were investigated using functional magnetic resonance imaging (fMRI) time series analysis on a 3.0 T system in right-handed individuals who had no formal training in music. The two hemispheres exhibited a clear-cut asymmetry in the time pattern of fMRI signals. A large transient signal component was observed in the left primary auditory cortex immediately after the onset of musical sounds, while only sustained activation, without an initial transient component, was seen in the right primary auditory cortex. The observed difference was believed to reflect differential segmentation in primary auditory cortical sound processing. Although the left primary auditory cortex processed the entire 30-s musical sound stimulus as a single event, the right primary auditory cortex had low-level processing of sounds with multiple segmentations of shorter time scales. The study indicated that musical sounds are processed as 'sounds with contents', similar to how language is processed in the left primary auditory cortex.

  9. Opaque Selling: Static or Inter-Temporal Price Discrimination?

    OpenAIRE

    Courty, Pascal; Liu, Wenyu

    2013-01-01

    We study opaque selling in the hotel industry using data from Hotwire.com. An opaque room discloses only the star level and general location of the hotel at the time of booking. The exact identity of the hotel is disclosed after the booking is completed. Opaque rooms sell at a discount of 40 percent relative to regular rooms. The discount increases when hotels are more differentiated. This finding is consistent with static models of price discrimination. No support was found for predictions s...

  10. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    Science.gov (United States)

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-time dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
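
    The spike-time dependent plasticity the model relies on is commonly written as a pair-based additive rule; a minimal version is below. The time constants and amplitudes are generic textbook values, not the parameters of the published network.

        import numpy as np

        def stdp_dw(delta_t_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
            """Pair-based additive STDP; delta_t = t_post - t_pre.
            Pre-before-post (delta_t > 0) potentiates, post-before-pre depresses."""
            dt = np.asarray(delta_t_ms, float)
            return np.where(dt >= 0,
                            a_plus * np.exp(-dt / tau_plus),
                            -a_minus * np.exp(dt / tau_minus))

        print(stdp_dw([-30, -5, 5, 30]))   # depression for the first two, potentiation for the last two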

  11. Effects of deafness and cochlear implant use on temporal response characteristics in cat primary auditory cortex.

    Science.gov (United States)

    Fallon, James B; Shepherd, Robert K; Nayagam, David A X; Wise, Andrew K; Heffer, Leon F; Landry, Thomas G; Irvine, Dexter R F

    2014-09-01

    We have previously shown that neonatal deafness of 7-13 months duration leads to loss of cochleotopy in the primary auditory cortex (AI) that can be reversed by cochlear implant use. Here we describe the effects of a similar duration of deafness and cochlear implant use on temporal processing. Specifically, we compared the temporal resolution of neurons in AI of young adult normal-hearing cats that were acutely deafened and implanted immediately prior to recording with that in three groups of neonatally deafened cats. One group of neonatally deafened cats received no chronic stimulation. The other two groups received up to 8 months of either low- or high-rate (50 or 500 pulses per second per electrode, respectively) stimulation from a clinical cochlear implant, initiated at 10 weeks of age. Deafness of 7-13 months duration had no effect on the duration of post-onset response suppression, latency, latency jitter, or the stimulus repetition rate at which units responded maximally (best repetition rate), but resulted in a statistically significant reduction in the ability of units to respond to every stimulus in a train (maximum following rate). None of the temporal response characteristics of the low-rate group differed from those in acutely deafened controls. In contrast, high-rate stimulation had diverse effects: it resulted in decreased suppression duration, longer latency and greater jitter relative to all other groups, and an increase in best repetition rate and cut-off rate relative to acutely deafened controls. The minimal effects of moderate-duration deafness on temporal processing in the present study are in contrast to its previously-reported pronounced effects on cochleotopy. Much longer periods of deafness have been reported to result in significant changes in temporal processing, in accord with the fact that duration of deafness is a major factor influencing outcome in human cochlear implantees. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. A neural circuit transforming temporal periodicity information into a rate-based representation in the mammalian auditory system

    DEFF Research Database (Denmark)

    Dicke, Ulrike; Ewert, Stephan D.; Dau, Torsten

    2007-01-01

    Periodic amplitude modulations (AMs) of an acoustic stimulus are presumed to be encoded in temporal activity patterns of neurons in the cochlear nucleus. Physiological recordings indicate that this temporal AM code is transformed into a rate-based periodicity code along the ascending auditory pathway. ... accounts for the encoding of AM depth over a large dynamic range and for modulation frequency selective processing of complex sounds.

  13. Auditory Temporal Structure Processing in Dyslexia: Processing of Prosodic Phrase Boundaries Is Not Impaired in Children with Dyslexia

    Science.gov (United States)

    Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.

    2014-01-01

    Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…

  14. Auditory and Visual Modulation of Temporal Lobe Neurons in Voice-Sensitive and Association Cortices

    Science.gov (United States)

    Perrodin, Catherine; Kayser, Christoph; Logothetis, Nikos K.

    2014-01-01

    Effective interactions between conspecific individuals can depend upon the receiver forming a coherent multisensory representation of communication signals, such as merging voice and face content. Neuroimaging studies have identified face- or voice-sensitive areas (Belin et al., 2000; Petkov et al., 2008; Tsao et al., 2008), some of which have been proposed as candidate regions for face and voice integration (von Kriegstein et al., 2005). However, it was unclear how multisensory influences occur at the neuronal level within voice- or face-sensitive regions, especially compared with classically defined multisensory regions in temporal association cortex (Stein and Stanford, 2008). Here, we characterize auditory (voice) and visual (face) influences on neuronal responses in a right-hemisphere voice-sensitive region in the anterior supratemporal plane (STP) of Rhesus macaques. These results were compared with those in the neighboring superior temporal sulcus (STS). Within the STP, our results show auditory sensitivity to several vocal features, which was not evident in STS units. We also newly identify a functionally distinct neuronal subpopulation in the STP that appears to carry the area's sensitivity to voice identity related features. Audiovisual interactions were prominent in both the STP and STS. However, visual influences modulated the responses of STS neurons with greater specificity and were more often associated with congruent voice-face stimulus pairings than STP neurons. Together, the results reveal the neuronal processes subserving voice-sensitive fMRI activity patterns in primates, generate hypotheses for testing in the visual modality, and clarify the position of voice-sensitive areas within the unisensory and multisensory processing hierarchies. PMID:24523543

  15. Population responses in primary auditory cortex simultaneously represent the temporal envelope and periodicity features in natural speech.

    Science.gov (United States)

    Abrams, Daniel A; Nicol, Trent; White-Schwoch, Travis; Zecker, Steven; Kraus, Nina

    2017-05-01

    Speech perception relies on a listener's ability to simultaneously resolve multiple temporal features in the speech signal. Little is known regarding neural mechanisms that enable the simultaneous coding of concurrent temporal features in speech. Here we show that two categories of temporal features in speech, the low-frequency speech envelope and periodicity cues, are processed by distinct neural mechanisms within the same population of cortical neurons. We measured population activity in primary auditory cortex of anesthetized guinea pig in response to three variants of a naturally produced sentence. Results show that the envelope of population responses closely tracks the speech envelope, and this cortical activity more closely reflects wider bandwidths of the speech envelope compared to narrow bands. Additionally, neuronal populations represent the fundamental frequency of speech robustly with phase-locked responses. Importantly, these two temporal features of speech are simultaneously observed within neuronal ensembles in auditory cortex in response to clear, conversation, and compressed speech exemplars. Results show that auditory cortical neurons are adept at simultaneously resolving multiple temporal features in extended speech sentences using discrete coding mechanisms. Copyright © 2017 Elsevier B.V. All rights reserved.
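    The two temporal features described here, the low-frequency speech envelope and periodicity at the fundamental frequency, can be illustrated on any speech-like waveform. Below is a minimal sketch under stated assumptions: a synthetic amplitude-modulated harmonic tone stands in for speech, and the filter cutoffs and the 100 Hz "F0" are illustrative choices, not parameters from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000                          # sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)

# Stand-in for speech: a 100 Hz harmonic complex, slowly amplitude modulated
f0 = 100.0
carrier = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 6))
envelope_true = 0.5 * (1 + np.sin(2 * np.pi * 3 * t))     # ~3 Hz "syllabic" envelope
speech_like = envelope_true * carrier

# 1) Low-frequency envelope: magnitude of the analytic signal, low-pass filtered below 10 Hz
b, a = butter(4, 10 / (fs / 2), btype="low")
envelope_est = filtfilt(b, a, np.abs(hilbert(speech_like)))

# 2) Periodicity cue: band-pass around F0 and find the dominant frequency in that band
b2, a2 = butter(4, [80 / (fs / 2), 120 / (fs / 2)], btype="band")
f0_band = filtfilt(b2, a2, speech_like)
spectrum = np.abs(np.fft.rfft(f0_band))
freqs = np.fft.rfftfreq(len(f0_band), 1 / fs)

print("Envelope correlation:", np.corrcoef(envelope_true, envelope_est)[0, 1].round(3))
print("Dominant periodicity (Hz):", freqs[np.argmax(spectrum)].round(1))
```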

  16. Decreased middle temporal gyrus connectivity in the language network in schizophrenia patients with auditory verbal hallucinations.

    Science.gov (United States)

    Zhang, Linchuan; Li, Baojuan; Wang, Huaning; Li, Liang; Liao, Qimei; Liu, Yang; Bao, Xianghong; Liu, Wenlei; Yin, Hong; Lu, Hongbing; Tan, Qingrong

    2017-07-13

    Among the most common symptoms of schizophrenia, the long-term persistence of obstinate auditory verbal hallucinations (AVHs) causes great mental distress to patients. Neuroimaging studies of schizophrenia have indicated that AVHs are associated with altered functional and structural connectivity within the language network. However, effective connectivity, which reflects directed information flow within this network and is of great importance for understanding the neural mechanisms of the disorder, remains largely unknown. In this study, we utilized stochastic dynamic causal modeling (DCM) to investigate directed connections within the language network in schizophrenia patients with and without AVHs. Thirty-six patients with schizophrenia (18 with AVHs and 18 without AVHs) and 37 healthy controls participated in the current resting-state functional magnetic resonance imaging (fMRI) study. The results showed that the connection from the left inferior frontal gyrus (LIFG) to the left middle temporal gyrus (LMTG) was significantly decreased in patients with AVHs compared to those without AVHs. Meanwhile, the effective connection from the left inferior parietal lobule (LIPL) to the LMTG was significantly decreased compared to the healthy controls. Our findings suggest an aberrant pattern of causal interactions within the language network in patients with AVHs, indicating that hypoconnectivity or a disrupted connection from frontal to temporal speech areas might be critical to the pathological basis of AVHs. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. The effect of delayed auditory feedback on activity in the temporal lobe while speaking: a positron emission tomography study.

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard Js; Scott, Sophie K

    2010-04-01

    Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Positron emission tomography was used to image regional cerebral blood flow changes, an index of neural activity, and to assess the influence of increasing amounts of delay. Delayed auditory feedback led to increased activation in the bilateral superior temporal lobes, extending into posterior-medial auditory areas. Similar peaks in the temporal lobe were sensitive to increases in the amount of delay. A single peak in the temporal parietal junction responded to the amount of delay but not to the presence of a delay (relative to no delay). This study permitted distinctions to be made between the neural response to hearing one's voice at a delay and the neural activity that correlates with this delay. Notably, all the peaks showed some influence of the amount of delay. This result confirms a role for the posterior, sensorimotor "how" system in the production of speech under conditions of delayed auditory feedback.

  18. High frequency repetitive sensory stimulation improves temporal discrimination in healthy subjects.

    Science.gov (United States)

    Erro, Roberto; Rocchi, Lorenzo; Antelmi, Elena; Palladino, Raffaele; Tinazzi, Michele; Rothwell, John; Bhatia, Kailash P

    2016-01-01

    High frequency electrical stimulation of an area of skin on a finger improves two-point spatial discrimination in the stimulated area, likely depending on plastic changes in the somatosensory cortex. However, it is unknown whether improvement also applies to temporal discrimination. Twelve young and ten elderly volunteers underwent the stimulation protocol applied to the palmar skin of the right index finger. Somatosensory temporal discrimination threshold (STDT) was evaluated before and immediately after stimulation as well as 2.5 h and 24 h later. There was a significant reduction in the somatosensory temporal discrimination threshold only on the stimulated finger. The effect was reversible, with STDT returning to the baseline values within 24 h, and was smaller in the elderly than in the young participants. High frequency stimulation of the skin focally improves temporal discrimination in the area of stimulation. Given previous suggestions that the perceptual effects rely on plastic changes in the somatosensory cortex, our results are consistent with the idea that the timing of sensory stimuli is, at least partially, encoded in the primary somatosensory cortex. Such a protocol could potentially be used as a therapeutic intervention to ameliorate physiological decline in the elderly or in other disorders of sensorimotor integration. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  19. Measurement & Analysis of the Temporal Discrimination Threshold Applied to Cervical Dystonia.

    Science.gov (United States)

    Beck, Rebecca B; McGovern, Eavan M; Butler, John S; Birsanu, Dorina; Quinlivan, Brendan; Beiser, Ines; Narasimham, Shruti; O'Riordan, Sean; Hutchinson, Michael; Reilly, Richard B

    2018-01-27

    The temporal discrimination threshold (TDT) is the shortest time interval at which an observer can discriminate two sequential stimuli as being asynchronous (typically 30-50 ms). It has been shown to be abnormal (prolonged) in neurological disorders, including cervical dystonia, a phenotype of adult onset idiopathic isolated focal dystonia. The TDT is a quantitative measure of the ability to perceive rapid changes in the environment and is considered indicative of the behavior of the visual neurons in the superior colliculus, a key node in covert attentional orienting. This article sets out methods for measuring the TDT (including two hardware options and two modes of stimuli presentation). We also explore two approaches of data analysis and TDT calculation. The application of the assessment of temporal discrimination to the understanding of the pathogenesis of cervical dystonia and adult onset idiopathic isolated focal dystonia is also discussed.
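    TDTs of this kind are commonly estimated with an adaptive procedure that adjusts the inter-stimulus interval until the observer is at threshold. The sketch below is a generic illustrative staircase run against a simulated observer, not the specific hardware or analysis options described in the article; the 5 ms step size, the starting interval, and the simulated "true" threshold of 35 ms are assumptions.

```python
import random

def simulated_observer(interval_ms, true_tdt_ms=35.0, noise_ms=5.0):
    """Reports 'asynchronous' when the interval exceeds a noisy internal threshold."""
    return interval_ms > random.gauss(true_tdt_ms, noise_ms)

def staircase_tdt(start_ms=0.0, step_ms=5.0, n_reversals=8):
    """Simple up/down staircase converging on the temporal discrimination threshold."""
    interval, direction, reversals, history = start_ms, +1, [], []
    while len(reversals) < n_reversals:
        asynchronous = simulated_observer(interval)
        new_direction = -1 if asynchronous else +1   # step down after 'asynchronous', up otherwise
        if new_direction != direction and history:
            reversals.append(interval)               # record each change of direction
        direction = new_direction
        history.append(interval)
        interval = max(0.0, interval + direction * step_ms)
    return sum(reversals[-6:]) / len(reversals[-6:]) # mean of the last reversals ~ TDT

random.seed(1)
print(f"Estimated TDT: {staircase_tdt():.1f} ms")
```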

  20. Auditory discrimination predicts linguistic outcome in Italian infants with and without familial risk for language learning impairment.

    Science.gov (United States)

    Cantiani, Chiara; Riva, Valentina; Piazza, Caterina; Bettoni, Roberta; Molteni, Massimo; Choudhury, Naseem; Marino, Cecilia; Benasich, April A

    2016-08-01

    Infants' ability to discriminate between auditory stimuli presented in rapid succession and differing in fundamental frequency (Rapid Auditory Processing [RAP] abilities) has been shown to be anomalous in infants at familial risk for Language Learning Impairment (LLI) and to predict later language outcomes. This study represents the first attempt to investigate RAP in Italian infants at risk for LLI (FH+), examining two critical acoustic features: frequency and duration, both embedded in a rapidly-presented acoustic environment. RAP skills of 24 FH+ and 32 control (FH-) Italian 6-month-old infants were characterized via EEG/ERP using a multi-feature oddball paradigm. Outcome measures of expressive vocabulary were collected at 20 months. Group differences favoring FH- infants were identified: in FH+ infants, the latency of the N2* peak was delayed and the mean amplitude of the positive mismatch response was reduced, primarily for frequency discrimination and within the right hemisphere. Moreover, both EEG measures were correlated with language scores at 20 months. Results indicate that RAP abilities are atypical in Italian infants with a first-degree relative affected by LLI and that this impacts later linguistic skills. These findings provide a compelling cross-linguistic comparison with previous research on American infants, supporting the biological unity hypothesis of LLI. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.

    Science.gov (United States)

    Necciari, Thibaud; Laback, Bernhard; Savel, Sophie; Ystad, Sølvi; Balazs, Peter; Meunier, Sabine; Kronland-Martinet, Richard

    2016-01-01

    Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using maximally
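    The maskers and targets described here are Gaussian-windowed sinusoids, i.e. Gabor-like tones that are maximally compact in the time-frequency plane. Below is a minimal sketch of how such a stimulus can be synthesized; the 4 kHz carrier, the window scaling, and the level calibration are illustrative assumptions, not the exact parameters of the study.

```python
import numpy as np

def gaussian_tone(fc_hz=4000.0, duration_s=0.010, fs=44100, level_db=60.0):
    """Gaussian-windowed sinusoid: compact in both time and frequency (Gabor-like)."""
    t = np.arange(int(duration_s * fs)) / fs
    t0 = duration_s / 2.0
    sigma = duration_s / 6.0                       # window effectively contained in the 10 ms slot
    window = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    tone = window * np.sin(2 * np.pi * fc_hz * t)
    rms = np.sqrt(np.mean(tone ** 2))
    target_rms = 20e-6 * 10 ** (level_db / 20.0)   # re 20 uPa; calibration is arbitrary here
    return tone * (target_rms / rms)

stim = gaussian_tone()
print(f"{len(stim)} samples, peak amplitude {np.max(np.abs(stim)):.2e}")
```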

  2. Auditory Time-Frequency Masking for Spectrally and Temporally Maximally-Compact Stimuli.

    Directory of Open Access Journals (Sweden)

    Thibaud Necciari

    Full Text Available Many audio applications perform perception-based time-frequency (TF) analysis by decomposing sounds into a set of functions with good TF localization (i.e. with a small essential support in the TF domain) using TF transforms and applying psychoacoustic models of auditory masking to the transform coefficients. To accurately predict masking interactions between coefficients, the TF properties of the model should match those of the transform. This involves having masking data for stimuli with good TF localization. However, little is known about TF masking for mathematically well-localized signals. Most existing masking studies used stimuli that are broad in time and/or frequency and few studies involved TF conditions. Consequently, the present study had two goals. The first was to collect TF masking data for well-localized stimuli in humans. Masker and target were 10-ms Gaussian-shaped sinusoids with a bandwidth of approximately one critical band. The overall pattern of results is qualitatively similar to existing data for long maskers. To facilitate implementation in audio processing algorithms, a dataset provides the measured TF masking function. The second goal was to assess the potential effect of auditory efferents on TF masking using a modeling approach. The temporal window model of masking was used to predict present and existing data in two configurations: (1) with standard model parameters (i.e. without efferents), (2) with cochlear gain reduction to simulate the activation of efferents. The ability of the model to predict the present data was quite good with the standard configuration but highly degraded with gain reduction. Conversely, the ability of the model to predict existing data for long maskers was better with than without gain reduction. Overall, the model predictions suggest that TF masking can be affected by efferent (or other) effects that reduce cochlear gain. Such effects were avoided in the experiment of this study by using

  3. An auditory illusion of infinite tempo change based on multiple temporal levels.

    Directory of Open Access Journals (Sweden)

    Guy Madison

    Full Text Available Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e. isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed for facilitating temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.
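    A pattern of this kind resembles a Risset rhythm: several isochronous pulse streams at octave-related rates are mixed, and each stream's loudness rises and falls as its rate drifts, so a stream fading out at the fast end is replaced by one fading in at the slow end. The sketch below is a generic illustration of that general idea, not the specific algorithm used in the article; the base rate, the number of levels, the loudness profile, and the click synthesis are all assumptions.

```python
import numpy as np

fs = 44100
duration = 8.0              # seconds of the (seemingly endless) accelerating texture
n_levels = 4                # simultaneous temporal levels, one octave apart
cycle = 4.0                 # seconds for every level to drift up by one octave

t = np.arange(int(duration * fs)) / fs
audio = np.zeros_like(t)

def click(length=0.005):
    """Short decaying noise burst used as a pulse."""
    n = int(length * fs)
    return np.random.randn(n) * np.exp(-np.linspace(0, 8, n))

for level in range(n_levels):
    # log2(rate) drifts linearly and wraps around, so the texture repeats seamlessly.
    log_rate = (level + t / cycle) % n_levels            # octaves above the base rate
    rate = 0.5 * 2.0 ** log_rate                         # pulses per second (base 0.5 Hz)
    # Loudness profile: loudest mid-span, silent at the extremes where levels wrap.
    loudness = np.sin(np.pi * log_rate / n_levels) ** 2
    # Place pulses where the accumulated phase crosses an integer number of cycles.
    phase = np.cumsum(rate) / fs
    pulse_samples = np.where(np.diff(np.floor(phase)) > 0)[0]
    c = click()
    for s in pulse_samples:
        end = min(s + len(c), len(audio))
        audio[s:end] += loudness[s] * c[: end - s]

audio /= np.max(np.abs(audio))      # normalize; write to a WAV file to listen
```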

  4. A Headset Method for Measuring the Visual Temporal Discrimination Threshold in Cervical Dystonia

    Directory of Open Access Journals (Sweden)

    Anna Molloy

    2014-07-01

    Full Text Available Background: The visual temporal discrimination threshold (TDT) is the shortest time interval at which one can determine two stimuli to be asynchronous and meets criteria for a valid endophenotype in adult-onset idiopathic focal dystonia, a poorly penetrant disorder. Temporal discrimination is assessed in the hospital laboratory; in unaffected relatives of multiplex adult-onset dystonia patients distance from the hospital is a barrier to data acquisition. We devised a portable headset method for visual temporal discrimination determination and our aim was to validate this portable tool against the traditional laboratory-based method in a group of patients and in a large cohort of healthy controls. Methods: Visual TDTs were examined in two groups: (1) in 96 healthy control participants divided by age and gender, and (2) in 33 cervical dystonia patients, using two methods of data acquisition, the traditional table-top laboratory-based system, and the novel portable headset method. The order of assessment was randomized in the control group. The results obtained by each technique were compared. Results: Visual temporal discrimination in healthy control participants demonstrated similar age and gender effects by the headset method as found by the table-top examination. There were no significant differences between visual TDTs obtained using the two methods, both for the control participants and for the cervical dystonia patients. Bland–Altman testing showed good concordance between the two methods in both patients and in controls. Discussion: The portable headset device is a reliable and accurate method for visual temporal discrimination testing for use outside the laboratory, and will facilitate increased TDT data collection outside of the hospital setting. This is of particular importance in multiplex families where data collection in all available members of the pedigree is important for exome sequencing studies.
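    The method comparison reported above rests on Bland–Altman analysis, which plots the difference between paired measurements against their mean and checks that most differences lie within the bias ± 1.96 SD limits of agreement. Below is a minimal sketch on simulated paired TDT values; the data are synthetic and the bias and noise levels are assumptions, not the study's results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic paired visual TDTs (ms) from two methods for 33 participants
tabletop = rng.normal(40, 8, 33)
headset = tabletop + rng.normal(0.5, 3, 33)     # small bias plus measurement noise

diff = headset - tabletop
mean_pair = (headset + tabletop) / 2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                   # 95% limits of agreement

print(f"Bias: {bias:.2f} ms")
print(f"Limits of agreement: [{bias - loa:.2f}, {bias + loa:.2f}] ms")
within = np.mean((diff > bias - loa) & (diff < bias + loa))
print(f"Proportion of pairs within limits: {within:.2%}")
# A Bland-Altman plot would show `diff` against `mean_pair` with the three horizontal lines above.
```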

  5. Effects of damage to auditory cortex on the discrimination of speech sounds by rats

    Czech Academy of Sciences Publication Activity Database

    Floody, O. R.; Ouda, Ladislav; Porter, B. A.; Kilgard, M. P.

    2010-01-01

    Vol. 101, No. 2 (2010), pp. 260-268, ISSN 0031-9384. R&D Projects: GA ČR GA309/07/1336. Institutional research plan: CEZ:AV0Z50390703. Keywords: auditory cortex * brain lesions * prepulse inhibition. Subject RIV: FH - Neurology. Impact factor: 2.891, year: 2010

  6. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    OpenAIRE

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). The...

  7. Auditory feature perception and auditory hallucinatory experiences in schizophrenia spectrum disorder.

    Science.gov (United States)

    Schnakenberg Martin, Ashley M; Bartolomeo, Lisa; Howell, Josselyn; Hetrick, William P; Bolbecker, Amanda R; Breier, Alan; Kidd, Gary; O'Donnell, Brian F

    2017-09-21

    Schizophrenia spectrum disorder (SZ) is associated with deficits in auditory perception as well as auditory verbal hallucinations (AVH). However, the relationship between auditory feature perception and auditory verbal hallucinations (AVH), one of the most commonly occurring symptoms in psychosis, has not been well characterized. This study evaluated perception of a broad range of auditory features in SZ and determined whether current AVHs relate to auditory feature perception. Auditory perception, including frequency, intensity, duration, pulse-train and temporal order discrimination, as well as an embedded tone task, was assessed in both AVH (n = 20) and non-AVH (n = 24) SZ individuals and in healthy controls (n = 29) with the Test of Basic Auditory Capabilities (TBAC). The Hamilton Program for Schizophrenia Voices Questionnaire (HPSVQ) was used to assess the experience of auditory hallucinations in patients with SZ. Findings suggest that compared to controls, the SZ group had greater deficits on an array of auditory features, with non-AVH SZ individuals showing the most severe degree of abnormality. IQ and measures of cognitive processing were positively associated with performance on the TBAC for all SZ individuals, but not with the HPSVQ scores. These findings indicate that persons with SZ demonstrate impaired auditory perception for a broad range of features. It does not appear that impaired auditory perception is associated with recent auditory verbal hallucinations, but instead associated with the degree of intellectual impairment in SZ.

  8. The role of visual cortex acetylcholine in learning to discriminate temporally modulated visual stimuli

    Directory of Open Access Journals (Sweden)

    Victor H Minces

    2013-03-01

    Full Text Available Cholinergic neurons in the basal forebrain innervate discrete regions of the cortical mantle, bestowing the cholinergic system with the potential to dynamically modulate sub-regions of the cortex according to behavioral demands. Cortical cholinergic activity has been shown to facilitate learning and modulate attention. Experiments addressing these issues have primarily focused on widespread cholinergic depletions, extending to areas involved in general cognitive processes and sleep cycle regulation, making a definitive interpretation of the behavioral role of cholinergic projections difficult. Furthermore, a review of the electrophysiological literature suggests that cholinergic modulation is particularly important in representing the fine temporal details of stimuli, an issue rarely addressed in behavioral experimentation. The goal of this work is to understand the role of cholinergic projections, specific to the sensory cortex, in learning to discriminate fine differences in the temporal structure of stimuli. A novel visual Go/No-Go task was developed to assess the ability of rats to learn and discriminate fine differences in the temporal structure of visual stimuli (lights flashing at various frequencies). The cholinergic contribution to this task was examined by selectively eliminating acetylcholine projections to visual cortex (using 192 IgG-saporin), either before or after discrimination training. We find that in the face of compromised cholinergic input to the visual cortex, the rats' ability to learn to perform fine discriminations is impaired, whereas their ability to perform previously learned discriminations remains unaffected. These results suggest that acetylcholine serves the role of facilitating plastic changes in the sensory cortices that are needed for an animal to refine its sensitivity to the temporal characteristics of relevant stimuli.

  9. Effects of Hand Proximity and Movement Direction in Spatial and Temporal Gap Discrimination

    Science.gov (United States)

    Wiemers, Michael; Fischer, Martin H.

    2016-01-01

    Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving toward compared to away from the stimulus (direction effect). The “modulated visual pathways” hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli et al., 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.’s (2012) result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space. PMID:28018268

  10. Effects of hand proximity and movement direction in spatial and temporal gap discrimination

    Directory of Open Access Journals (Sweden)

    Michael Wiemers

    2016-12-01

    Full Text Available Previous research on the interplay between static manual postures and visual attention revealed enhanced visual selection near the hands (near-hand effect). During active movements there is also superior visual performance when moving towards compared to away from the stimulus (direction effect). The modulated visual pathways hypothesis argues that differential involvement of magno- and parvocellular visual processing streams causes the near-hand effect. The key finding supporting this hypothesis is an increase in temporal and a reduction in spatial processing in near-hand space (Gozli, West, & Pratt, 2012). Since this hypothesis has, so far, only been tested with static hand postures, we provide a conceptual replication of Gozli et al.'s result with moving hands, thus also probing the generality of the direction effect. Participants performed temporal or spatial gap discriminations while their right hand was moving below the display. In contrast to Gozli et al. (2012), temporal gap discrimination was superior at intermediate and not near hand proximity. In spatial gap discrimination, a direction effect without hand proximity effect suggests that pragmatic attentional maps overshadowed temporal/spatial processing biases for far/near-hand space.

  11. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: an ERP study

    NARCIS (Netherlands)

    van Zuijen, T.L.; Plakas, A.; Maassen, B.A.M.; Been, P.; Maurits, N.M.; Krikhaar, E.; van Driel, J.; van der Leij, A.

    2012-01-01

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or

  12. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency : An ERP study

    NARCIS (Netherlands)

    van Zuijen, Titia L.; Plakas, Anna; Maassen, Ben A. M.; Been, Pieter; Maurits, Natasha M.; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-01-01

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or

  13. Temporal auditory processing at 17 months of age is associated with preliterate language comprehension and later word reading fluency: An ERP study

    NARCIS (Netherlands)

    Van Zuijen, Titia L.; Plakas, Anna; Maassen, Ben A M; Been, Pieter; Maurits, Natasha M.; Krikhaar, Evelien; van Driel, Joram; van der Leij, Aryan

    2012-01-01

    Dyslexia is heritable and associated with auditory processing deficits. We investigate whether temporal auditory processing is compromised in young children at risk for dyslexia and whether it is associated with later language and reading skills. We recorded EEG from 17-month-old children with or

  14. The relation between auditory-nerve temporal responses and perceptual rate integration in cochlear implants.

    Science.gov (United States)

    Hughes, Michelle L; Baudhuin, Jacquelyn L; Goehring, Jenny L

    2014-10-01

    The purpose of this study was to examine auditory-nerve temporal response properties and their relation to psychophysical threshold for electrical pulse trains of varying rates ("rate integration"). The primary hypothesis was that better rate integration (steeper slope) would be correlated with smaller decrements in ECAP amplitude as a function of stimulation rate (shallower slope of the amplitude-rate function), reflecting a larger percentage of the neural population contributing more synchronously to each pulse in the train. Data were obtained for 26 ears in 23 cochlear-implant recipients. Electrically evoked compound action potential (ECAP) amplitudes were measured in response to each of 21 pulses in a pulse train for the following rates: 900, 1200, 1800, 2400, and 3500 pps. Psychophysical thresholds were obtained using a 3-interval, forced-choice adaptive procedure for 300-ms pulse trains of the same rates as used for the ECAP measures, which formed the rate-integration function. For each electrode, the slope of the psychophysical rate-integration function was compared to the following ECAP measures: (1) slope of the function comparing average normalized ECAP amplitude across pulses versus stimulation rate ("adaptation"), (2) the rate that produced the maximum alternation depth across the pulse train, and (3) rate at which the alternating pattern ceased (stochastic rate). Results showed no significant relations between the slope of the rate-integration function and any of the ECAP measures when data were collapsed across subjects. However, group data showed that both threshold and average ECAP amplitude decreased with increased stimulus rate, and within-subject analyses showed significant positive correlations between psychophysical thresholds and mean ECAP response amplitudes across the pulse train. These data suggest that ECAP temporal response patterns are complex and further study is required to better understand the relative contributions of adaptation
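    The central comparison here is between two per-electrode slopes: psychophysical threshold versus stimulation rate (rate integration) and normalized ECAP amplitude versus rate (adaptation), which are then correlated across electrodes. Below is a minimal sketch of how such slopes and their correlation could be computed on synthetic per-electrode data; the log-rate linearization, the values, and the variable names are illustrative assumptions rather than the study's analysis.

```python
import numpy as np
from scipy import stats

rates = np.array([900, 1200, 1800, 2400, 3500])     # pulses per second

rng = np.random.default_rng(7)
n_electrodes = 26

slopes_threshold, slopes_ecap = [], []
for _ in range(n_electrodes):
    # Synthetic data: thresholds fall with rate; normalized ECAP amplitude also falls with rate.
    thresholds = 50 - 0.004 * rates + rng.normal(0, 1.5, rates.size)          # arbitrary units
    ecap_amplitude = 1.0 - 0.0001 * rates + rng.normal(0, 0.03, rates.size)   # normalized

    # Slope of each function versus log2(rate), one simple way to linearize rate effects
    slopes_threshold.append(stats.linregress(np.log2(rates), thresholds).slope)
    slopes_ecap.append(stats.linregress(np.log2(rates), ecap_amplitude).slope)

r, p = stats.pearsonr(slopes_threshold, slopes_ecap)
print(f"Correlation between rate-integration and ECAP adaptation slopes: r={r:.2f}, p={p:.3f}")
```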

  15. Spectral and Temporal Acoustic Features Modulate Response Irregularities within Primary Auditory Cortex Columns.

    Directory of Open Access Journals (Sweden)

    Andres Carrasco

    Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise burst and frequency-modulated sweeps), and ecologically relevant (con-specific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.

  16. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    by comparing predictions from models based on the signal-to-noise envelope power ratio, SNRenv, and the modulation transfer function, MTF. The models were evaluated in conditions of noisy speech (1) subjected to reverberation, (2) distorted by phase jitter, or (3) processed by noise reduction via spectral...... with a measure of across (audio) frequency variability at the output of the auditory preprocessing. A complex spectro-temporal modulation filterbank might therefore not be required for speech intelligibility prediction....

  17. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory

    OpenAIRE

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4–8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from...

  18. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex, to visual and/or acoustic stimulations. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483

  19. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

    Full Text Available Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex, to visual and/or acoustic stimulations. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.

  20. The Role of Inhibition in a Computational Model of an Auditory Cortical Neuron during the Encoding of Temporal Information

    Science.gov (United States)

    Bendor, Daniel

    2015-01-01

    In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast to this, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex. PMID:25879843
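    The model's core claim, stimulus-locked firing when excitation is followed by strong delayed inhibition versus a poorly synchronized response when inhibition is coincident and balanced, can be illustrated with a toy current-based integrate-and-fire neuron. This is a generic sketch of the idea, not the published model; the time constants, synaptic weights, delays, and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lif_response(inh_delay_ms, inh_gain, stim_rate_hz=20, dur_s=2.0, dt=1e-4, noise_sd=5.0):
    """Current-based LIF neuron driven by periodic excitation plus (possibly delayed) inhibition."""
    n = int(dur_s / dt)
    tau_m, tau_syn, v_th = 0.010, 0.005, 1.0
    pulses = np.zeros(n)
    pulses[:: int(1.0 / stim_rate_hz / dt)] = 1.0          # one excitatory event per stimulus
    kernel = np.exp(-np.arange(0.0, 5 * tau_syn, dt) / tau_syn)
    g_exc = np.convolve(pulses, kernel)[:n]
    delay = int(inh_delay_ms * 1e-3 / dt)
    g_inh = inh_gain * np.roll(g_exc, delay)
    g_inh[:delay] = 0.0

    v, spikes = 0.0, []
    for i in range(n):
        v += (-v + 8.0 * (g_exc[i] - g_inh[i])) * (dt / tau_m) + noise_sd * np.sqrt(dt) * rng.normal()
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0
    spikes = np.array(spikes)
    # Vector strength: 1 = perfectly stimulus-locked firing, near 0 = non-synchronized firing
    vs = np.abs(np.mean(np.exp(2j * np.pi * spikes * stim_rate_hz))) if spikes.size else 0.0
    return spikes.size, round(float(vs), 2)

# Strong, delayed inhibition -> vigorous, stimulus-locked spiking (temporal representation)
print("strong delayed inhibition  (count, vector strength):", lif_response(inh_delay_ms=5.0, inh_gain=1.5))
# Coincident, nearly balanced inhibition -> sparse, weakly synchronized firing (rate-like response)
print("coincident balanced inhib. (count, vector strength):", lif_response(inh_delay_ms=0.0, inh_gain=0.9))
```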

  1. Auditory map reorganization and pitch discrimination in adult rats chronically exposed to low-level ambient noise

    Science.gov (United States)

    Zheng, Weimin

    2012-01-01

    Behavioral adaption to a changing environment is critical for an animal's survival. How well the brain can modify its functional properties based on experience essentially defines the limits of behavioral adaptation. In adult animals the extent to which experience shapes brain function has not been fully explored. Moreover, the perceptual consequences of experience-induced changes in the brains of adults remain unknown. Here we show that the tonotopic map in the primary auditory cortex of adult rats living with low-level ambient noise underwent a dramatic reorganization. Behaviorally, chronic noise-exposure impaired fine, but not coarse pitch discrimination. When tested in a noisy environment, the noise-exposed rats performed as well as in a quiet environment whereas the control rats performed poorly. This suggests that noise-exposed animals had adapted to living in a noisy environment. Behavioral pattern analyses revealed that stress or distraction engendered by the noisy background could not account for the poor performance of the control rats in a noisy environment. A reorganized auditory map may therefore have served as the neural substrate for the consistent performance of the noise-exposed rats in a noisy environment. PMID:22973201

  2. Temporal discrimination threshold: VBM evidence for an endophenotype in adult onset primary torsion dystonia.

    OpenAIRE

    Reilly, Richard; Whelan, Robert

    2009-01-01

    Familial adult-onset primary torsion dystonia is an autosomal dominant disorder with markedly reduced penetrance. Most adult-onset primary torsion dystonia patients are sporadic cases. Disordered sensory processing is found in adult-onset primary torsion dystonia patients; if also present in their unaffected relatives this abnormality may indicate non-manifesting gene carriage. Temporal discrimination thresholds (TDTs) are abnormal in adult-onset primary torsion dystonia, but the...

  3. Temporal processing dysfunction in schizophrenia as measured by time interval discrimination and tempo reproduction tasks.

    Science.gov (United States)

    Papageorgiou, Charalabos; Karanasiou, Irene S; Kapsali, Fotini; Stachtea, Xanthy; Kyprianou, Miltiades; Tsianaka, Eleni I; Karakatsanis, Nikolaos A; Rabavilas, Andreas D; Uzunoglu, Nikolaos K; Papadimitriou, George N

    2013-01-10

    Time perception deficiency has been implicated in schizophrenia; however, the exact nature of this remains unclear. The present study was designed with the aim of delineating timing deficits in schizophrenia by examining the performance of patients with schizophrenia and healthy volunteers in an interval discrimination test and their accuracy and precision in a pacing reproduction–replication test. The first task involved temporal discrimination of intervals, in which participants (60 patients with schizophrenia and 35 healthy controls) had to judge whether intervals were longer than, shorter than, or equal to a standard interval. The second task required repetitive self-paced tapping to test accuracy and precision in the reproduction and replication of tempos. Patients were found to differ significantly from the controls in the psychoticism scale of the EPQ, the proportion of correct responses in the interval discrimination test, and the overall accuracy and precision in the reproduction and replication of sound sequences. Patients with a poorer ability to discriminate time intervals had higher scores on the Positive and Negative Syndrome Scale (PANSS) and the Brief Psychiatric Rating Scale (BPRS) than good responders. There were no gender effects and there were no differences between subgroups of patients taking different kinds or combinations of drugs. Analysis showed that performance on timing tasks decreased with increasing psychopathology, and therefore that timing dysfunctions are directly linked to the severity of the illness. Different temporal dysfunctions can be traced to different psychophysiological origins that can be explained using the Scalar Expectancy Theory (SET).

  4. The endophenotype and the phenotype: temporal discrimination and adult-onset dystonia.

    Science.gov (United States)

    Hutchinson, Michael; Kimmich, Okka; Molloy, Anna; Whelan, Robert; Molloy, Fiona; Lynch, Tim; Healy, Daniel G; Walsh, Cathal; Edwards, Mark J; Ozelius, Laurie; Reilly, Richard B; O'Riordan, Seán

    2013-11-01

    The pathogenesis and the genetic basis of adult-onset primary torsion dystonia remain poorly understood. Because of markedly reduced penetrance in this disorder, a number of endophenotypes have been proposed; many of these may be epiphenomena secondary to disease manifestation. Mediational endophenotypes represent gene expression; the study of trait (endophenotypic) rather than state (phenotypic) characteristics avoids the misattribution of secondary adaptive cerebral changes to pathogenesis. We argue that abnormal temporal discrimination is a mediational endophenotype; its use facilitates examination of the effects of age, gender, and environment on disease penetrance in adult-onset dystonia. Using abnormal temporal discrimination in unaffected first-degree relatives as a marker for gene mutation carriage may inform exome sequencing techniques in families with few affected individuals. We further hypothesize that abnormal temporal discrimination reflects dysfunction in an evolutionarily conserved subcortical-basal ganglia circuit for the detection of salient novel environmental change. The mechanisms of dysfunction in this pathway should be a focus for future research in the pathogenesis of adult-onset primary torsion dystonia. © 2013 International Parkinson and Movement Disorder Society.

  5. Visual and auditory socio-cognitive perception in unilateral temporal lobe epilepsy in children and adolescents: a prospective controlled study.

    Science.gov (United States)

    Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania

    2014-12-01

    A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Control participants (72), distributed across 10 age groups, served as a control group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also benefited from a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re

  6. The internal auditory clock: what can evoked potentials reveal about the analysis of temporal sound patterns, and abnormal states of consciousness?

    Science.gov (United States)

    Jones, S J

    2002-09-01

    internal "clocks"? Abnormal mismatch potentials may provide a manifestation of a disordered auditory time-sense, sometimes being abolished in comatose patients while the C-potentials and similar responses to the onset of tones are preserved. Both C- and M-potentials were usually found to be preserved, however, in patients who had emerged from coma and were capable of discriminating sounds. Substantially intact responses were also recorded from three patients who were functionally in a "vegetative" state. The C- and M-potentials were once again dissociated in a group of patients with multiple sclerosis, only the mismatch potentials being found to be significantly delayed. This subclinical impairment of a memory-based process responsible for the detection of change in temporal sound patterns may be related to defects in other memory domains such as working memory.

  7. Auditory Temporal-Organization Abilities in School-Age Children with Peripheral Hearing Loss

    Science.gov (United States)

    Koravand, Amineh; Jutras, Benoit

    2013-01-01

    Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…

  8. Modified impact of emotion on temporal discrimination in a transgenic rat model of Huntington disease

    Directory of Open Access Journals (Sweden)

    Alexis Faure

    2013-09-01

    Full Text Available Huntington's disease (HD) is characterized by a triad of motor, cognitive and emotional symptoms, along with neuropathology in the fronto-striatal circuit and limbic system, including the amygdala. Emotional alterations, which have a negative impact on patient well-being, represent some of the earliest symptoms of HD and might be related to the onset of the neurodegenerative process. In the transgenic rat model (tgHD rats), evidence suggests emotional alterations at the symptomatic stage along with neuropathology of the central nucleus of the amygdala (CE). Studies in humans and animals demonstrate that emotion can modulate time perception. The impact of emotion on time perception has never been tested in HD, nor is it known whether that impact could be part of the presymptomatic emotional phenotype of the pathology. The aim of this paper was to characterize the effect of emotion on temporal discrimination in presymptomatic tgHD animals. In the first experiment, we characterized the acute effect of an emotion (a fear-conditioned stimulus) on temporal discrimination using a bisection procedure, and tested its dependency upon an intact central amygdala. The second experiment was aimed at comparing presymptomatic homozygous transgenic animals at 7 months of age and their wild-type littermates (WT) in their performance on the modulation of temporal discrimination by emotion. Our principal findings show that (1) a fear cue produces a short-lived decrease of temporal precision after its termination, and (2) animals with a medial CE lesion and presymptomatic tgHD animals demonstrate an alteration of this emotion-evoked temporal distortion. The results contribute to our knowledge about the presymptomatic phenotype of this HD rat model, showing a susceptibility to emotion that may be related to dysfunction of the central nucleus of the amygdala.

  9. Age-related deficits in auditory temporal processing: unique contributions of neural dyssynchrony and slowed neuronal processing.

    Science.gov (United States)

    Harris, Kelly C; Dubno, Judy R

    2017-05-01

    This study was guided by the hypothesis that the aging central nervous system progressively loses its ability to process rapid acoustic changes that are important for speech recognition. Specifically, we hypothesized that age-related deficits in neural synchrony and neuronal oscillatory activity occur independently in older adults and disrupt auditory temporal processing. Neural synchrony is largely dependent on phase locking within the central auditory pathway, beginning at the auditory nerve. In contrast, the resonance characteristics of oscillatory activity are dependent on the integrity and structure of long range cortical connections. We tested our hypotheses by assessing age-related differences in electrophysiologic correlates of neural synchrony and peak oscillatory frequency in younger and older adults with normal hearing and determining their associations with a behavioral measure of gap detection. Phase-locking values were smaller (poorer neural synchrony) and peak alpha frequency was lower for older than younger adults and decreased as gap detection thresholds increased; variations in phase-locking values and peak alpha frequency uniquely predicted gap detection thresholds. These effects were driven, in large part, by associations in older adults. These results reveal dissociable neural mechanisms associated with distinct underlying pathology that may differentially be present in older adults and contribute to auditory processing declines. Copyright © 2017 Elsevier Inc. All rights reserved.
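    The two electrophysiological predictors used here, phase-locking value (a measure of trial-to-trial phase consistency at a given frequency) and peak alpha frequency (the frequency of maximal power in the 8-12 Hz band), are straightforward to compute from epoched data. Below is a minimal sketch on simulated trials; the simulated signals, the 40 Hz analysis band, and the filter settings are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

fs, n_trials, n_samples = 500, 60, 500          # 1-s epochs at 500 Hz
rng = np.random.default_rng(3)
t = np.arange(n_samples) / fs

# Simulated trials: a 40 Hz component with partly consistent phase, an ongoing ~10 Hz rhythm, and noise
trials = np.array([
    np.sin(2 * np.pi * 40 * t + rng.normal(0, 0.8))
    + 0.8 * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
    + rng.normal(0, 1, n_samples)
    for _ in range(n_trials)
])

# Phase-locking value at ~40 Hz: band-pass, extract instantaneous phase, average unit phasors over trials
b, a = butter(4, [35 / (fs / 2), 45 / (fs / 2)], btype="band")
phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
plv = np.abs(np.mean(np.exp(1j * phases), axis=0))      # one PLV value per time point
print(f"Mean PLV in the analysis window: {plv[100:400].mean():.2f}")

# Peak alpha frequency from the trial-averaged power spectrum (8-12 Hz band)
freqs, psd = welch(trials, fs=fs, nperseg=256, axis=1)
alpha = (freqs >= 8) & (freqs <= 12)
peak_alpha = freqs[alpha][np.argmax(psd.mean(axis=0)[alpha])]
print(f"Peak alpha frequency: {peak_alpha:.1f} Hz")
```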

  10. Distinct Temporal Coordination of Spontaneous Population Activity between Basal Forebrain and Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Josue G. Yague

    2017-09-01

    Full Text Available The basal forebrain (BF) has long been implicated in attention, learning and memory, and recent studies have established a causal relationship between artificial BF activation and arousal. However, neural ensemble dynamics in the BF still remains unclear. Here, recording neural population activity in the BF and comparing it with simultaneously recorded cortical population activity under both anesthetized and unanesthetized conditions, we investigate the difference in the structure of spontaneous population activity between the BF and the auditory cortex (AC) in mice. The AC neuronal population shows a skewed spike rate distribution, a higher proportion of short (≤80 ms) inter-spike intervals (ISIs), and a rich repertoire of rhythmic firing across frequencies. Although the distribution of spontaneous firing rate in the BF is also skewed, a proportion of short ISIs can be explained by a Poisson model at short time scales (≤20 ms) and spike count correlations are lower compared to AC cells, with optogenetically identified cholinergic cell pairs showing exceptionally higher correlations. Furthermore, a smaller fraction of BF neurons shows spike-field entrainment across frequencies: a subset of BF neurons fire rhythmically at slow (≤6 Hz) frequencies, with varied phase preferences to ongoing field potentials, in contrast to a consistent phase preference of AC populations. Firing of these slow rhythmic BF cells is correlated to a greater degree than other rhythmic BF cell pairs. Overall, the fundamental difference in the structure of population activity between the AC and BF is their temporal coordination, in particular their operational timescales. These results suggest that BF neurons slowly modulate downstream populations whereas cortical circuits transmit signals on multiple timescales. Thus, the characterization of the neural ensemble dynamics in the BF provides further insight into the neural mechanisms by which brain states are regulated.
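    Two of the population measures used here, the proportion of short inter-spike intervals compared against a Poisson expectation and pairwise spike count correlations, are easy to compute from spike trains. Below is a minimal sketch on simulated homogeneous Poisson spike trains; the firing rate, recording duration, and 100 ms counting bins are assumptions, while the 20 ms and 80 ms cutoffs follow the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
duration, rate = 600.0, 5.0                     # seconds, spikes per second

def poisson_train(rate_hz, dur_s):
    """Homogeneous Poisson spike train: exponentially distributed ISIs."""
    isis = rng.exponential(1.0 / rate_hz, size=int(rate_hz * dur_s * 2))
    times = np.cumsum(isis)
    return times[times < dur_s]

train_a, train_b = poisson_train(rate, duration), poisson_train(rate, duration)

# Proportion of short ISIs versus the Poisson expectation P(ISI <= x) = 1 - exp(-rate * x)
isis = np.diff(train_a)
for cutoff in (0.020, 0.080):                   # 20 ms and 80 ms
    observed = np.mean(isis <= cutoff)
    expected = 1.0 - np.exp(-rate * cutoff)
    print(f"ISI <= {cutoff * 1e3:.0f} ms: observed {observed:.3f}, Poisson expectation {expected:.3f}")

# Pairwise spike count correlation in 100 ms bins
bins = np.arange(0, duration + 0.1, 0.1)
counts_a, _ = np.histogram(train_a, bins)
counts_b, _ = np.histogram(train_b, bins)
print(f"Spike count correlation: {np.corrcoef(counts_a, counts_b)[0, 1]:.3f}")
```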

  11. Effects of lengthening the speech signal on auditory word discrimination in kindergartners with SLI

    NARCIS (Netherlands)

    Segers, P.C.J.; Verhoeven, L.T.W.

    2005-01-01

    In the present study, it was investigated whether kindergartners with specific language impairment (SLI) and normal-language achieving (NLA) kindergartners can benefit from slowing down the entire speech signal or part of the speech signal in a synthetic speech discrimination task. Subjects were 19

  12. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  13. Improving the efficiency of multisensory integration in older adults: audio-visual temporal discrimination training reduces susceptibility to the sound-induced flash illusion.

    Science.gov (United States)

    Setti, Annalisa; Stapleton, John; Leahy, Daniel; Walsh, Cathal; Kenny, Rose Anne; Newell, Fiona N

    2014-08-01

    From language to motor control, efficient integration of information from different sensory modalities is necessary for maintaining a coherent interaction with the environment. While a number of training studies have focused on training perceptual and cognitive function, only very few are specifically targeted at improving multisensory processing. Discrimination of temporal order or coincidence is a criterion used by the brain to determine whether cross-modal stimuli should be integrated or not. In this study we trained older adults to judge the temporal order of visual and auditory stimuli. We then tested whether the training had an effect in reducing susceptibility to a multisensory illusion, the sound induced flash illusion. Improvement in the temporal order judgement task was associated with a reduction in susceptibility to the illusion, particularly at longer Stimulus Onset Asynchronies, in line with a more efficient multisensory processing profile. The present findings set the ground for more broad training programs aimed at improving older adults' cognitive performance in domains in which efficient temporal integration across the senses is required. Copyright © 2014 Elsevier Ltd. All rights reserved.
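    Performance in an audio-visual temporal order judgement task of this kind is usually summarized by fitting a psychometric function over stimulus onset asynchrony (SOA) and reading off a point of subjective simultaneity and a just noticeable difference. Below is a minimal sketch of such a fit on simulated responses; the SOA range, the cumulative-Gaussian model, and the simulated observer are assumptions for illustration, not the study's procedure.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

rng = np.random.default_rng(11)

# SOAs (ms): negative = auditory first, positive = visual first
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
n_per_soa = 40

def cumulative_gaussian(soa, pss, sigma):
    """P('visual first') as a function of SOA; pss = point of subjective simultaneity."""
    return norm.cdf(soa, loc=pss, scale=sigma)

# Simulated observer with PSS = 20 ms and sigma = 90 ms
p_true = cumulative_gaussian(soas, 20.0, 90.0)
responses = rng.binomial(n_per_soa, p_true) / n_per_soa

(pss_hat, sigma_hat), _ = curve_fit(cumulative_gaussian, soas, responses, p0=[0.0, 100.0])
jnd = sigma_hat * norm.ppf(0.75)   # SOA change needed to go from 50% to 75% "visual first"
print(f"PSS = {pss_hat:.1f} ms, sigma = {sigma_hat:.1f} ms, JND (75%) = {jnd:.1f} ms")
```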

  14. Late Maturation of Auditory Perceptual Learning

    Science.gov (United States)

    Huyck, Julia Jones; Wright, Beverly A.

    2011-01-01

    Adults can improve their performance on many perceptual tasks with training, but when does the response to training become mature? To investigate this question, we trained 11-year-olds, 14-year-olds and adults on a basic auditory task (temporal-interval discrimination) using a multiple-session training regimen known to be effective for adults. The…

  15. Mismatch negativity (MMN) to spatial deviants and behavioral spatial discrimination ability in the etiology of auditory verbal hallucinations and thought disorder in schizophrenia.

    Science.gov (United States)

    Perrin, Megan A; Kantrowitz, Joshua T; Silipo, Gail; Dias, Elisa; Jabado, Omar; Javitt, Daniel C

    2018-01-01

    Persistent auditory verbal hallucinations (AVH) in schizophrenia are increasingly tied to dysfunction at the level of auditory cortex. AVH may reflect in part misattribution of internally generated thoughts to external spatial locations. Here, we investigated the association between persistent AVH and spatial localization abilities assessed both behaviorally and by mismatch negativity (MMN) to location deviants. Spatial- and tonal-discrimination abilities were assessed in patients (n=20) and controls (n=20) using free-field tones. MMN was assessed to spatial-location, pitch, and duration deviants. AVH and thought disorder were assessed using clinical evaluation. As predicted, patients showed significant reductions in behavioral spatial-discrimination ability, along with impaired MMN generation to location deviants. AVH severity correlated with impaired spatial discrimination, especially to right-hemifield stimuli (p=0.013), but did not correlate significantly with MMN or tone-matching deficits. These findings demonstrate a significant relationship between auditory cortical spatial localization abilities and AVH susceptibility, with relatively preserved function of left vs. right auditory cortex predisposing to more severe AVH, and support models that attribute persistent AVH to impaired source-monitoring. The findings suggest new approaches for therapeutic intervention for both AVH and thought disorder in schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Age and education adjusted normative data and discriminative validity for Rey's Auditory Verbal Learning Test in the elderly Greek population.

    Science.gov (United States)

    Messinis, Lambros; Nasios, Grigorios; Mougias, Antonios; Politis, Antonis; Zampakis, Petros; Tsiamaki, Eirini; Malefaki, Sonia; Gourzis, Phillipos; Papathanasopoulos, Panagiotis

    2016-01-01

    Rey's Auditory Verbal Learning Test (RAVLT) is a widely used neuropsychological test to assess episodic memory. In the present study we sought to establish normative and discriminative validity data for the RAVLT in the elderly population using previously adapted learning lists for the Greek adult population. We administered the test to 258 cognitively healthy elderly participants, aged 60-89 years, and two patient groups (192 with amnestic mild cognitive impairment, aMCI, and 65 with Alzheimer's disease, AD). From the statistical analyses, we found that age and education contributed significantly to most trials of the RAVLT, whereas the influence of gender was not significant. Younger elderly participants with higher education outperformed the older elderly with lower education levels. Moreover, both clinical groups performed significantly worse on most RAVLT trials and composite measures than matched cognitively healthy controls. Furthermore, the AD group performed more poorly than the aMCI group on most RAVLT variables. Receiver operating characteristic (ROC) analysis was used to examine the utility of the RAVLT trials to discriminate cognitively healthy controls from aMCI and AD patients. Area under the curve (AUC), an index of effect size, showed that most of the RAVLT measures (individual and composite) included in this study adequately differentiated between the performance of healthy elders and aMCI/AD patients. We also provide cutoff scores for discriminating cognitively healthy controls from aMCI and AD patients, based on the sensitivity and specificity of these scores. Moreover, we present age- and education-specific normative data for individual and composite scores for the Greek-adapted RAVLT in elderly subjects aged between 60 and 89 years for use in clinical and research settings.
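
    As a rough illustration of the ROC/AUC and cutoff analysis described above, the sketch below simulates test scores for two groups, computes the area under the ROC curve with scikit-learn, and picks a cutoff by Youden's J statistic. The data, group sizes, and the choice of Youden's J are assumptions made for illustration, not the study's procedure.

      # Sketch: ROC/AUC analysis and cutoff selection on simulated memory-test scores.
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(0)
      # Hypothetical total-learning scores: controls tend to score higher than patients.
      controls = rng.normal(45, 8, 100)
      patients = rng.normal(30, 8, 100)
      scores = np.concatenate([controls, patients])
      labels = np.concatenate([np.zeros(100), np.ones(100)])   # 1 = patient

      # Lower scores indicate impairment, so use the negated score as the "risk" value.
      auc = roc_auc_score(labels, -scores)
      fpr, tpr, thresholds = roc_curve(labels, -scores)

      # Youden's J picks the cutoff that maximizes sensitivity + specificity - 1.
      j = tpr - fpr
      best = np.argmax(j)
      cutoff = -thresholds[best]   # undo the negation to report a score cutoff
      print(f"AUC = {auc:.2f}; cutoff = {cutoff:.1f} "
            f"(sens = {tpr[best]:.2f}, spec = {1 - fpr[best]:.2f})")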

  17. A Novel Functional Magnetic Resonance Imaging Paradigm for the Preoperative Assessment of Auditory Perception in a Musician Undergoing Temporal Lobe Surgery.

    Science.gov (United States)

    Hale, Matthew D; Zaman, Arshad; Morrall, Matthew C H J; Chumas, Paul; Maguire, Melissa J

    2018-03-01

    Presurgical evaluation for temporal lobe epilepsy routinely assesses speech and memory lateralization and anatomic localization of the motor and visual areas but not baseline musical processing. This is paramount in a musician. Although validated tools exist to assess musical ability, there are no reported functional magnetic resonance imaging (fMRI) paradigms to assess musical processing. We examined the utility of a novel fMRI paradigm in an 18-year-old left-handed pianist who underwent surgery for a left temporal low-grade ganglioglioma. Preoperative evaluation consisted of neuropsychological evaluation, T1-weighted and T2-weighted magnetic resonance imaging, and fMRI. Auditory blood oxygen level-dependent fMRI was performed using a dedicated auditory scanning sequence. Three separate auditory investigations were conducted: listening to, humming, and thinking about a musical piece. All auditory fMRI paradigms activated the primary auditory cortex with varying degrees of auditory lateralization. Thinking about the piece additionally activated the primary visual cortices (bilaterally) and right dorsolateral prefrontal cortex. Humming demonstrated left-sided predominance of auditory cortex activation with activity observed in close proximity to the tumor. This study demonstrated an fMRI paradigm for evaluating musical processing that could form part of preoperative assessment for patients undergoing temporal lobe surgery for epilepsy. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Evidence for Neural Computations of Temporal Coherence in an Auditory Scene and Their Enhancement during Active Listening.

    Science.gov (United States)

    O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C

    2015-05-06

    The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.
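
    The "linear regression methods" mentioned above are commonly implemented as a lagged regression (a temporal response function) from the stimulus feature to the EEG. The sketch below simulates a coherence time series and one EEG channel, then recovers the response lag with ridge regression; the sampling rate, lag range, and regularization are illustrative assumptions rather than the authors' settings.

      # Sketch: lagged (temporal response function) regression on simulated signals.
      import numpy as np
      from sklearn.linear_model import Ridge

      fs = 128                                   # sampling rate (Hz), illustrative
      n = fs * 60                                # one minute of data
      rng = np.random.default_rng(1)
      coherence = rng.standard_normal(n)         # simulated stimulus coherence time series
      eeg = np.roll(coherence, int(0.15 * fs)) + 0.5 * rng.standard_normal(n)  # fake 150-ms response

      lags = np.arange(0, int(0.3 * fs))         # model lags from 0 to 300 ms
      X = np.column_stack([np.roll(coherence, lag) for lag in lags])
      X[:lags.max(), :] = 0                      # discard wrapped-around samples

      model = Ridge(alpha=1.0).fit(X, eeg)
      trf = model.coef_                          # weight per lag ~ neural response to coherence
      peak_lag_ms = lags[np.argmax(np.abs(trf))] * 1000 / fs
      print(f"Peak response weight at ~{peak_lag_ms:.0f} ms lag")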

  19. The role of auditory temporal cues in the fluency of stuttering adults

    OpenAIRE

    Furini, Juliana; Picoloto, Luana Altran; Marconato, Eduarda; Bohnen, Anelise Junqueira; Cardoso, Ana Claudia Vieira; Oliveira, Cristiane Moço Canhetti de

    2017-01-01

    ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering in non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with Stuttering (Research Group - RG), and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds dela...

  20. Temporal integration of loudness, loudness discrimination, and the form of the loudness function

    DEFF Research Database (Denmark)

    Buus, Søren; Florentine, Mary; Poulsen, Torben

    1997-01-01

    … for loudness level and temporal integration for loudness. Results for four listeners show that the amount of temporal integration, defined as the level difference between equally loud short and long tones, varies markedly with level and is largest at moderate levels. The effect of level increases as the duration of the short stimulus decreases and is largest for comparisons between the 2- and the 250-ms tones. The loudness-level jnds are also largest at moderate levels and, contrary to traditional jnds for the level of two dual-duration tones, they do not appear to depend on duration. The latter finding indicates that loudness discrimination between stimuli that differ along multiple dimensions is not the same as level discrimination between stimuli that differ only in level. An equal-loudness-ratio model, which assumes that the ratio of loudnesses for a long and a short tone at equal SPL is the same …

  1. Spectro-temporal analysis of complex tones: two cortical processes dependent on retention of sounds in the long auditory store.

    Science.gov (United States)

    Jones, S J; Vaz Pato, M; Sprague, L

    2000-09-01

    The aim was to examine whether two cortical processes concerned with spectro-temporal analysis of complex tones, a 'C-process' generating CN1 and CP2 potentials at approximately 100 and 180 ms after a sudden change of pitch or timbre, and an 'M-process' generating MN1 and MP2 potentials of similar latency at the sudden cessation of repeated changes, are dependent on accumulation of a sound image in the long auditory store. The durations of steady (440 Hz) and rapidly oscillating (440-494 Hz, 16 changes/s) pitch of a synthesized 'clarinet' tone were reciprocally varied between 0.5 and 4.5 s within a duty cycle of 5 s. Potentials were recorded at the beginning and end of the period of oscillation in 10 non-attending normal subjects. The CN1 at the beginning of pitch oscillation and the MN1 at the end were both strongly influenced by the duration of the immediately preceding stimulus pattern, mean amplitudes being 3-4 times larger after 4.5 s as compared with 0.5 s. The processes responsible for both CN1 and MN1 are influenced by the duration of the preceding sound pattern over a period comparable to that of the 'echoic memory' or long auditory store. The store therefore appears to occupy a key position in spectro-temporal sound analysis. The C-process is concerned with the spectral structure of complex sounds, and may therefore reflect the 'grouping' of frequency components underlying auditory stream segregation. The M-process (mismatch negativity) is concerned with the temporal sound structure, and may play an important role in the extraction of information from sequential sounds.

  2. Effects of significance of auditory location changes on event related brain potentials and pitch discrimination performance.

    Science.gov (United States)

    Koistinen, Sonja; Rinne, Teemu; Cederström, Sebastian; Alho, Kimmo

    2012-01-03

    We examined effects of the significance of task-irrelevant changes in the location of tones on the mismatch negativity (MMN) and P3a event-related brain potentials. The participants were to discriminate between two frequency-modulated tones differing from each other in the direction of frequency glide. Each tone was delivered through one of five loudspeakers in front of the participant. On most trials, a tone was presented from the same location as the preceding tone, but occasionally the location changed. In the Varying Location Condition, these changes, although irrelevant with regard to pitch discrimination, were still significant for performance, as the following tones were presented from the new location, to which attention therefore had to be shifted. In the Fixed Location Condition, the location changes were less significant, as the tones following a location change were presented from the original location. In both conditions, the location changes were associated with decreased hit rates and increased reaction times in the pitch discrimination task. However, the hit rate decrease was larger in the Fixed Location Condition, suggesting that in this condition the location changes were just distractors. MMN and P3a responses were elicited by location changes in both conditions. In the Fixed Location Condition, a P3a was also elicited by the first tone following a location change at the original location, while the MMN was not. Thus, the P3a appeared to be related to shifting of attention in space and was not tightly coupled with MMN elicitation. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Risk of depression enhances auditory pitch discrimination in the brain as indexed by the mismatch negativity.

    Science.gov (United States)

    Bonetti, L; Haumann, N T; Vuust, P; Kliuchko, M; Brattico, E

    2017-10-01

    Depression is a state of aversion to activity and low mood that affects behaviour, thoughts, feelings and sense of well-being. Moreover, the individual depression trait is associated with altered auditory cortex activation and appraisal of the affective content of sounds. Mismatch negativity responses (MMNs) to acoustic feature changes (pitch, timbre, location, intensity, slide and rhythm) inserted in a musical sequence played in major or minor mode were recorded using magnetoencephalography (MEG) in 88 subclinical participants with depression risk. We found correlations between MMNs to slide and pitch and the level of depression risk reported by participants, indicating that higher MMNs correspond to higher risk of depression. Furthermore we found significantly higher MMN amplitudes to mistuned pitches within a major context compared to MMNs to pitch changes in a minor context. The brains of individuals with depression risk are more responsive to mistuned and fast pitch stimulus changes, even at a pre-attentive level. Considering the altered appraisal of affective contents of sounds in depression and the relevance of spectral pitch features for those contents in music and speech, we propose that individuals with subclinical depression risk are more tuned to tracking sudden pitch changes. Copyright © 2017 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  4. Low-level neural auditory discrimination dysfunctions in specific language impairment—A review on mismatch negativity findings

    Directory of Open Access Journals (Sweden)

    Teija Kujala

    2017-12-01

    In specific language impairment (SLI), there is a delay in the child’s oral language skills when compared with nonverbal cognitive abilities. The problems typically relate to phonological and morphological processing and word learning. This article reviews studies which have used mismatch negativity (MMN) in investigating low-level neural auditory dysfunctions in this disorder. With MMN, it is possible to tap the accuracy of neural sound discrimination and sensory memory functions. These studies have found smaller response amplitudes and longer latencies for speech and non-speech sound changes in children with SLI than in typically developing children, suggesting impaired and slow auditory discrimination in SLI. Furthermore, they suggest shortened sensory memory duration and vulnerability of the sensory memory to masking effects. Importantly, some studies reported associations between MMN parameters and language test measures. In addition, it was found that language intervention can influence the abnormal MMN in children with SLI, enhancing its amplitude. These results suggest that the MMN can shed light on the neural basis of various auditory and memory impairments in SLI, which are likely to influence speech perception. Keywords: Specific language impairment, Auditory processing, Mismatch negativity (MMN)

  5. Nerve canals at the fundus of the internal auditory canal on high-resolution temporal bone CT

    International Nuclear Information System (INIS)

    Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul

    2001-01-01

    The aim was to identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval, contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerve, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerve were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of

  6. Deciphering auditory processing disorders in children.

    Science.gov (United States)

    Chermak, Gail D

    2002-08-01

    APD is not a label for a unitary disease entity but rather a description of functional deficits [3]. It is a complex and heterogeneous group of auditory-specific disorders usually associated with a range of listening and learning deficits [3,4]. Underlying APD is a deficit observed in one or more of the auditory processes responsible for generating the auditory evoked potentials and the following behaviors: sound localization and lateralization; auditory discrimination; auditory pattern recognition; temporal aspects of audition, including temporal resolution, masking, integration, and ordering; auditory performance with competing acoustic signals; and auditory performance with degraded acoustic signals [2]. Comprehensive assessment is necessary for the accurate differential diagnosis of APD from other "look-alike" disorders, most notably ADHD and language processing disorders. Speech-language pathologists, psychologists, educators, and physicians contribute to this more comprehensive assessment. The primary role of otolaryngologists is to evaluate and treat peripheral hearing disorders, such as otitis media. Children with APDs may present to an otolaryngologist, thus requiring the physician to make appropriate referral for assessment and intervention. Currently, diagnosis of APD is based on the outcomes of behavioral tests, supplemented by electroacoustic measures and, to a lesser extent, by electrophysiologic measures [1]. Intervention for APD focuses on improving the quality of the acoustic signal and the listening environment, improving auditory skills, and enhancing utilization of metacognitive and language resources [2]. Additional controlled case studies and single-subject and group research designs are needed to ascertain systematically the relative efficacy of various treatment and management approaches.

  7. Theta oscillation and neuronal activity in rat hippocampus are involved in temporal discrimination of time in seconds

    Directory of Open Access Journals (Sweden)

    Tomoaki Nakazono

    2015-06-01

    The discovery of time cells revealed that the rodent hippocampus carries information about time. Previous studies have suggested that a role of hippocampal time cells is to integrate temporally segregated events into a sequence using working memory with time perception. However, it is unclear whether hippocampal cells contribute to time perception itself, because most previous studies employed delayed matching-to-sample tasks that did not evaluate time perception separately from working memory processes. Here, we investigated the function of the rat hippocampus in time perception using a temporal discrimination task. In the task, rats had to discriminate between durations of 1 and 3 sec to get a reward, and maintaining task-related information in working memory was not required. We found that some hippocampal neurons showed firing rate modulation similar to that of time cells. Moreover, theta oscillation of local field potentials (LFPs) showed a transient enhancement of power during time discrimination periods. However, there was little relationship between the neuronal activities and theta oscillations. These results suggest that both the individual neuronal activities and theta oscillations of LFPs in the hippocampus may be engaged in seconds-order time perception; however, they participate in different ways.
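
    The transient theta-power enhancement reported above is the kind of effect that can be quantified by band-pass filtering the LFP and taking the Hilbert envelope. The sketch below does this on a simulated trace containing an 8 Hz burst; the band edges, sampling rate, and burst timing are made up for illustration and are not the study's parameters.

      # Sketch: theta-band (here 6-10 Hz) power from a simulated LFP via filtering + Hilbert envelope.
      import numpy as np
      from scipy.signal import butter, sosfiltfilt, hilbert

      fs = 1000                                    # sampling rate (Hz)
      t = np.arange(0, 6, 1 / fs)                  # 6-s trial
      rng = np.random.default_rng(3)
      # Fake LFP: background noise plus a theta burst during a 1-3 s "discrimination" window
      lfp = rng.standard_normal(t.size)
      burst = (t > 1) & (t < 3)
      lfp[burst] += 2.0 * np.sin(2 * np.pi * 8 * t[burst])

      sos = butter(4, [6, 10], btype="band", fs=fs, output="sos")
      theta = sosfiltfilt(sos, lfp)
      power = np.abs(hilbert(theta)) ** 2          # instantaneous theta power

      print(f"Mean theta power inside burst:  {power[burst].mean():.2f}")
      print(f"Mean theta power outside burst: {power[~burst].mean():.2f}")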

  8. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  9. ERPs reveal the temporal dynamics of auditory word recognition in specific language impairment.

    Science.gov (United States)

    Malins, Jeffrey G; Desroches, Amy S; Robertson, Erin K; Newman, Randy Lynn; Archibald, Lisa M D; Joanisse, Marc F

    2013-07-01

    We used event-related potentials (ERPs) to compare auditory word recognition in children with specific language impairment (SLI group; N=14) to a group of typically developing children (TD group; N=14). Subjects were presented with pictures of items and heard auditory words that either matched or mismatched the pictures. Mismatches overlapped expected words in word-onset (cohort mismatches; see: DOLL, hear: dog), rhyme (CONE-bone), or were unrelated (SHELL-mug). In match trials, the SLI group showed a different pattern of N100 responses to auditory stimuli compared to the TD group, indicative of early auditory processing differences in SLI. However, the phonological mapping negativity (PMN) response to mismatching items was comparable across groups, suggesting that just like TD children, children with SLI are capable of establishing phonological expectations and detecting violations of these expectations in an online fashion. Perhaps most importantly, we observed a lack of attenuation of the N400 for rhyming words in the SLI group, which suggests that either these children were not as sensitive to rhyme similarity as their typically developing peers, or did not suppress lexical alternatives to the same extent. These findings help shed light on the underlying deficits responsible for SLI. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users.

    Science.gov (United States)

    Fu, Qian-Jie; Chinchilla, Sherol; Galvin, John J

    2004-09-01

    The present study investigated the relative importance of temporal and spectral cues in voice gender discrimination and vowel recognition by normal-hearing subjects listening to an acoustic simulation of cochlear implant speech processing and by cochlear implant users. In the simulation, the number of speech processing channels ranged from 4 to 32, thereby varying the spectral resolution; the cutoff frequencies of the channels' envelope filters ranged from 20 to 320 Hz, thereby manipulating the available temporal cues. For normal-hearing subjects, results showed that both voice gender discrimination and vowel recognition scores improved as the number of spectral channels was increased. When only 4 spectral channels were available, voice gender discrimination significantly improved as the envelope filter cutoff frequency was increased from 20 to 320 Hz. For all spectral conditions, increasing the amount of temporal information had no significant effect on vowel recognition. Both voice gender discrimination and vowel recognition scores were highly variable among implant users. The performance of cochlear implant listeners was similar to that of normal-hearing subjects listening to comparable speech processing (4-8 spectral channels). The results suggest that both spectral and temporal cues contribute to voice gender discrimination and that temporal cues are especially important for cochlear implant users to identify the voice gender when there is reduced spectral resolution.
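
    The acoustic simulation described above is typically built as a noise-excited channel vocoder: the signal is split into a number of spectral channels, each channel's temporal envelope is extracted with a low-pass filter whose cutoff limits the available temporal cues, and the envelopes then modulate band-limited noise carriers. The sketch below is a minimal version of that idea; the band edges, filter orders, and parameter values are illustrative and not taken from the study.

      # Sketch: minimal noise-excited channel vocoder (illustrative parameters only).
      import numpy as np
      from scipy.signal import butter, sosfiltfilt

      def noise_vocoder(x, fs, n_channels=4, env_cutoff=160.0, f_lo=100.0, f_hi=6000.0):
          """Replace each analysis band's fine structure with noise, keeping its envelope."""
          rng = np.random.default_rng(0)
          edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
          out = np.zeros_like(x)
          env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
          for lo, hi in zip(edges[:-1], edges[1:]):
              band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
              band = sosfiltfilt(band_sos, x)
              # Envelope: half-wave rectification, then low-pass at env_cutoff (the temporal-cue limit)
              env = np.clip(sosfiltfilt(env_sos, np.maximum(band, 0.0)), 0.0, None)
              # Carrier: white noise restricted to the same band, scaled by the envelope
              carrier = sosfiltfilt(band_sos, rng.standard_normal(len(x)))
              out += env * carrier
          return out

      # Example: vocode one second of a synthetic three-component tone complex
      fs = 16000
      t = np.arange(fs) / fs
      signal = np.sum([np.sin(2 * np.pi * f * t) for f in (200, 400, 600)], axis=0)
      vocoded = noise_vocoder(signal, fs, n_channels=8, env_cutoff=320.0)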

  11. Temporal processing, localization and auditory closure in individuals with unilateral hearing loss

    Directory of Open Access Journals (Sweden)

    Regiane Nishihata

    2012-01-01

    PURPOSE: to evaluate temporal processing abilities, sound localization, and auditory closure, and to investigate possible associations with complaints of learning, communication and language difficulties in individuals with unilateral hearing loss. METHODS: Participants were 26 individuals with ages between 8 and 15 years, divided into two groups: Unilateral hearing loss group; and Normal hearing group. Each group was composed of 13 individuals, matched by gender, age and educational level. All subjects were submitted to anamnesis, peripheral hearing evaluation, and auditory processing evaluation through behavioral tests of sound localization, sequential memory, the Random Gap Detection test, and the speech-in-noise test. Nonparametric statistical tests were used to compare the groups, considering the presence or absence of hearing loss and the ear with hearing loss. RESULTS: Unilateral hearing loss started during preschool, and had unknown or identified etiologies, such as meningitis, traumas or mumps. Most individuals reported delays in speech, language and learning development, especially those with hearing loss in the right ear. The group with hearing loss had worse responses in the abilities of temporal ordering and resolution, sound localization and auditory closure. Individuals with hearing loss in the left ear showed worse results than those with hearing loss in the right ear in all abilities, except in sound localization. CONCLUSION: The presence of unilateral hearing loss causes sound localization, auditory closure, temporal ordering and temporal resolution difficulties. Individuals with unilateral hearing loss in the right ear have more complaints than those with unilateral hearing loss in the left ear. Individuals with hearing loss in the left ear have more difficulties in auditory closure, temporal resolution, and temporal ordering.

  12. Auditory properties in the parabelt regions of the superior temporal gyrus in the awake macaque monkey: an initial survey.

    Science.gov (United States)

    Kajikawa, Yoshinao; Frey, Stephen; Ross, Deborah; Falchier, Arnaud; Hackett, Troy A; Schroeder, Charles E

    2015-03-11

    The superior temporal gyrus (STG) is on the inferior-lateral brain surface near the external ear. In macaques, 2/3 of the STG is occupied by an auditory cortical region, the "parabelt," which is part of a network of inferior temporal areas subserving communication and social cognition as well as object recognition and other functions. However, due to its location beneath the squamous temporal bone and temporalis muscle, the STG, like other inferior temporal regions, has been a challenging target for physiological studies in awake-behaving macaques. We designed a new procedure for implanting recording chambers to provide direct access to the STG, allowing us to evaluate neuronal properties and their topography across the full extent of the STG in awake-behaving macaques. Initial surveys of the STG have yielded several new findings. Unexpectedly, STG sites in monkeys that were listening passively responded to tones with magnitudes comparable to those of responses to 1/3 octave band-pass noise. Mapping results showed longer response latencies in more rostral sites and possible tonotopic patterns parallel to core and belt areas, suggesting the reversal of gradients between caudal and rostral parabelt areas. These results will help further exploration of parabelt areas. Copyright © 2015 the authors 0270-6474/15/354140-11$15.00/0.

  13. Asymmetry of temporal auditory T-complex: right ear-left hemisphere advantage in Tb timing in children.

    Science.gov (United States)

    Bruneau, Nicole; Bidet-Caulet, Aurélie; Roux, Sylvie; Bonnet-Brilhault, Frédérique; Gomot, Marie

    2015-02-01

    The aim was to investigate brain asymmetry of the temporal auditory evoked potentials (T-complex) in response to monaural stimulation in children compared to adults. Ten children (7 to 9 years) and ten young adults participated in the study. All were right-handed. The auditory stimuli used were tones (1100 Hz, 70 dB SPL, 50 ms duration) delivered monaurally (right, left ear) at four different levels of stimulus onset asynchrony (700-1100-1500-3000 ms). Latency and amplitude of responses were measured at left and right temporal sites according to the ear stimulated. Peaks of the three successive deflections (Na-Ta-Tb) of the T-complex were greater in amplitude and better defined in children than in adults. Amplitude measurements in children indicated that Na culminates on the left hemisphere whatever the ear stimulated, whereas Ta and Tb culminate on the right hemisphere but for left ear stimuli only. Peak latency displayed different patterns of asymmetry. Na and Ta displayed shorter latencies for contralateral stimulation. The original finding was that Tb peak latency was the shortest at the left temporal site for right ear stimulation in children. Amplitude increased and/or peak latency decreased with increasing SOA; however, no interaction effect was found with recording site or with ear stimulated. Our main original result indicates a right ear-left hemisphere timing advantage for the Tb peak in children. The Tb peak would therefore be a good candidate as an electrophysiological marker of ear advantage effects during dichotic stimulation and of functional inter-hemisphere interactions and connectivity in children. Copyright © 2014. Published by Elsevier B.V.

  14. Echoic Memory: Investigation of Its Temporal Resolution by Auditory Offset Cortical Responses

    OpenAIRE

    Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke

    2014-01-01

    Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping a response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temp...

  15. Monkey's short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices.

    Science.gov (United States)

    Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C

    2016-06-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  16. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    2010-07-01

    A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
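
    The mutual-information comparison between candidate codes can be illustrated by discretizing each response measure and computing the information it carries about the repetition rate. The sketch below does this for a made-up rate code and ISI code; the response models, trial counts, and binning are assumptions chosen only to show the computation, not the study's data or analysis settings.

      # Sketch: mutual information between repetition rate and two discretized response codes.
      import numpy as np
      from sklearn.metrics import mutual_info_score

      rng = np.random.default_rng(2)
      rates = np.repeat([2, 4, 8, 16, 32], 200)             # repetition rates (Hz), 200 trials each
      # Hypothetical rate code: spike count grows noisily with repetition rate
      spike_counts = rng.poisson(lam=2 + np.log2(rates), size=rates.size)
      # Hypothetical ISI code: mean interspike interval tracks the click period, with jitter
      mean_isi = 1.0 / rates + rng.normal(0, 0.02, size=rates.size)

      def mi_bits(stimulus, response, n_bins=8):
          # Mutual information (bits) after binning the response into n_bins categories
          edges = np.quantile(response, np.linspace(0, 1, n_bins + 1)[1:-1])
          return mutual_info_score(stimulus, np.digitize(response, edges)) / np.log(2)

      print(f"Rate code : {mi_bits(rates, spike_counts):.2f} bits")
      print(f"ISI code  : {mi_bits(rates, mean_isi):.2f} bits")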

  17. The role of auditory temporal cues in the fluency of stuttering adults

    Directory of Open Access Journals (Sweden)

    Juliana Furini

    ABSTRACT Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group - RG) and 15 without stuttering (Control Group - CG). The procedures were: audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (100 milliseconds delay, produced by Fono Tools software). Results: the DAF caused a significant improvement in the fluency of spontaneous speech in the RG when compared to speech under NAF. The effect of DAF was different in the CG, because it increased the common disfluencies and the total of disfluencies in spontaneous speech and reading, besides showing an increase in the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for the two listening conditions in the frequency of stuttering-like disfluencies and in the total of disfluencies, and in the syllable- and word-per-minute rates under NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter, without interfering with their speech rate. In non-stuttering adults it increased the number of common disfluencies and the total of disfluencies, and reduced speech rate in spontaneous speech and reading.

  18. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  19. A Comparative Study of Feature Selection Methods for the Discriminative Analysis of Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Chunren Lai

    2017-12-01

    It is crucial to differentiate patients with temporal lobe epilepsy (TLE) from the healthy population and determine abnormal brain regions in TLE. The cortical features and changes can reveal the unique anatomical patterns of brain regions from structural magnetic resonance (MR) images. In this study, structural MR images from 41 patients with left TLE, 34 patients with right TLE, and 58 normal controls (NC) were acquired, and four kinds of cortical measures, namely cortical thickness, cortical surface area, gray matter volume (GMV), and mean curvature, were explored for discriminative analysis. Three feature selection methods, including independent-sample t-test filtering, the sparse-constrained dimensionality reduction model (SCDRM), and support vector machine-recursive feature elimination (SVM-RFE), were investigated to extract dominant features among the compared groups for classification using the support vector machine (SVM) classifier. The results showed that the SVM-RFE achieved the highest performance (most classifications with more than 84% accuracy), followed by the SCDRM and the t-test. In particular, the surface area and GMV exhibited prominent discriminative ability, and the performance of the SVM was improved significantly when the four cortical measures were combined. Additionally, the dominant regions with higher classification weights were mainly located in the temporal and the frontal lobe, including the entorhinal cortex, rostral middle frontal, parahippocampal cortex, superior frontal, insula, and cuneus. This study concluded that the cortical features provided effective information for the recognition of abnormal anatomical patterns and the proposed methods had the potential to improve the clinical diagnosis of TLE.
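
    SVM-RFE as named above is available directly in scikit-learn by wrapping a linear SVM in recursive feature elimination. The sketch below runs it on a synthetic feature matrix standing in for the cortical measures; the sample size, feature count, number of retained features, and elimination step are illustrative assumptions, not the study's settings.

      # Sketch: SVM-based recursive feature elimination (SVM-RFE) with cross-validated accuracy.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import RFE
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # 75 "subjects" x 200 regional features, with only a subset carrying class information
      X, y = make_classification(n_samples=75, n_features=200, n_informative=15, random_state=0)

      selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=20, step=0.1)  # drop 10% per round
      pipeline = make_pipeline(StandardScaler(), selector, SVC(kernel="linear"))

      acc = cross_val_score(pipeline, X, y, cv=5).mean()
      print(f"Cross-validated accuracy with 20 RFE-selected features: {acc:.2f}")

      # Which features survived? Refit once on all data and inspect the selection mask.
      refit = make_pipeline(StandardScaler(),
                            RFE(SVC(kernel="linear"), n_features_to_select=20, step=0.1)).fit(X, y)
      selected = np.flatnonzero(refit[-1].support_)
      print("Indices of retained features:", selected)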

  20. Stimulus repetition and the perception of time: the effects of prior exposure on temporal discrimination, judgment, and production.

    Directory of Open Access Journals (Sweden)

    William J Matthews

    It has been suggested that repeated stimuli have shorter subjective duration than novel items, perhaps because of a reduction in the neural response to repeated presentations of the same object. Five experiments investigated the effects of repetition on time perception and found further evidence that immediate repetition reduces apparent duration, consistent with the idea that subjective duration is partly based on neural coding efficiency. In addition, the experiments found (a) no effect of repetition on the precision of temporal discrimination, (b) that the effects of repetition disappeared when there was a modest lag between presentations, (c) that, across participants, the size of the repetition effect correlated with temporal discrimination, and (d) that the effects of repetition suggested by a temporal production task were the opposite of those suggested by temporal judgments. The theoretical and practical implications of these results are discussed.

  1. Reorganization in processing of spectral and temporal input in the rat posterior auditory field induced by environmental enrichment

    Science.gov (United States)

    Jakkamsetti, Vikram; Chang, Kevin Q.

    2012-01-01

    Environmental enrichment induces powerful changes in the adult cerebral cortex. Studies in primary sensory cortex have observed that environmental enrichment modulates neuronal response strength, selectivity, speed of response, and synchronization to rapid sensory input. Other reports suggest that nonprimary sensory fields are more plastic than primary sensory cortex. The consequences of environmental enrichment on information processing in nonprimary sensory cortex have yet to be studied. Here we examine physiological effects of enrichment in the posterior auditory field (PAF), a field distinguished from primary auditory cortex (A1) by wider receptive fields, slower response times, and a greater preference for slowly modulated sounds. Environmental enrichment induced a significant increase in spectral and temporal selectivity in PAF. PAF neurons exhibited narrower receptive fields and responded significantly faster and for a briefer period to sounds after enrichment. Enrichment increased time-locking to rapidly successive sensory input in PAF neurons. Compared with previous enrichment studies in A1, we observe a greater magnitude of reorganization in PAF after environmental enrichment. Along with other reports observing greater reorganization in nonprimary sensory cortex, our results in PAF suggest that nonprimary fields might have a greater capacity for reorganization compared with primary fields. PMID:22131375

  2. The effect of a concurrent working memory task and temporal offsets on the integration of auditory and visual speech information.

    Science.gov (United States)

    Buchan, Julie N; Munhall, Kevin G

    2012-01-01

    Audiovisual speech perception is an everyday occurrence of multisensory integration. Conflicting visual speech information can influence the perception of acoustic speech (namely the McGurk effect), and auditory and visual speech are integrated over a rather wide range of temporal offsets. This research examined whether the addition of a concurrent cognitive load task would affect the audiovisual integration in a McGurk speech task and whether the cognitive load task would cause more interference at increasing offsets. The amount of integration was measured by the proportion of responses in incongruent trials that did not correspond to the audio (McGurk response). An eye-tracker was also used to examine whether the amount of temporal offset and the presence of a concurrent cognitive load task would influence gaze behavior. Results from this experiment show a very modest but statistically significant decrease in the number of McGurk responses when subjects also perform a cognitive load task, and that this effect is relatively constant across the various temporal offsets. Participants' gaze behavior was also influenced by the addition of a cognitive load task. Gaze was less centralized on the face, less time was spent looking at the mouth and more time was spent looking at the eyes, when a concurrent cognitive load task was added to the speech task.

  3. Discriminant features and temporal structure of nonmanuals in American Sign Language.

    Directory of Open Access Journals (Sweden)

    C Fabian Benitez-Quiroz

    To fully define the grammar of American Sign Language (ASL), a linguistic model of its nonmanuals needs to be constructed. While significant progress has been made to understand the features defining ASL manuals, after years of research, much still needs to be done to uncover the discriminant nonmanual components. The major barrier to achieving this goal is the difficulty in correlating facial features and linguistic features, especially since these correlations may be temporally defined. For example, a facial feature (e.g., head moves down) occurring at the end of the movement of another facial feature (e.g., brows move up) may specify a Hypothetical conditional, but only if this time relationship is maintained. In other instances, the single occurrence of a movement (e.g., brows move up) can be indicative of the same grammatical construction. In the present paper, we introduce a linguistic-computational approach to efficiently carry out this analysis. First, a linguistic model of the face is used to manually annotate a very large set of 2,347 videos of ASL nonmanuals (including tens of thousands of frames). Second, a computational approach is used to determine which features of the linguistic model are more informative of the grammatical rules under study. We used the proposed approach to study five types of sentences (Hypothetical conditionals, Yes/no questions, Wh-questions, Wh-questions postposed, and Assertions) plus their polarities (positive and negative). Our results verify several components of the standard model of ASL nonmanuals and, most importantly, identify several previously unreported features and their temporal relationship. Notably, our results uncovered a complex interaction between head position and mouth shape. These findings define some temporal structures of ASL nonmanuals not previously detected by other approaches.

  4. Behavioral determination of stimulus pair discrimination of auditory acoustic and electrical stimuli using a classical conditioning and heart-rate approach.

    Science.gov (United States)

    Morgan, Simeon J; Paolini, Antonio G

    2012-06-06

    Acute animal preparations have been used in research prospectively investigating electrode designs and stimulation techniques for integration into neural auditory prostheses, such as auditory brainstem implants and auditory midbrain implants. While acute experiments can give initial insight to the effectiveness of the implant, testing the chronically implanted and awake animals provides the advantage of examining the psychophysical properties of the sensations induced using implanted devices. Several techniques such as reward-based operant conditioning, conditioned avoidance, or classical fear conditioning have been used to provide behavioral confirmation of detection of a relevant stimulus attribute. Selection of a technique involves balancing aspects including time efficiency (often poor in reward-based approaches), the ability to test a plurality of stimulus attributes simultaneously (limited in conditioned avoidance), and measure reliability of repeated stimuli (a potential constraint when physiological measures are employed). Here, a classical fear conditioning behavioral method is presented which may be used to simultaneously test both detection of a stimulus, and discrimination between two stimuli. Heart-rate is used as a measure of fear response, which reduces or eliminates the requirement for time-consuming video coding for freeze behaviour or other such measures (although such measures could be included to provide convergent evidence). Animals were conditioned using these techniques in three 2-hour conditioning sessions, each providing 48 stimulus trials. Subsequent 48-trial testing sessions were then used to test for detection of each stimulus in presented pairs, and test discrimination between the member stimuli of each pair. This behavioral method is presented in the context of its utilisation in auditory prosthetic research. The implantation of electrocardiogram telemetry devices is shown. Subsequent implantation of brain electrodes into the Cochlear

  5. Temporal Lobe Lesions and Perception of Species-Specific Vocalizations by Macaques

    Science.gov (United States)

    Heffner, Henry E.; Heffner, Rickye S.

    1984-10-01

    Japanese macaques were trained to discriminate two forms of their coo vocalization before and after unilateral and bilateral ablation of the temporal cortex. Unilateral ablation of the left superior temporal gyrus, including auditory cortex, resulted in an initial impairment in the discrimination, but similar unilateral ablation of the right superior temporal gyrus had no effect. Bilateral temporal lesions including auditory cortex completely abolished the ability of the animals to discriminate their coos. Neither unilateral nor bilateral ablation of cortex dorsal to and sparing the auditory cortex had any effect on the discrimination. The perception of species-specific vocalizations by Japanese macaques seems to be mediated by the temporal cortex, with the left hemisphere playing a predominant role.

  6. Monkey’s short-term auditory memory nearly abolished by combined removal of the rostral superior temporal gyrus and rhinal cortices

    Science.gov (United States)

    Fritz, Jonathan B.; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C.

    2016-01-01

    While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30–40 seconds to a duration of ~1–2 seconds, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. PMID:26707975

  7. Age-group differences in speech identification despite matched audiometrically normal hearing: contributions from auditory temporal processing and cognition

    Science.gov (United States)

    Füllgrabe, Christian; Moore, Brian C. J.; Stone, Michael A.

    2015-01-01

    Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60–79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125–6 kHz were matched to nine young (YNH; 18–27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5–180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in
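
    The predictive claim above (composite cognition and TFS measures predicting speech-in-noise scores) is the kind of relationship usually tested with a simple multiple regression. The sketch below shows that computation on simulated z-scored composites; the variables, effect sizes, and sample size are invented for illustration and are not the study's data.

      # Sketch: multiple regression of speech-in-noise (SiN) scores on two composite predictors.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(4)
      n = 30
      tfs_sensitivity = rng.standard_normal(n)     # composite temporal-fine-structure score (z)
      cognition = rng.standard_normal(n)           # composite cognitive score (z)
      sin_score = 0.5 * tfs_sensitivity + 0.4 * cognition + rng.normal(0, 0.5, n)

      X = np.column_stack([tfs_sensitivity, cognition])
      model = LinearRegression().fit(X, sin_score)
      print("Coefficients (TFS, cognition):", np.round(model.coef_, 2))
      print("Variance explained (R^2):", round(model.score(X, sin_score), 2))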

  8. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    … Noisy, loosely structured classrooms could be very frustrating. Auditory memory problems: This is when a child has difficulty remembering information such as directions, lists, or study materials. … Auditory discrimination problems: This is when a child has …

  9. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  10. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  11. The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke.

    Science.gov (United States)

    Leff, Alexander P; Schofield, Thomas M; Crinion, Jennifer T; Seghier, Mohamed L; Grogan, Alice; Green, David W; Price, Cathy J

    2009-12-01

    Competing theories of short-term memory function make specific predictions about the functional anatomy of auditory short-term memory and its role in language comprehension. We analysed high-resolution structural magnetic resonance images from 210 stroke patients and employed a novel voxel based analysis to test the relationship between auditory short-term memory and speech comprehension. Using digit span as an index of auditory short-term memory capacity we found that the structural integrity of a posterior region of the superior temporal gyrus and sulcus predicted auditory short-term memory capacity, even when performance on a range of other measures was factored out. We show that the integrity of this region also predicts the ability to comprehend spoken sentences. Our results therefore support cognitive models that posit a shared substrate between auditory short-term memory capacity and speech comprehension ability. The method applied here will be particularly useful for modelling structure-function relationships within other complex cognitive domains.

  12. Repeated measurements of cerebral blood flow in the left superior temporal gyrus reveal tonic hyperactivity in patients with auditory verbal hallucinations: A possible trait marker

    Directory of Open Access Journals (Sweden)

    Philipp Homan

    2013-06-01

    Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent treatment with transcranial magnetic stimulation (TMS) between the first two measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left STG and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.

  13. Auditory verbal hallucinations are related to cortical thinning in the left middle temporal gyrus of patients with schizophrenia.

    Science.gov (United States)

    Cui, Y; Liu, B; Song, M; Lipnicki, D M; Li, J; Xie, S; Chen, Y; Li, P; Lu, L; Lv, L; Wang, H; Yan, H; Yan, J; Zhang, H; Zhang, D; Jiang, T

    2018-01-01

    Auditory verbal hallucinations (AVHs) are one of the most common and severe symptoms of schizophrenia, but the neuroanatomical abnormalities underlying AVHs are not well understood. The present study aims to investigate whether AVHs are associated with cortical thinning. Participants were schizophrenia patients from four centers across China, 115 with AVHs and 93 without AVHs, as well as 261 healthy controls. All received 3 T T1-weighted brain scans, and whole brain vertex-wise cortical thickness was compared across groups. Correlations between AVH severity and cortical thickness were also determined. The left middle part of the middle temporal gyrus (MTG) was significantly thinner in schizophrenia patients with AVHs than in patients without AVHs and healthy controls. Inferences were made using a false discovery rate approach with a threshold at p < 0.05. Left MTG thickness did not differ between patients without AVHs and controls. These results were replicated by a meta-analysis showing them to be consistent across the four centers. Cortical thickness of the left MTG was also found to be inversely correlated with hallucination severity across all schizophrenia patients. The results of this multi-center study suggest that an abnormally thin left MTG could be involved in the pathogenesis of AVHs in schizophrenia.
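    The vertex-wise group comparison above is thresholded with a false discovery rate (FDR) procedure at p < 0.05. The record does not say which FDR variant was used, so the following is only a minimal sketch of the standard Benjamini-Hochberg step-up procedure applied to a vector of vertex-wise p-values (the simulated p-values are purely illustrative):

```python
# Sketch: Benjamini-Hochberg FDR thresholding of vertex-wise p-values.
# Assumption: the study's exact FDR variant is not specified in the abstract;
# this is the standard BH procedure, not necessarily the authors' implementation.
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Return a boolean mask of p-values surviving BH-FDR at level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    ranked = p[order]
    n = len(p)
    # Largest k such that p_(k) <= (k / n) * q
    thresh = (np.arange(1, n + 1) / n) * q
    below = ranked <= thresh
    survive = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        survive[order[:k + 1]] = True
    return survive

# Example with simulated vertex-wise p-values
rng = np.random.default_rng(0)
pvals = np.concatenate([rng.uniform(0, 0.001, 50),   # "true" effects
                        rng.uniform(0, 1, 5000)])    # null vertices
print(fdr_bh(pvals).sum(), "vertices survive FDR at q = 0.05")
```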

  14. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    International Nuclear Information System (INIS)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Oppenheim, C.; Rizzi, L.

    2009-01-01

    Priming effects have been well documented in behavioral psycho-linguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  15. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycho-linguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  16. Visual and auditory perception in preschool children at risk for dyslexia.

    Science.gov (United States)

    Ortiz, Rosario; Estévez, Adelina; Muñetón, Mercedes; Domínguez, Carolina

    2014-11-01

    Recently, there has been renewed interest in perceptive problems of dyslexics. A polemic research issue in this area has been the nature of the perception deficit. Another issue is the causal role of this deficit in dyslexia. Most studies have been carried out in adult and child literates; consequently, the observed deficits may be the result rather than the cause of dyslexia. This study addresses these issues by examining visual and auditory perception in children at risk for dyslexia. We compared children from preschool with and without risk for dyslexia in auditory and visual temporal order judgment tasks and same-different discrimination tasks. Identical visual and auditory, linguistic and nonlinguistic stimuli were presented in both tasks. The results revealed that the visual as well as the auditory perception of children at risk for dyslexia is impaired. The comparison between groups in auditory and visual perception shows that the achievement of children at risk was lower than children without risk for dyslexia in the temporal tasks. There were no differences between groups in auditory discrimination tasks. The difficulties of children at risk in visual and auditory perceptive processing affected both linguistic and nonlinguistic stimuli. Our conclusions are that children at risk for dyslexia show auditory and visual perceptive deficits for linguistic and nonlinguistic stimuli. The auditory impairment may be explained by temporal processing problems and these problems are more serious for processing language than for processing other auditory stimuli. These visual and auditory perceptive deficits are not the consequence of failing to learn to read, thus, these findings support the theory of temporal processing deficit. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  18. Temporal discrimination threshold: VBM evidence for an endophenotype in adult onset primary torsion dystonia.

    Science.gov (United States)

    Bradley, D; Whelan, R; Walsh, R; Reilly, R B; Hutchinson, S; Molloy, F; Hutchinson, M

    2009-09-01

    Familial adult-onset primary torsion dystonia is an autosomal dominant disorder with markedly reduced penetrance. Most adult-onset primary torsion dystonia patients are sporadic cases. Disordered sensory processing is found in adult-onset primary torsion dystonia patients; if also present in their unaffected relatives this abnormality may indicate non-manifesting gene carriage. Temporal discrimination thresholds (TDTs) are abnormal in adult-onset primary torsion dystonia, but their utility as a possible endophenotype has not been examined. We examined 35 adult-onset primary torsion dystonia patients (17 familial, 18 sporadic), 42 unaffected first-degree relatives of both familial and sporadic adult-onset primary torsion dystonia patients, 32 unaffected second-degree relatives of familial adult-onset primary torsion dystonia (AOPTD) patients and 43 control subjects. TDT was measured using visual and tactile stimuli. In 33 unaffected relatives, voxel-based morphometry was used to compare putaminal volumes between relatives with abnormal and normal TDTs. The mean TDT in 26 control subjects under 50 years of age was 22.85 ms (SD 8.00; 95% CI: 19.62-26.09 ms). The mean TDT in 17 control subjects over 50 years was 30.87 ms (SD 5.48; 95% CI: 28.05-33.69 ms). The upper limit of normal, defined as control mean + 2.5 SD, was 42.86 ms in the under 50 years group and 44.58 ms in the over 50 years group. Thirty out of thirty-five (86%) AOPTD patients had abnormal TDTs with similar frequencies of abnormalities in sporadic and familial patients. Twenty-two out of forty-two (52%) unaffected first-degree relatives had abnormal TDTs with similar frequencies in relatives of sporadic and familial AOPTD patients. Abnormal TDTs were found in 16/32 (50%) of second-degree relatives. Voxel-based morphometry analysis comparing 13 unaffected relatives with abnormal TDTs and 20 with normal TDTs demonstrated a bilateral increase in putaminal grey matter in unaffected relatives with abnormal
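    The abnormality cutoff above is simply the age-matched control mean plus 2.5 standard deviations. A quick check of that arithmetic with the control values quoted in the abstract (a sketch; small discrepancies with the published limits presumably reflect rounding of the reported means and SDs):

```python
# Reproduce the reported upper limits of normal (control mean + 2.5 SD) for the TDT,
# using the control values quoted in the abstract.
def upper_limit_of_normal(mean_ms, sd_ms, k=2.5):
    return mean_ms + k * sd_ms

print(upper_limit_of_normal(22.85, 8.00))   # under-50 controls -> 42.85 ms (reported: 42.86 ms)
print(upper_limit_of_normal(30.87, 5.48))   # over-50 controls  -> 44.57 ms (reported: 44.58 ms)
```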

  19. Temporal discrimination threshold: VBM evidence for an endophenotype in adult onset primary torsion dystonia.

    LENUS (Irish Health Repository)

    Bradley, D

    2012-02-01

    Familial adult-onset primary torsion dystonia is an autosomal dominant disorder with markedly reduced penetrance. Most adult-onset primary torsion dystonia patients are sporadic cases. Disordered sensory processing is found in adult-onset primary torsion dystonia patients; if also present in their unaffected relatives this abnormality may indicate non-manifesting gene carriage. Temporal discrimination thresholds (TDTs) are abnormal in adult-onset primary torsion dystonia, but their utility as a possible endophenotype has not been examined. We examined 35 adult-onset primary torsion dystonia patients (17 familial, 18 sporadic), 42 unaffected first-degree relatives of both familial and sporadic adult-onset primary torsion dystonia patients, 32 unaffected second-degree relatives of familial adult-onset primary torsion dystonia (AOPTD) patients and 43 control subjects. TDT was measured using visual and tactile stimuli. In 33 unaffected relatives, voxel-based morphometry was used to compare putaminal volumes between relatives with abnormal and normal TDTs. The mean TDT in 26 control subjects under 50 years of age was 22.85 ms (SD 8.00; 95% CI: 19.62-26.09 ms). The mean TDT in 17 control subjects over 50 years was 30.87 ms (SD 5.48; 95% CI: 28.05-33.69 ms). The upper limit of normal, defined as control mean + 2.5 SD, was 42.86 ms in the under 50 years group and 44.58 ms in the over 50 years group. Thirty out of thirty-five (86%) AOPTD patients had abnormal TDTs with similar frequencies of abnormalities in sporadic and familial patients. Twenty-two out of forty-two (52%) unaffected first-degree relatives had abnormal TDTs with similar frequencies in relatives of sporadic and familial AOPTD patients. Abnormal TDTs were found in 16/32 (50%) of second-degree relatives. Voxel-based morphometry analysis comparing 13 unaffected relatives with abnormal TDTs and 20 with normal TDTs demonstrated a bilateral increase in putaminal grey matter in unaffected relatives with abnormal

  20. Screening LGI1 in a cohort of 26 lateral temporal lobe epilepsy patients with auditory aura from Turkey detects a novel de novo mutation.

    Science.gov (United States)

    Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul

    2016-02-01

    Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an epileptic syndrome, inherited in an autosomal dominant fashion, characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were afterwards screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and of the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in subsequent generations, which is especially important for genetic counselling. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Audio-visual temporal perception in children with restored hearing.

    Science.gov (United States)

    Gori, Monica; Chilosi, Anna; Forli, Francesca; Burr, David

    2017-05-01

    It is not clear how audio-visual temporal perception develops in children with restored hearing. In this study we measured temporal discrimination thresholds with an audio-visual temporal bisection task in 9 deaf children with restored audition, and 22 typically hearing children. In typically hearing children, audition was more precise than vision, with no gain in multisensory conditions (as previously reported in Gori et al. (2012b)). However, deaf children with restored audition showed similar auditory and visual thresholds, and some evidence of gain in audio-visual temporal multisensory conditions. Interestingly, we found a strong correlation between auditory weighting of multisensory signals and quality of language: patients who gave more weight to audition had better language skills. Similarly, auditory thresholds for the temporal bisection task were also a good predictor of language skills. This result supports the idea that temporal auditory processing is associated with language development. Copyright © 2017. Published by Elsevier Ltd.
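    The 'auditory weighting of multisensory signals' mentioned above is commonly quantified with the reliability-weighted (maximum-likelihood) cue-combination model used in this literature; this record does not state the exact analysis, so the following is only an illustrative sketch with made-up threshold values:

```python
# Sketch of reliability-weighted (MLE) cue combination, often used to define
# the auditory weight in audio-visual temporal bisection tasks.
# The threshold values below are invented for illustration, not data from the study.
import numpy as np

def mle_weights_and_threshold(sigma_a, sigma_v):
    """Predicted auditory/visual weights and bimodal threshold from unimodal thresholds."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)      # weight is inversely proportional to variance
    w_v = 1.0 - w_a
    sigma_av = np.sqrt((sigma_a**2 * sigma_v**2) / (sigma_a**2 + sigma_v**2))
    return w_a, w_v, sigma_av

w_a, w_v, sigma_av = mle_weights_and_threshold(sigma_a=0.25, sigma_v=0.60)  # seconds (hypothetical)
print(f"auditory weight = {w_a:.2f}, visual weight = {w_v:.2f}, predicted AV threshold = {sigma_av:.2f} s")
```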

  2. Spectro-temporal modulation sensitivity and discrimination in normal hearing and hearing -impaired listeners

    DEFF Research Database (Denmark)

    Sanchez Lopez, Raul; Fereczkowski, Michal; Santurette, Sébastien

    When a signal varies in its properties along time and frequency, this is considered a modulation. Speech signals exhibit temporal and spectral modulations. The sensitivity to these modulations has been studied in normal-hearing (NH) listeners, yielding temporal, spectral and spectro-temporal modulation transfer functions (Dau et al. 1997, Eddins & Bero 2007, Chi et al. 1999). Recently, Mehraei et al. (2014) showed significant differences between normal-hearing and hearing-impaired (HI) listeners in spectro-temporal modulation (STM) detection, as well as a relation between STM sensitivity [...]
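    Spectro-temporal modulation (STM) sensitivity of this kind is typically measured with 'moving ripple' stimuli, i.e., broadband carriers whose spectral envelope drifts across log-frequency at a given ripple density (cycles/octave) and velocity (Hz). The record does not include stimulus code, so the generator below is only a rough sketch under those standard assumptions, with illustrative parameter values:

```python
# Minimal sketch of a moving-ripple (spectro-temporal modulation) stimulus:
# a bank of log-spaced tones whose amplitudes follow a drifting sinusoidal
# spectral envelope. Parameters are illustrative, not taken from the studies cited.
import numpy as np

def moving_ripple(fs=44100, dur=1.0, f0=250.0, octaves=5, n_tones=200,
                  density=2.0, rate=4.0, depth=0.9, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    x = np.linspace(0, octaves, n_tones)            # tone positions in octaves re f0
    freqs = f0 * 2**x
    phases = 2 * np.pi * rng.random(n_tones)        # random carrier phases
    sig = np.zeros_like(t)
    for fi, xi, ph in zip(freqs, x, phases):
        # spectral envelope drifting at `rate` Hz with `density` cycles/octave
        env = 1 + depth * np.sin(2 * np.pi * (rate * t + density * xi))
        sig += env * np.sin(2 * np.pi * fi * t + ph)
    return sig / np.max(np.abs(sig))

stim = moving_ripple()   # 1-s ripple: 2 cycles/octave, drifting at 4 Hz
```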

  3. Auditory-model based assessment of the effects of hearing loss and hearing-aid compression on spectral and temporal resolution

    DEFF Research Database (Denmark)

    Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf

    2016-01-01

    Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means of auditory modelling. Outcomes were simulated using the auditory processing model of Jepsen et al. (2008) with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.

  4. Evaluation of temporal bone pneumatization on high resolution CT (HRCT) measurements of the temporal bone in normal and otitis media group and their correlation to measurements of internal auditory meatus, vestibular or cochlear aqueduct

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Miyako

    1988-07-01

    High-resolution CT axial scans were made at three levels of the temporal bone in 91 cases. These cases comprised 109 sides with normal pneumatization (NR group) and 73 sides with poor pneumatization resulting from chronic otitis media (OM group). The NR group included sides with sensorineural hearing loss and/or sudden deafness. The three levels of continuous slicing were chosen at the internal auditory meatus, the vestibular aqueduct and the cochlear aqueduct, respectively. In each slice, two sagittal and two horizontal measurements were made on the outer contour of the temporal bone. At the appropriate level, the diameter and length of the internal auditory meatus, the vestibular aqueduct or the cochlear aqueduct were also measured. Measurements of the temporal bone showed a statistically significant difference between the NR and OM groups. Correlations of both the diameter and the length of the internal auditory meatus with the temporal bone measurements were statistically significant. Neither the vestibular nor the cochlear aqueduct measurements showed any significant correlation with those of the temporal bone.

  5. The effects of incidentally learned temporal and spatial predictability on response times and visual fixations during target detection and discrimination.

    Science.gov (United States)

    Beck, Melissa R; Hong, S Lee; van Lamsweerde, Amanda E; Ericson, Justin M

    2014-01-01

    Responses are quicker to predictable stimuli than if the time and place of appearance is uncertain. Studies that manipulate target predictability often involve overt cues to speed up response times. However, less is known about whether individuals will exhibit faster response times when target predictability is embedded within the inter-trial relationships. The current research examined the combined effects of spatial and temporal target predictability on reaction time (RT) and allocation of overt attention in a sustained attention task. Participants responded as quickly as possible to stimuli while their RT and eye movements were measured. Target temporal and spatial predictability were manipulated by altering the number of: 1) different time intervals between a response and the next target; and 2) possible spatial locations of the target. The effects of target predictability on target detection (Experiment 1) and target discrimination (Experiment 2) were tested. For both experiments, shorter RTs as target predictability increased across both space and time were found. In addition, the influences of spatial and temporal target predictability on RT and the overt allocation of attention were task dependent; suggesting that effective orienting of attention relies on both spatial and temporal predictability. These results indicate that stimulus predictability can be increased without overt cues and detected purely through inter-trial relationships over the course of repeated stimulus presentations.

  6. The effects of incidentally learned temporal and spatial predictability on response times and visual fixations during target detection and discrimination.

    Directory of Open Access Journals (Sweden)

    Melissa R Beck

    Full Text Available Responses are quicker to predictable stimuli than if the time and place of appearance is uncertain. Studies that manipulate target predictability often involve overt cues to speed up response times. However, less is known about whether individuals will exhibit faster response times when target predictability is embedded within the inter-trial relationships. The current research examined the combined effects of spatial and temporal target predictability on reaction time (RT) and allocation of overt attention in a sustained attention task. Participants responded as quickly as possible to stimuli while their RT and eye movements were measured. Target temporal and spatial predictability were manipulated by altering the number of: 1) different time intervals between a response and the next target; and 2) possible spatial locations of the target. The effects of target predictability on target detection (Experiment 1) and target discrimination (Experiment 2) were tested. For both experiments, shorter RTs as target predictability increased across both space and time were found. In addition, the influences of spatial and temporal target predictability on RT and the overt allocation of attention were task dependent; suggesting that effective orienting of attention relies on both spatial and temporal predictability. These results indicate that stimulus predictability can be increased without overt cues and detected purely through inter-trial relationships over the course of repeated stimulus presentations.

  7. Functional connectivity in the dorsal stream and between bilateral auditory-related cortical areas differentially contribute to speech decoding depending on spectro-temporal signal integrity and performance.

    Science.gov (United States)

    Elmer, Stefan; Kühnis, Jürg; Rauch, Piyush; Abolfazl Valizadeh, Seyed; Jäncke, Lutz

    2017-11-01

    Speech processing relies on the interdependence between auditory perception, sensorimotor integration, and verbal memory functions. Functional and structural connectivity between bilateral auditory-related cortical areas (ARCAs) facilitates spectro-temporal analyses, whereas the dynamic interplay between ARCAs and Broca's area (i.e., dorsal pathway) contributes to verbal memory functions, articulation, and sound-to-motor mapping. However, it remains unclear whether these two neural circuits are preferentially driven by spectral or temporal acoustic information, and whether their recruitment is predictive of speech perception performance and learning. Therefore, we evaluated EEG-based intracranial (eLORETA) functional connectivity (lagged coherence) in both pathways (i.e., between bilateral ARCAs and in the dorsal stream) while good- (GPs, N = 12) and poor performers (PPs, N = 13) learned to decode natural pseudowords (CLEAN) or comparable items (speech-noise chimeras) manipulated in the envelope (ENV) or in the fine-structure (FS). Learning to decode degraded speech was generally associated with increased functional connectivity in the theta, alpha, and beta frequency range in both circuits. Furthermore, GPs exhibited increased connectivity in the left dorsal stream compared to PPs, but only during the FS condition and in the theta frequency band. These results suggest that both pathways contribute to the decoding of spectro-temporally degraded speech by increasing the communication between brain regions involved in perceptual analyses and verbal memory functions. In addition, the left-hemispheric recruitment of the dorsal stream in GPs during the FS condition points to a contribution of this pathway to articulatory-based memory processes that are dependent on the temporal integrity of the speech signal. These results enable a better understanding of the neural circuits underlying word learning as a function of temporal and spectral signal integrity and performance.
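    The connectivity measure in this study is eLORETA-based lagged coherence computed on source-reconstructed EEG, which is designed to discount zero-lag (volume-conduction) coupling. Reproducing that pipeline is beyond a short example; the sketch below only illustrates the simpler idea of frequency-resolved coupling between two signals using ordinary magnitude-squared coherence on simulated data, and should not be read as the authors' method:

```python
# Simplified illustration of frequency-resolved coupling between two signals
# using magnitude-squared coherence (scipy). The study itself used eLORETA
# lagged coherence on source-reconstructed EEG, which removes zero-lag
# (volume-conduction) contributions; this sketch does not.
import numpy as np
from scipy.signal import coherence

fs = 250                                  # Hz, a typical EEG sampling rate (assumption)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)
theta = np.sin(2 * np.pi * 6 * t)         # shared 6-Hz (theta-band) component
x = theta + rng.normal(size=t.size)                 # simulated "ARCA" signal
y = np.roll(theta, 10) + rng.normal(size=t.size)    # simulated "Broca" signal, lagged copy

f, Cxy = coherence(x, y, fs=fs, nperseg=2 * fs)     # 2-s Welch segments
print(f"coherence near 6 Hz: {Cxy[np.argmin(np.abs(f - 6))]:.2f}")
```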

  8. Collective synchronization of self/non-self discrimination in T cell activation, across multiple spatio-temporal scales

    Science.gov (United States)

    Altan-Bonnet, Gregoire

    The immune system is a collection of cells whose function is to eradicate pathogenic infections and malignant tumors while protecting healthy tissues. Recent work has delineated key molecular and cellular mechanisms associated with the ability to discriminate self from non-self agents. For example, structural studies have quantified the biophysical characteristics of antigenic molecules (those prone to trigger lymphocyte activation and a subsequent immune response). However, such molecular mechanisms were found to be highly unreliable at the individual cellular level. We will present recent efforts to build experimentally validated computational models of the immune responses at the collective cell level. Such models have become critical to delineate how higher-level integration through nonlinear amplification in signal transduction, dynamic feedback in lymphocyte differentiation and cell-to-cell communication allows the immune system to enforce reliable self/non-self discrimination at the organism level. In particular, we will present recent results demonstrating how T cells tune their antigen discrimination according to cytokine cues, and how competition for cytokine within polyclonal populations of cells shape the repertoire of responding clones. Additionally, we will present recent theoretical and experimental results demonstrating how competition between diffusion and consumption of cytokines determine the range of cell-cell communications within lymphoid organs. Finally, we will discuss how biochemically explicit models, combined with quantitative experimental validation, unravel the relevance of new feedbacks for immune regulations across multiple spatial and temporal scales.
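    The closing point, that competition between diffusion and consumption of cytokines sets the spatial range of cell-cell communication, is usually captured by a simple reaction-diffusion scaling argument. The symbols below (diffusion coefficient D, consumption rate k) are generic and not taken from the work described:

```latex
% Steady-state diffusion-consumption of a cytokine concentration c(r):
% faster consumption (larger k) shortens the communication range lambda.
\[
  D\,\nabla^{2}c - k\,c = 0
  \quad\Longrightarrow\quad
  c(r) \sim c_{0}\, e^{-r/\lambda},
  \qquad
  \lambda = \sqrt{\tfrac{D}{k}}
\]
```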

  9. A deafening flash! Visual interference of auditory signal detection.

    Science.gov (United States)

    Fassnidge, Christopher; Cecconi Marcotti, Claudia; Freeman, Elliot

    2017-03-01

    In some people, visual stimulation evokes auditory sensations. How prevalent and how perceptually real is this? 22% of our neurotypical adult participants responded 'Yes' when asked whether they heard faint sounds accompanying flash stimuli, and showed significantly better ability to discriminate visual 'Morse-code' sequences. This benefit might arise from an ability to recode visual signals as sounds, thus taking advantage of superior temporal acuity of audition. In support of this, those who showed better visual relative to auditory sequence discrimination also had poorer auditory detection in the presence of uninformative visual flashes, though this was independent of awareness of visually-evoked sounds. Thus a visually-evoked auditory representation may occur subliminally and disrupt detection of real auditory signals. The frequent natural correlation between visual and auditory stimuli might explain the surprising prevalence of this phenomenon. Overall, our results suggest that learned correspondences between strongly correlated modalities may provide a precursor for some synaesthetic abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. The Effect of Delayed Auditory Feedback on Activity in the Temporal Lobe while Speaking: A Positron Emission Tomography Study

    Science.gov (United States)

    Takaso, Hideki; Eisner, Frank; Wise, Richard J. S.; Scott, Sophie K.

    2010-01-01

    Purpose: Delayed auditory feedback is a technique that can improve fluency in stutterers, while disrupting fluency in many nonstuttering individuals. The aim of this study was to determine the neural basis for the detection of and compensation for such a delay, and the effects of increases in the delay duration. Method: Positron emission…

  11. Temporal code in the vibrissal system-Part II: Roughness surface discrimination

    Energy Technology Data Exchange (ETDEWEB)

    Farfan, F D [Departamento de Bioingeniería, FACET, Universidad Nacional de Tucuman, INSIBIO - CONICET, CC 327, Postal Code CP 4000 (Argentina); Albarracín, A L [Catedra de Neurociencias, Facultad de Medicina, Universidad Nacional de Tucuman (Argentina); Felice, C J [Departamento de Bioingeniería, FACET, Universidad Nacional de Tucuman, INSIBIO - CONICET, CC 327, Postal Code CP 4000 (Argentina)

    2007-11-15

    Previous works have proposed hypotheses about the neural code of the tactile system in the rat. One is based on the physical characteristics of the vibrissae, such as their resonance frequencies; another is based on discharge patterns in the trigeminal ganglion. In this work, the purpose is to find a temporal code by analyzing the afferent signals of two vibrissal nerves while the vibrissae sweep surfaces of different roughness. Two levels of pressure were used between the vibrissa and the contact surface. We analyzed the afferent discharge of the Delta and Gamma vibrissal nerves. The vibrissae movements were produced by electrical stimulation of the facial nerve. The afferent signals were analyzed using an event-detection algorithm based on the Continuous Wavelet Transform (CWT). The algorithm was able to detect events of different durations. The inter-event times detected were calculated for each situation and represented in box plots. This work allowed us to establish the existence of a temporal code at the peripheral level.
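    The event-detection algorithm itself is not given in this record, so the following is only a rough sketch of the general approach it names: compute a continuous wavelet transform (here by direct convolution with Ricker wavelets at a few scales), threshold the pooled coefficient magnitude, and extract inter-event intervals. All parameters and the synthetic test signal are assumptions:

```python
# Rough sketch of CWT-based event detection on a 1-D afferent-like signal:
# convolve with Ricker (Mexican-hat) wavelets at several scales, threshold the
# pooled coefficient magnitude, and compute inter-event intervals.
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter a."""
    t = np.arange(points) - (points - 1) / 2.0
    A = 2 / (np.sqrt(3 * a) * np.pi**0.25)
    return A * (1 - (t / a) ** 2) * np.exp(-(t**2) / (2 * a**2))

def detect_events(signal, fs, scales=(2, 4, 8), k=4.0, refractory=0.003):
    """Return event times (s) where the pooled CWT magnitude exceeds k * std."""
    mags = []
    for a in scales:
        w = ricker(int(10 * a) | 1, a)                  # odd-length kernel per scale
        mags.append(np.abs(np.convolve(signal, w, mode="same")))
    pooled = np.max(mags, axis=0)                       # max magnitude across scales
    idx = np.flatnonzero(pooled > k * pooled.std())
    events, last = [], -np.inf
    for i in idx:                                       # enforce a refractory period
        t = i / fs
        if t - last >= refractory:
            events.append(t)
            last = t
    return np.array(events)

# Synthetic demo: noisy trace with brief transients every ~20 ms
fs = 20000
rng = np.random.default_rng(0)
t = np.arange(0, 0.2, 1 / fs)
sig = 0.1 * rng.standard_normal(t.size)
sig[np.arange(10) * int(0.02 * fs)] += 3.0
ev = detect_events(sig, fs)
print("inter-event intervals (ms):", np.round(np.diff(ev) * 1e3, 2))
```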

  12. The Effects of Static and Dynamic Visual Representations as Aids for Primary School Children in Tasks of Auditory Discrimination of Sound Patterns. An Intervention-based Study.

    Directory of Open Access Journals (Sweden)

    Jesus Tejada

    2018-02-01

    Full Text Available It has been proposed that non-conventional presentations of visual information could be very useful as a scaffolding strategy in the learning of Western music notation. As a result, this study has attempted to determine if there is any effect of static and dynamic presentation modes of visual information in the recognition of sound patterns. An intervention-based quasi-experimental design was adopted with two groups of fifth-grade students in a Spanish city. Students did tasks involving discrimination, auditory recognition and symbolic association of the sound patterns with non-musical representations, either static images (S group) or dynamic images (D group). The results showed neither statistically significant differences in the scores of D and S, nor influence of the covariates on the dependent variable, although statistically significant intra-group differences were found for both groups. This suggests that both types of graphic formats could be effective as digital learning mediators in the learning of Western musical notation.

  13. Temporal discrimination thresholds in adult-onset primary torsion dystonia: an analysis by task type and by dystonia phenotype.

    LENUS (Irish Health Repository)

    Bradley, D

    2012-01-01

    Adult-onset primary torsion dystonia (AOPTD) is an autosomal dominant disorder with markedly reduced penetrance. Sensory abnormalities are present in AOPTD and also in unaffected relatives, possibly indicating non-manifesting gene carriage (acting as an endophenotype). The temporal discrimination threshold (TDT) is the shortest time interval at which two stimuli are detected to be asynchronous. We aimed to compare the sensitivity and specificity of three different TDT tasks (visual, tactile and mixed/visual-tactile). We also aimed to examine the sensitivity of TDTs in different AOPTD phenotypes. To examine tasks, we tested TDT in 41 patients and 51 controls using visual (2 lights), tactile (non-painful electrical stimulation) and mixed (1 light, 1 electrical) stimuli. To investigate phenotypes, we examined 71 AOPTD patients (37 cervical dystonia, 14 writer's cramp, 9 blepharospasm, 11 spasmodic dysphonia) and 8 musician's dystonia patients. The upper limit of normal was defined as control mean +2.5 SD. In dystonia patients, the visual task detected abnormalities in 35/41 (85%), the tactile task in 35/41 (85%) and the mixed task in 26/41 (63%); the mixed task was less sensitive than the other two (p = 0.04). Specificity was 100% for the visual and tactile tasks. Abnormal TDTs were found in 36 of 37 (97.3%) cervical dystonia, 12 of 14 (85.7%) writer's cramp, 8 of 9 (88.8%) blepharospasm, 10 of 11 (90.1%) spasmodic dysphonia patients and 5 of 8 (62.5%) musicians. The visual and tactile tasks were found to be more sensitive than the mixed task. Temporal discrimination threshold results were comparable across common adult-onset primary torsion dystonia phenotypes, with lower sensitivity in the musicians.
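    The task sensitivities quoted above are simple proportions of dystonia patients whose TDT exceeded the control-based cutoff, with specificity taken from the control group. A minimal recomputation from the counts in the abstract (the statistical test used for the task comparison is not specified here, so none is attempted):

```python
# Recompute task sensitivities (proportion of dystonia patients with abnormal TDTs)
# from the counts given in the abstract.
def proportion(k, n):
    return k / n

tasks = {"visual": (35, 41), "tactile": (35, 41), "mixed": (26, 41)}
for name, (k, n) in tasks.items():
    print(f"{name:7s} sensitivity = {proportion(k, n):.0%}")   # 85%, 85%, 63%

# Specificity of the visual and tactile tasks was reported as 100%,
# i.e., no control exceeded the mean + 2.5 SD cutoff.
```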

  14. Magnetic resonance imaging of anterior temporal lobe cysts in children: discriminating special imaging features in a particular group of diseases

    International Nuclear Information System (INIS)

    Hoffmann Nunes, Renato; Torres Pacheco, Felipe; Rocha, Antonio Jose da

    2014-01-01

    We hypothesized that disorders with anterior temporal lobe (ATL) cysts might exhibit common peculiarities and distinguishable imaging features that could be useful for diagnosis. We reviewed a series of patients for neuroimaging contributions to specific diagnoses. A literature search was conducted, and institutional imaging files were reviewed to identify MR examinations with ATL cysts in children. Patients were divided according to head size, calcifications, white matter and cortical abnormalities. Unsupervised hierarchical clustering of patients on the basis of their MR and CT items was performed. We identified 23 patients in our database in whom MR revealed ATL cysts. Our series included five patients with congenital muscular dystrophy (05/23 = 21.7 %), six with megalencephalic leukoencephalopathy with subcortical cysts (06/23 = 26.1 %), three with non-megalencephalic leukoencephalopathy with subcortical cysts (03/23 = 13.1 %), seven with congenital cytomegalovirus disease (07/23 = 30.4 %) and two with Aicardi-Goutieres syndrome (02/23 = 8.7 %). After analysis, 11 clusters resulted in the highest discriminative indices. Thereafter, patients' clusters were linked to their underlying diseases. The features that best discriminated between clusters included brainstem abnormalities, cerebral calcifications and some peculiar grey and white matter abnormalities. A flow chart was drafted to guide the radiologist in these diagnoses. The authors encourage the combined interpretation of these features in the herein proposed approach that confidently predicted the final diagnosis in this particular group of disorders associated with ATL cysts. (orig.)
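    The unsupervised hierarchical clustering above groups patients on categorical imaging items (head size, calcifications, white matter and cortical abnormalities). The authors' exact feature coding, distance metric and linkage are not given in this record; the sketch below only illustrates the generic approach on an invented binary feature matrix:

```python
# Generic sketch of unsupervised hierarchical clustering of patients on
# binary MR/CT items. Feature names and values are invented for illustration;
# the paper's actual items, distance metric and linkage are not specified here.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = patients, columns = imaging items (1 = present, 0 = absent)
items = ["macrocephaly", "calcifications", "diffuse_WM_abnormality", "brainstem_abnormality"]
X = np.array([
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
])

D = pdist(X, metric="jaccard")          # a distance suited to binary features
Z = linkage(D, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster assignments:", labels)
```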

  15. Magnetic resonance imaging of anterior temporal lobe cysts in children: discriminating special imaging features in a particular group of diseases

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmann Nunes, Renato; Torres Pacheco, Felipe; Rocha, Antonio Jose da [Fleury Medicina e Saude, Division of Neuroradiology, Sao Paulo (Brazil); Servico de Diagnostico por Imagem, Division of Neuroradiology, Santa Casa de Misericordia de Sao Paulo, Sao Paulo (Brazil)

    2014-07-15

    We hypothesized that disorders with anterior temporal lobe (ATL) cysts might exhibit common peculiarities and distinguishable imaging features that could be useful for diagnosis. We reviewed a series of patients for neuroimaging contributions to specific diagnoses. A literature search was conducted, and institutional imaging files were reviewed to identify MR examinations with ATL cysts in children. Patients were divided according to head size, calcifications, white matter and cortical abnormalities. Unsupervised hierarchical clustering of patients on the basis of their MR and CT items was performed. We identified 23 patients in our database in whom MR revealed ATL cysts. Our series included five patients with congenital muscular dystrophy (05/23 = 21.7 %), six with megalencephalic leukoencephalopathy with subcortical cysts (06/23 = 26.1 %), three with non-megalencephalic leukoencephalopathy with subcortical cysts (03/23 = 13.1 %), seven with congenital cytomegalovirus disease (07/23 = 30.4 %) and two with Aicardi-Goutieres syndrome (02/23 = 8.7 %). After analysis, 11 clusters resulted in the highest discriminative indices. Thereafter, patients' clusters were linked to their underlying diseases. The features that best discriminated between clusters included brainstem abnormalities, cerebral calcifications and some peculiar grey and white matter abnormalities. A flow chart was drafted to guide the radiologist in these diagnoses. The authors encourage the combined interpretation of these features in the herein proposed approach that confidently predicted the final diagnosis in this particular group of disorders associated with ATL cysts. (orig.)

  16. Emergence of an abstract categorical code enabling the discrimination of temporally structured tactile stimuli.

    Science.gov (United States)

    Rossi-Pool, Román; Salinas, Emilio; Zainos, Antonio; Alvarez, Manuel; Vergara, José; Parga, Néstor; Romo, Ranulfo

    2016-12-06

    The problem of neural coding in perceptual decision making revolves around two fundamental questions: (i) How are the neural representations of sensory stimuli related to perception, and (ii) what attributes of these neural responses are relevant for downstream networks, and how do they influence decision making? We studied these two questions by recording neurons in primary somatosensory (S1) and dorsal premotor (DPC) cortex while trained monkeys reported whether the temporal pattern structure of two sequential vibrotactile stimuli (of equal mean frequency) was the same or different. We found that S1 neurons coded the temporal patterns in a literal way and only during the stimulation periods and did not reflect the monkeys' decisions. In contrast, DPC neurons coded the stimulus patterns as broader categories and signaled them during the working memory, comparison, and decision periods. These results show that the initial sensory representation is transformed into an intermediate, more abstract categorical code that combines past and present information to ultimately generate a perceptually informed choice.

  17. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  18. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    This work investigated complex-tone pitch representations in normal-hearing (NH) and hearing-impaired listeners, and the effect of musical training on pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones [...] discrimination to that of NH listeners. In the second part of this work, behavioral and objective measures of pitch discrimination were carried out in musicians and non-musicians. Musicians showed an increased pitch-discrimination performance relative to non-musicians for both resolved and unresolved harmonics [...]. Even when accounting for the individual pitch-discrimination abilities, the musically trained listeners still allocated lower processing effort than the non-musicians to perform the task at the same performance level. This finding suggests an enhanced pitch representation along the auditory system in musicians, possibly as a result [...]

  19. High baseline activity in inferior temporal cortex improves neural and behavioral discriminability during visual categorization

    Directory of Open Access Journals (Sweden)

    Nazli Emadi

    2014-11-01

    Full Text Available Spontaneous firing is a ubiquitous property of neural activity in the brain. Recent literature suggests that this baseline activity plays a key role in perception. However, it is not known how the baseline activity contributes to neural coding and behavior. Here, by recording from single neurons in the inferior temporal cortex of monkeys performing a visual categorization task, we thoroughly explored the relationship between baseline activity, the evoked response, and behavior. Specifically, we found that a low-frequency (< 8 Hz) oscillation in the spike train, prior to and phase-locked to the stimulus onset, was correlated with increased gamma power and neuronal baseline activity. This enhancement of the baseline activity was then followed by an increase in the neural selectivity and the response reliability and eventually a higher behavioral performance.

  20. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  1. Comparison of capacity for diagnosis and visuality of auditory ossicles at different scanning angles in the computed tomography of temporal bone

    International Nuclear Information System (INIS)

    Ogura, Akio; Nakayama, Yoshiki

    1992-01-01

    Computed tomographic (CT) scanning has made significant contributions to the diagnosis and evaluation of temporal bone lesions through thin-section, high-resolution techniques. However, these techniques involve greater radiation exposure to the lens of patients. A means of reducing this exposure was therefore sought by comparing different scanning angles, +15 degrees and -10 degrees to Reid's base line. The purposes of this study were to measure the radiation exposure to the lens for the two tomographic planes and to compare their ability to visualize the auditory ossicles and labyrinthine structures. Visual evaluation of the tomographic images of the auditory ossicles was made by six radiologists in a blinded manner using a four-rank scale. The statistical significance of the intergroup difference in the visualization of the tomographic planes was assessed at a significance level of 0.01. Thermoluminescent dosimeter chips were placed on the cornea of a tissue-equivalent skull phantom to evaluate the radiation exposure for the two tomographic planes. As a result, the tomographic plane at an angle of -10 degrees to Reid's base line allowed better visualization than the other plane for the malleus, incus, facial nerve canal, and tuba auditiva (p<0.01). Scanning at an angle of -10 degrees to Reid's base line reduced the radiation exposure to approximately one-fiftieth (1/50) of that with scans at the other angle. (author)

  2. The role of auditory cortices in the retrieval of single-trial auditory-visual object memories.

    Science.gov (United States)

    Matusz, Pawel J; Thelen, Antonia; Amrein, Sarah; Geiser, Eveline; Anken, Jacques; Murray, Micah M

    2015-03-01

    Single-trial encounters with multisensory stimuli affect both memory performance and early-latency brain responses to visual stimuli. Whether and how auditory cortices support memory processes based on single-trial multisensory learning is unknown and may differ qualitatively and quantitatively from comparable processes within visual cortices due to purported differences in memory capacities across the senses. We recorded event-related potentials (ERPs) as healthy adults (n = 18) performed a continuous recognition task in the auditory modality, discriminating initial (new) from repeated (old) sounds of environmental objects. Initial presentations were either unisensory or multisensory; the latter entailed synchronous presentation of a semantically congruent or a meaningless image. Repeated presentations were exclusively auditory, thus differing only according to the context in which the sound was initially encountered. Discrimination abilities (indexed by d') were increased for repeated sounds that were initially encountered with a semantically congruent image versus sounds initially encountered with either a meaningless or no image. Analyses of ERPs within an electrical neuroimaging framework revealed that early stages of auditory processing of repeated sounds were affected by prior single-trial multisensory contexts. These effects followed from significantly reduced activity within a distributed network, including the right superior temporal cortex, suggesting an inverse relationship between brain activity and behavioural outcome on this task. The present findings demonstrate how auditory cortices contribute to long-term effects of multisensory experiences on auditory object discrimination. We propose a new framework for the efficacy of multisensory processes to impact both current multisensory stimulus processing and unisensory discrimination abilities later in time. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
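    Discrimination in the continuous recognition task above is indexed by d', i.e., the difference between the z-transformed hit rate (correctly calling a repeated sound 'old') and false-alarm rate (calling a new sound 'old'). A minimal sketch of that computation follows; the log-linear correction for extreme proportions and the example counts are assumptions, not values from the study:

```python
# Minimal d-prime computation for an old/new recognition task:
# d' = z(hit rate) - z(false-alarm rate).
# The log-linear correction (adding 0.5 to each cell) guards against rates of
# exactly 0 or 1; the paper's exact correction, if any, is not stated in the abstract.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one condition (e.g., sounds first heard with a congruent image)
print(round(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38), 2))
```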

  3. Predators or prey? Spatio-temporal discrimination of human-derived risk by brown bears.

    Science.gov (United States)

    Ordiz, Andrés; Støen, Ole-Gunnar; Delibes, Miguel; Swenson, Jon E

    2011-05-01

    Prey usually adjust anti-predator behavior to subtle variations in perceived risk. However, it is not clear whether adult large carnivores that are virtually free of natural predation adjust their behavior to subtle variations in human-derived risk, even when living in human-dominated landscapes. As a model, we studied resting-site selection by a large carnivore, the brown bear (Ursus arctos), under different spatial and temporal levels of human activity. We quantified horizontal and canopy cover at 440 bear beds and 439 random sites at different distances from human settlements, seasons, and times of the day. We hypothesized that beds would be more concealed than random sites and that beds would be more concealed in relation to human-derived risk. Although human densities in Scandinavia are the lowest within bear ranges in Western Europe, we found an effect of human activity; bears chose beds with higher horizontal and canopy cover during the day (0700-1900 hours), especially when resting closer to human settlements, than at night (2200-0600 hours). In summer/fall (the berry season), with more intensive and dispersed human activity, including hunting, bears rested further from human settlements during the day than in spring (pre-berry season). Additionally, day beds in the summer/fall were the most concealed. Large carnivores often avoid humans at a landscape scale, but total avoidance in human-dominated areas is not possible. Apparently, bears adjust their behavior to avoid human encounters, which resembles the way prey avoid their predators. Bears responded to fine-scale variations in human-derived risk, both on a seasonal and a daily basis.

  4. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...... auditory display creation; data handling for auditory display systems; applications of auditory display....

  5. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    OpenAIRE

    Maria Kon

    2015-01-01

    Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the ...

  6. [Information Processing in the Auditory Ventral Stream].

    Science.gov (United States)

    Fukushima, Makoto; Ojima, Hisayuki

    2016-11-01

    The auditory cortex in humans comprises multiple auditory fields organized hierarchically, similar to that in non-human primates. The ventral auditory stream of the macaque consists of several subdivisions on the supratemporal plane (STP) and the superior temporal gyrus (STG). There are two main axes (caudorostral and mediolateral) for processing auditory information in the STP and STG. Here, we review the neural basis of the integration of spectral and temporal auditory information along the two axes of the ventral auditory stream in the macaque.

  7. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine whether particular sound types are more favorable for memory performance. Experiment 1 suggests that memory performance with vocalization sound types (particularly monkey calls) is significantly better than with non-vocalization sound types, and that male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.

  8. Effect of 5-HT2A and 5-HT2C receptors on temporal discrimination by mice.

    Science.gov (United States)

    Halberstadt, Adam L; Sindhunata, Ivan S; Scheffers, Kees; Flynn, Aaron D; Sharp, Richard F; Geyer, Mark A; Young, Jared W

    2016-08-01

    Timing deficits are observed in patients with schizophrenia. Serotonergic hallucinogens can also alter the subjective experience of time. Characterizing the mechanism through which the serotonergic system regulates timing will increase our understanding of the linkage between serotonin (5-HT) and schizophrenia, and will provide insight into the mechanism of action of hallucinogens. We investigated whether interval timing in mice is altered by hallucinogens and other 5-HT2 receptor ligands. C57BL/6J mice were trained to perform a discrete-trials temporal discrimination task. In the discrete-trials task, mice were presented with two levers after a variable interval. Responding on lever A was reinforced if the interval was shorter than 6.5 s, and responding on lever B was reinforced if the interval was longer than 6.5 s. A 2-parameter logistic function was fitted to the proportional choice for lever B (%B responding), yielding estimates of the indifference point (T50) and the Weber fraction (a measure of timing precision). The 5-HT2A antagonist M100907 increased T50, whereas the 5-HT2C antagonist SB-242,084 reduced T50. The results indicate that 5-HT2A and 5-HT2C receptors have countervailing effects on the speed of the internal pacemaker. The hallucinogen 2,5-dimethoxy-4-iodoamphetamine (DOI; 3 mg/kg IP), a 5-HT2 agonist, flattened the response curve at long stimulus intervals and shifted it to the right, causing both T50 and the Weber fraction to increase. The effect of DOI was antagonized by M100907 (0.03 mg/kg SC) but was unaffected by SB-242,084 (0.1 mg/kg SC). Similar to DOI, the selective 5-HT2A agonist 25CN-NBOH (6 mg/kg SC) reduced %B responding at long stimulus intervals, and increased T50 and the Weber fraction. These results demonstrate that hallucinogens alter temporal perception in mice, effects that are mediated by the 5-HT2A receptor. It appears that 5-HT regulates temporal perception, suggesting that altered serotonergic signaling may contribute to the timing deficits observed in schizophrenia and other psychiatric disorders.
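
    To illustrate the fitting step described above, the sketch below fits a two-parameter logistic to hypothetical %B choice data and derives T50 and a Weber fraction. The data points, the parameterization, and the convention of dividing half the 25-75% spread by T50 are assumptions for illustration, not the authors' exact procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, t50, s):
            # Proportional choice of the "long" lever B as a function of interval t
            return 1.0 / (1.0 + np.exp(-(t - t50) / s))

        # Invented stimulus durations (s) and observed proportions of B responses
        durations = np.array([2.0, 3.2, 4.4, 5.6, 6.8, 8.0, 9.2, 10.4])
        p_b = np.array([0.03, 0.08, 0.22, 0.45, 0.68, 0.86, 0.94, 0.97])

        (t50, s), _ = curve_fit(logistic, durations, p_b, p0=[6.5, 1.0])

        # Difference limen from the 25% and 75% points of the fitted curve,
        # expressed relative to the indifference point T50 as a Weber fraction.
        difference_limen = s * np.log(3.0)      # (t75 - t25) / 2 for a logistic
        weber_fraction = difference_limen / t50
        print(round(t50, 2), round(weber_fraction, 3))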

  9. Revisiting the "enigma" of musicians with dyslexia: Auditory sequencing and speech abilities.

    Science.gov (United States)

    Zuk, Jennifer; Bishop-Liebler, Paula; Ozernov-Palchik, Ola; Moore, Emma; Overy, Katie; Welch, Graham; Gaab, Nadine

    2017-04-01

    Previous research has suggested a link between musical training and auditory processing skills. Musicians have shown enhanced perception of auditory features critical to both music and speech, suggesting that this link extends beyond basic auditory processing. It remains unclear to what extent musicians who also have dyslexia show these specialized abilities, considering often-observed persistent deficits that coincide with reading impairments. The present study evaluated auditory sequencing and speech discrimination in 52 adults comprised of musicians with dyslexia, nonmusicians with dyslexia, and typical musicians. An auditory sequencing task measuring perceptual acuity for tone sequences of increasing length was administered. Furthermore, subjects were asked to discriminate synthesized syllable continua varying in acoustic components of speech necessary for intraphonemic discrimination, which included spectral (formant frequency) and temporal (voice onset time [VOT] and amplitude envelope) features. Results indicate that musicians with dyslexia did not significantly differ from typical musicians and performed better than nonmusicians with dyslexia for auditory sequencing as well as discrimination of spectral and VOT cues within syllable continua. However, typical musicians demonstrated superior performance relative to both groups with dyslexia for discrimination of syllables varying in amplitude information. These findings suggest a distinct profile of speech processing abilities in musicians with dyslexia, with specific weaknesses in discerning amplitude cues within speech. Because these difficulties seem to remain persistent in adults with dyslexia despite musical training, this study only partly supports the potential for musical training to enhance the auditory processing skills known to be crucial for literacy in individuals with dyslexia. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Widespread auditory deficits in tune deafness.

    Science.gov (United States)

    Jones, Jennifer L; Zalewski, Christopher; Brewer, Carmen; Lucker, Jay; Drayna, Dennis

    2009-02-01

    than the normal control group. Approximately one-third of our participants with tune deafness displayed evidence of attention deficit with hyperactivity disorder on the Test of Variables of Attention. Test of Variables of Attention scores were significantly correlated with gap-detection scores, but not significantly correlated with any of the other experimental measures, including the DTT, DLF, and auditory pattern discrimination tests. Short- and long-term memory was not significantly related to any of the experimental measures. Individuals with tune deafness identified by the DTT have poor performance on many tests of auditory function. These include pure-tone frequency discrimination, pitch and duration pattern discrimination, and temporal resolution. Overall, reduction in performance does not seem to derive from deficits in memory or attention. However, because of the prevalence of attention deficit with hyperactivity disorder in those with tune deafness, this variable should be considered as a potentially confounding factor in future studies of tune deafness and its characteristics. Pure-tone frequency discrimination varied widely in individuals with tune deafness, and the high degree of intertrial variability suggests that frequency discrimination may be unstable in tune-deaf individuals.

  11. Perceiving temporal regularity in music: The role of auditory event-related potentials (ERPs) in probing beat perception

    NARCIS (Netherlands)

    Honing, H.; Bouwer, F.L.; Háden, G.P.; Merchant, H.; de Lafuente, V.

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). In addition to a review of the recent literature on the perception of temporal regularity in

  12. Laminar differences in response to simple and spectro-temporally complex sounds in the primary auditory cortex of ketamine-anesthetized gerbils.

    Directory of Open Access Journals (Sweden)

    Markus K Schaefer

    Full Text Available In mammals, acoustic communication plays an important role during social behaviors. Despite their ethological relevance, the mechanisms by which the auditory cortex represents different communication call properties remain elusive. Recent studies have pointed out that communication-sound encoding could be based on discharge patterns of neuronal populations. Following this idea, we investigated whether the activity of local neuronal networks, such as those occurring within individual cortical columns, is sufficient for distinguishing between sounds that differed in their spectro-temporal properties. To accomplish this aim, we analyzed multi-unit activity (MUA) as well as local field potential (LFP) and current source density (CSD) waveforms elicited by simple pure tones and complex communication calls at the single-layer and columnar level from the primary auditory cortex of anesthetized Mongolian gerbils. Multi-dimensional scaling analysis was used to evaluate the degree of "call-specificity" in the evoked activity. The results showed that whole laminar profiles segregated 1.8-2.6 times better across calls than single-layer activity. Also, laminar LFP and CSD profiles segregated better than MUA profiles. Significant differences between CSD profiles evoked by different sounds were more pronounced at mid and late latencies in the granular and infragranular layers, and these differences were based on the absence and/or presence of current sinks and on sink timing. The stimulus-specific activity patterns observed within cortical columns suggest that the joint activity of local cortical populations (as local as single columns) could indeed be important for encoding sounds that differ in their acoustic attributes.
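
    The multi-dimensional scaling step can be illustrated with a classical (Torgerson) MDS embedding of a pairwise dissimilarity matrix. The sketch below uses synthetic response profiles and plain NumPy; it is a generic implementation of classical MDS, not the specific analysis pipeline used in the study.

        import numpy as np

        def classical_mds(d, n_components=2):
            # Classical (Torgerson) MDS: double-centre the squared dissimilarities
            # and embed via the leading eigenvectors.
            n = d.shape[0]
            j = np.eye(n) - np.ones((n, n)) / n
            b = -0.5 * j @ (d ** 2) @ j
            eigvals, eigvecs = np.linalg.eigh(b)
            order = np.argsort(eigvals)[::-1][:n_components]
            return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

        # Synthetic "laminar response profiles" for 6 calls, 40 samples each
        rng = np.random.default_rng(0)
        profiles = rng.normal(size=(6, 40))
        dissim = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
        coords = classical_mds(dissim)
        print(coords.shape)   # (6, 2): one 2-D point per call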

  13. The Context-Dependency of the Experience of Auditory Succession and Prospects for Embodying Philosophical Models of Temporal Experience

    Directory of Open Access Journals (Sweden)

    Maria Kon

    2015-05-01

    Full Text Available Recent philosophical work on temporal experience offers generic models that are often assumed to apply to all sensory modalities. I show that the models serve as broad frameworks in which different aspects of cognitive science can be slotted and, thus, are beneficial to furthering research programs in embodied music cognition. Here I discuss a particular feature of temporal experience that plays a key role in such philosophical work: a distinction between the experience of succession and the mere succession of experiences. I question the presupposition that there is such an evident, clear distinction and suggest that, instead, how the distinction is drawn is context-dependent. After suggesting a way to modify the philosophical models of temporal experience to accommodate this context-dependency, I illustrate that these models can fruitfully incorporate features of research projects in embodied musical cognition. To do so I supplement a modified retentionalist model with aspects of recent work that links bodily movement with musical perception (Godøy, 2006; 2010a; Jensenius, Wanderley, Godøy, and Leman, 2010). The resulting model is shown to facilitate novel hypotheses, refine the notion of context-dependency and point towards means of extending the philosophical model and an existent research program.

  14. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  15. Discriminative spatial-frequency-temporal feature extraction and classification of motor imagery EEG: A sparse regression and Weighted Naïve Bayesian Classifier-based approach.

    Science.gov (United States)

    Miao, Minmin; Zeng, Hong; Wang, Aimin; Zhao, Changsen; Liu, Feixiang

    2017-02-15

    Common spatial pattern (CSP) is the most widely used method in motor imagery-based brain-computer interface (BCI) systems. In the conventional CSP algorithm, pairs of eigenvectors corresponding to both extreme eigenvalues are selected to construct the optimal spatial filter. In addition, an appropriate selection of subject-specific time segments and frequency bands plays an important role in its successful application. This study proposes to optimize spatial-frequency-temporal patterns for discriminative feature extraction. Spatial optimization is implemented by channel selection and by finding discriminative spatial filters adaptively on each time-frequency segment. A novel Discernibility of Feature Sets (DFS) criterion is designed for spatial filter optimization. In addition, discriminative features located in multiple time-frequency segments are selected automatically by the proposed sparse time-frequency segment common spatial pattern (STFSCSP) method, which exploits sparse regression for the selection of significant features. Finally, a weight determined by the sparse coefficient is assigned to each selected CSP feature, and we propose a Weighted Naïve Bayesian Classifier (WNBC) for classification. Experimental results on two public EEG datasets demonstrate that optimizing spatial-frequency-temporal patterns in a data-driven manner for discriminative feature extraction greatly improves classification performance. The proposed method gives significantly better classification accuracies than several competing methods in the literature and is a promising candidate for future BCI systems. Copyright © 2016 Elsevier B.V. All rights reserved.
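
    For readers unfamiliar with the conventional CSP baseline that the proposed STFSCSP method extends, the sketch below computes CSP spatial filters from two classes of band-passed trials via a generalized eigendecomposition and extracts the usual log-variance features. The data dimensions, channel counts and feature-extraction details are illustrative assumptions, not the authors' implementation.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            # Conventional CSP: average the channel covariances per class, solve
            # the generalized eigenproblem Ca w = lambda (Ca + Cb) w, and keep the
            # eigenvectors from both ends of the eigenvalue spectrum.
            ca = np.mean([np.cov(t) for t in trials_a], axis=0)
            cb = np.mean([np.cov(t) for t in trials_b], axis=0)
            eigvals, eigvecs = eigh(ca, ca + cb)
            order = np.argsort(eigvals)
            picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
            return eigvecs[:, picks]

        # Hypothetical band-passed data: 20 trials per class, 22 channels, 500 samples
        rng = np.random.default_rng(1)
        class_a = rng.normal(size=(20, 22, 500))
        class_b = rng.normal(size=(20, 22, 500))
        w = csp_filters(class_a, class_b)
        # Log-variance of the spatially filtered trials -- the usual CSP features
        features_a = np.log(np.var(np.einsum('cf,ncs->nfs', w, class_a), axis=-1))
        print(w.shape, features_a.shape)   # (22, 6) (20, 6)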

  16. Perceiving temporal regularity in music: the role of auditory event-related potentials (ERPs) in probing beat perception.

    Science.gov (United States)

    Honing, Henkjan; Bouwer, Fleur L; Háden, Gábor P

    2014-01-01

    The aim of this chapter is to give an overview of how the perception of a regular beat in music can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). In addition to a review of the recent literature on the perception of temporal regularity in music, we discuss to what extent ERPs, and especially the component called mismatch negativity (MMN), can be instrumental in probing beat perception. We conclude with a discussion of the pitfalls and prospects of using ERPs to probe the perception of a regular beat, in which we present possible constraints on stimulus design and discuss future perspectives.

  17. Auditory Discrimination of Lexical Stress Patterns in Hearing-Impaired Infants with Cochlear Implants Compared with Normal Hearing: Influence of Acoustic Cues and Listening Experience to the Ambient Language.

    Science.gov (United States)

    Segal, Osnat; Houston, Derek; Kishon-Rabin, Liat

    2016-01-01

    To assess discrimination of lexical stress patterns in infants with cochlear implants (CI) compared with infants with normal hearing (NH). While criteria for cochlear implantation have expanded to infants as young as 6 months, little is known regarding infants' processing of suprasegmental-prosodic cues, which are known to be important for the first stages of language acquisition. Lexical stress is an example of such a cue, which, in hearing infants, has been shown to assist in segmenting words from fluent speech and in distinguishing between words that differ only in their stress pattern. To date, however, there are no data on the ability of infants with CIs to perceive lexical stress. Such information will provide insight into the speech characteristics that are available to these infants in their first steps of language acquisition. This is of particular interest given the known limitations that the CI device has in transmitting speech information that is mediated by changes in fundamental frequency. Two groups of infants participated in this study. The first group included 20 profoundly hearing-impaired infants with CI, 12 to 33 months old, implanted under the age of 2.5 years (median age of implantation = 14.5 months), with 1 to 6 months of CI use (mean = 2.7 months) and no known additional problems. The second group of infants included 48 NH infants, 11 to 14 months old, with normal development and no known risk factors for developmental delays. Infants were tested on their ability to discriminate between nonsense words that differed in their stress pattern only (/dóti/ versus /dotí/ and /dotí/ versus /dóti/) using the visual habituation procedure. The measure for discrimination was the change in looking time between the last habituation trial (e.g., /dóti/) and the novel trial (e.g., /dotí/). (1) Infants with CI showed discrimination between lexical stress patterns with only limited auditory experience with their implant device, (2) discrimination of stress

  18. Effects of harmonic roving on pitch discrimination

    DEFF Research Database (Denmark)

    Santurette, Sébastien; de Kérangal, Mathilde le Gal; Joshi, Suyash Narendra

    2015-01-01

    Performance in pitch discrimination tasks is limited by variability intrinsic to listeners which may arise from peripheral auditory coding limitations or more central noise sources. Perceptual limitations may be characterized by measuring an observer's change in performance when introducing...... external noise in the physical stimulus (Lu and Dosher, 2008). The present study used this approach to attempt to quantify the "internal noise" involved in pitch coding of harmonic complex tones by estimating the amount of harmonic roving required to impair pitch discrimination performance. It remains...... a matter of debate whether pitch perception of natural complex sounds mostly relies on either spectral excitation-based information or temporal periodicity information. Comparing the way internal noise affects the internal representations of such information to how it affects pitch discrimination...

  19. The left superior temporal gyrus is a shared substrate for auditory short-term memory and speech comprehension: evidence from 210 patients with stroke

    OpenAIRE

    Leff, Alexander P.; Schofield, Thomas M.; Crinion, Jennifer T.; Seghier, Mohamed L.; Grogan, Alice; Green, David W.; Price, Cathy J.

    2009-01-01

    Competing theories of short-term memory function make specific predictions about the functional anatomy of auditory short-term memory and its role in language comprehension. We analysed high-resolution structural magnetic resonance images from 210 stroke patients and employed a novel voxel based analysis to test the relationship between auditory short-term memory and speech comprehension. Using digit span as an index of auditory short-term memory capacity we found that the structural integrit...

  20. Tropical land use land cover mapping in Pará (Brazil) using discriminative Markov random fields and multi-temporal TerraSAR-X data

    Science.gov (United States)

    Hagensieker, Ron; Roscher, Ribana; Rosentreter, Johannes; Jakimow, Benjamin; Waske, Björn

    2017-12-01

    Remote sensing satellite data offer the unique possibility to map land use land cover transformations by providing spatially explicit information. However, detection of short-term processes and land use patterns of high spatial-temporal variability is a challenging task. We present a novel framework using multi-temporal TerraSAR-X data and machine learning techniques, namely discriminative Markov random fields with spatio-temporal priors, and import vector machines, in order to advance the mapping of land cover characterized by short-term changes. Our study region covers a current deforestation frontier in the Brazilian state Pará with land cover dominated by primary forests, different types of pasture land and secondary vegetation, and land use dominated by short-term processes such as slash-and-burn activities. The data set comprises multi-temporal TerraSAR-X imagery acquired over the course of the 2014 dry season, as well as optical data (RapidEye, Landsat) for reference. Results show that land use land cover is reliably mapped, resulting in spatially adjusted overall accuracies of up to 79% in a five class setting, yet limitations for the differentiation of different pasture types remain. The proposed method is applicable on multi-temporal data sets, and constitutes a feasible approach to map land use land cover in regions that are affected by high-frequent temporal changes.

  1. Children with speech sound disorder: Comparing a non-linguistic auditory approach with a phonological intervention approach to improve phonological skills

    Directory of Open Access Journals (Sweden)

    Cristina eMurphy

    2015-02-01

    Full Text Available This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder. A total of 17 children, aged 7-12 years, with speech sound disorder were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g., backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g., phonological organisation and awareness). Both interventions consisted of twelve 45-minute sessions delivered twice per week, for a total of nine hours. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement on phonological skills was observed in any of the groups. Inter-group analysis demonstrated significant differences between the improvement following training for both groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the most effective intervention approach, it was not sufficient to promote the enhancement of phonological skills.

  2. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  3. Achilles’ Ear? Inferior Human Short-Term and Recognition Memory in the Auditory Modality

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects’ retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1–4 s). However, at longer retention intervals (8–32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required, however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices. PMID:24587119

  4. Auditory Neuropathy

    Science.gov (United States)

    ... with auditory neuropathy have greater impairment in speech perception than hearing health experts would predict based upon their degree of hearing loss on a hearing test. For example, a person with auditory neuropathy may be able to hear ...

  5. EFFECTS OF PHYSICAL REHABILITATION INTEGRATED WITH RHYTHMIC AUDITORY STIMULATION ON SPATIO-TEMPORAL AND KINEMATIC PARAMETERS OF GAIT IN PARKINSON’S DISEASE

    Directory of Open Access Journals (Sweden)

    Massimiliano Pau

    2016-08-01

    Full Text Available Movement rehabilitation by means of physical therapy represents an essential tool in the management of gait disturbances induced by Parkinson's disease (PD). In this context, the use of Rhythmic Auditory Stimulation (RAS) has been proven useful in improving several spatio-temporal parameters, but little information is available on its effect on gait patterns from a kinematic viewpoint. In this study we used three-dimensional gait analysis based on optoelectronic stereophotogrammetry to investigate the effects of 5 weeks of intensive rehabilitation, which included gait training integrated with RAS, on 26 individuals affected by PD (age 70.4±11.1, Hoehn & Yahr 1-3). Gait kinematics was assessed before and at the end of the rehabilitation period and after a three-month follow-up, using concise measures (Gait Profile Score and Gait Variable Score, GPS and GVS, respectively) which are able to describe the deviation from a physiologic gait pattern. The results confirm the effectiveness of gait training assisted by RAS in increasing speed and stride length, in regularizing cadence and in correctly reweighting swing/stance phase duration. Moreover, an overall improvement of gait quality was observed, as demonstrated by the significant reduction of the GPS value, which was driven mainly by significant decreases in the GVS score associated with the hip flexion-extension movement. Future research should focus on investigating kinematic details to better understand the mechanisms underlying gait disturbances in people with PD and the effects of RAS, with the aim of finding new or improving current rehabilitative treatments.
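
    The GPS and GVS summary measures are commonly defined as root-mean-square deviations of a subject's kinematic curves from a reference gait pattern. A minimal sketch following that common formulation is given below; the synthetic curves stand in for real gait-analysis output and the formulation is an assumption, not a description of the software used in the study.

        import numpy as np

        def gait_variable_score(subject_curve, reference_curve):
            # GVS: RMS difference between a subject's kinematic curve and the
            # reference mean over the normalized gait cycle.
            return np.sqrt(np.mean((subject_curve - reference_curve) ** 2))

        def gait_profile_score(subject_curves, reference_curves):
            # GPS: RMS of the GVS values across the selected gait variables.
            gvs = [gait_variable_score(s, r)
                   for s, r in zip(subject_curves, reference_curves)]
            return float(np.sqrt(np.mean(np.square(gvs)))), gvs

        # Synthetic curves: 9 kinematic variables x 101 points of the gait cycle
        rng = np.random.default_rng(2)
        reference = [20.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 101)) for _ in range(9)]
        patient = [r + rng.normal(0.0, 5.0, size=101) for r in reference]
        gps, gvs = gait_profile_score(patient, reference)
        print(round(gps, 2))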

  6. Real-time fMRI neurofeedback to down-regulate superior temporal gyrus activity in patients with schizophrenia and auditory hallucinations: a proof-of-concept study.

    Science.gov (United States)

    Orlov, Natasza D; Giampietro, Vincent; O'Daly, Owen; Lam, Sheut-Ling; Barker, Gareth J; Rubia, Katya; McGuire, Philip; Shergill, Sukhwinder S; Allen, Paul

    2018-02-12

    Neurocognitive models and previous neuroimaging work posit that auditory verbal hallucinations (AVH) arise due to increased activity in speech-sensitive regions of the left posterior superior temporal gyrus (STG). Here, we examined whether patients with schizophrenia (SCZ) and AVH could be trained to down-regulate STG activity using real-time functional magnetic resonance imaging neurofeedback (rtfMRI-NF). We also examined the effects of rtfMRI-NF training on functional connectivity between the STG and other speech and language regions. Twelve patients with SCZ and treatment-refractory AVH were recruited to participate in the study and were trained to down-regulate STG activity using rtfMRI-NF, over four MRI scanner visits during a 2-week training period. STG activity and functional connectivity were compared pre- and post-training. Patients successfully learnt to down-regulate activity in their left STG over the rtfMRI-NF training. Post-training, patients showed increased functional connectivity between the left STG, the left inferior prefrontal gyrus (IFG) and the inferior parietal gyrus. The post-training increase in functional connectivity between the left STG and IFG was associated with a reduction in AVH symptoms over the training period. The speech-sensitive region of the left STG is a suitable target region for rtfMRI-NF in patients with SCZ and treatment-refractory AVH. Successful down-regulation of left STG activity can increase functional connectivity between speech motor and perception regions. These findings suggest that patients with AVH have the ability to alter activity and connectivity in speech and language regions, and raise the possibility that rtfMRI-NF training could present a novel therapeutic intervention in SCZ.

  7. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  8. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  9. Prenatal-choline supplementation differentially modulates timing of auditory and visual stimuli in aged rats.

    Science.gov (United States)

    Cheng, Ruey-Kuang; Scott, Allison C; Penney, Trevor B; Williams, Christina L; Meck, Warren H

    2008-10-27

    Choline supplementation of the maternal diet has a long-term facilitative effect on the interval-timing ability and temporal memory of the offspring. Here, we examined whether prenatal-choline supplementation has modality-specific effects on duration discrimination in aged (20 mo) male rats. Adult offspring of rats that were given sufficient choline in their chow (CON: 1.1 g/kg) or supplemental choline added to their drinking water (SUP: 3.5 g/kg) during embryonic days (ED) 12-17 were trained and tested on a two-modality (auditory and visual signals) duration bisection procedure (2 s vs. 8 s). Intensity (high vs. low) of the auditory and visual timing signals was systematically manipulated across test sessions such that all combinations of signal intensity by modality were tested. Psychometric response functions indicated that prenatal-choline supplementation systematically increased sensitivity to auditory signals relative to visual signals, thereby magnifying the modality effect that sounds are judged to be longer than lights of equivalent duration. In addition, sensitivity to signal duration was greater in rats given prenatal-choline supplementation, particularly at low intensities of both the auditory and visual signals. Overall, these results suggest that prenatal-choline supplementation impacts interval timing by enhancing the differences in temporal integration between auditory and visual stimuli in aged subjects.

  10. Complex-Tone Pitch Discrimination in Listeners With Sensorineural Hearing Loss

    DEFF Research Database (Denmark)

    Bianchi, Federica; Fereczkowski, Michal; Zaar, Johannes

    2016-01-01

    Physiological studies have shown that noise-induced sensorineural hearing loss (SNHL) enhances the amplitude of envelope coding in auditory-nerve fibers. As pitch coding of unresolved complex tones is assumed to rely on temporal envelope coding mechanisms, this study investigated pitch-discrimination...... performance in listeners with SNHL. Pitch-discrimination thresholds were obtained for 14 normal-hearing (NH) and 10 hearing-impaired (HI) listeners for sine-phase (SP) and random-phase (RP) complex tones. When all harmonics were unresolved, the HI listeners performed, on average, worse than NH listeners...... in the RP condition but similarly to NH listeners in the SP condition. The increase in pitch-discrimination performance for the SP relative to the RP condition (F0DL ratio) was significantly larger in the HI as compared with the NH listeners. Cochlear compression and auditory-filter bandwidths were...

  11. Perception of visual apparent motion is modulated by a gap within concurrent auditory glides, even when it is illusory

    Science.gov (United States)

    Wang, Qingcui; Guo, Lu; Bao, Ming; Chen, Lihan

    2015-01-01

    Auditory and visual events often happen concurrently, and how they group together can have a strong effect on what is perceived. We investigated whether and how intra- or cross-modal temporal grouping influenced the perceptual decision of otherwise ambiguous visual apparent motion. To achieve this, we juxtaposed the auditory gap transfer illusion with the visual Ternus display. The Ternus display involves a multi-element stimulus that can induce either of two different percepts of apparent motion: ‘element motion’ (EM) or ‘group motion’ (GM). In “EM,” the endmost disk is seen as moving back and forth while the middle disk at the central position remains stationary, whereas in “GM,” both disks appear to move laterally as a whole. The gap transfer illusion refers to the illusory subjective transfer of a short gap (around 100 ms) from the long glide to the short continuous glide when the two glides intersect at the temporal middle point. In our experiments, observers were required to make a perceptual discrimination of Ternus motion in the presence of concurrent auditory glides (with or without a gap inside). Results showed that a gap within a short glide had a pronounced effect on separating visual events and led to a dominant perception of GM. The auditory configuration with the gap transfer illusion triggered the same auditory capture effect. Further investigations showed that the visual interval which coincided with the gap interval (50–230 ms) in the long glide was perceived to be shorter than the same physical interval within both the short glide and the ‘gap-transfer’ auditory configurations. The results indicated that auditory temporal perceptual grouping takes priority over cross-modal interaction in determining the final readout of the visual perception, and that the mechanism of selective attention to auditory events also plays a role. PMID:26042055

  12. Language discrimination by Java sparrows.

    Science.gov (United States)

    Watanabe, Shigeru; Yamamoto, Erico; Uozumi, Midori

    2006-07-01

    Java sparrows (Padda oryzivora) were trained to discriminate English from Chinese spoken by a bilingual speaker. They could learn discrimination and showed generalization to new sentences spoken by the same speaker and those spoken by a new speaker. Thus, the birds distinguished between English and Chinese. Although auditory cues for the discrimination were not specified, this is the first evidence that non-mammalian species can discriminate human languages.

  13. Active auditory experience in infancy promotes brain plasticity in Theta and Gamma oscillations.

    Science.gov (United States)

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Choudhury, Naseem; Realpe-Bonilla, Teresa; Roesler, Cynthia; Benasich, April A

    2017-08-01

    Language acquisition in infants is driven by on-going neural plasticity that is acutely sensitive to environmental acoustic cues. Recent studies showed that attention-based experience with non-linguistic, temporally-modulated auditory stimuli sharpens cortical responses. A previous ERP study from this laboratory showed that interactive auditory experience via behavior-based feedback (AEx), over a 6-week period from 4- to 7-months-of-age, confers a processing advantage, compared to passive auditory exposure (PEx) or maturation alone (Naïve Control, NC). Here, we provide a follow-up investigation of the underlying neural oscillatory patterns in these three groups. In AEx infants, Standard stimuli with invariant frequency (STD) elicited greater Theta-band (4-6Hz) activity in Right Auditory Cortex (RAC), as compared to NC infants, and Deviant stimuli with rapid frequency change (DEV) elicited larger responses in Left Auditory Cortex (LAC). PEx and NC counterparts showed less-mature bilateral patterns. AEx infants also displayed stronger Gamma (33-37Hz) activity in the LAC during DEV discrimination, compared to NCs, while NC and PEx groups demonstrated bilateral activity in this band, if at all. This suggests that interactive acoustic experience with non-linguistic stimuli can promote a distinct, robust and precise cortical pattern during rapid auditory processing, perhaps reflecting mechanisms that support fine-tuning of early acoustic mapping. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  14. The Influence of Auditory Information on Visual Size Adaptation.

    Science.gov (United States)

    Tonelli, Alessia; Cuturi, Luigi F; Gori, Monica

    2017-01-01

    Size perception can be influenced by several visual cues, such as spatial (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory and other sensory modalities, such as auditory, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent to the test stimulus. Secondly, we repeated the task by presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived smaller than its physical size. The new finding is that we found the auditory stimuli have an effect on the perceived size of the test stimulus after visual adaptation: low frequency sound decreased the effect of visual adaptation, making the stimulus perceived bigger compared to the visual-only condition, and contrarily, the high frequency sound had the opposite effect, making the test size perceived even smaller.

  15. A Pencil Rescues Impaired Performance on a Visual Discrimination Task in Patients with Medial Temporal Lobe Lesions

    Science.gov (United States)

    Knutson, Ashley R.; Hopkins, Ramona O.; Squire, Larry R.

    2013-01-01

    We tested proposals that medial temporal lobe (MTL) structures support not just memory but certain kinds of visual perception as well. Patients with hippocampal lesions or larger MTL lesions attempted to identify the unique object among twin pairs of objects that had a high degree of feature overlap. Patients were markedly impaired under the more…

  16. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  17. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. On the relations among temporal integration for loudness, loudness discrimination, and the form of the loudness function. (A)

    DEFF Research Database (Denmark)

    Poulsen, Torben; Buus, Søren; Florentine, M

    1996-01-01

    markedly with level and is largest at moderate levels. The effect of level increases as the duration of the short stimulus decreases and is largest for comparisons between the 2- and 250-ms tones. The loudness-level jnds are also largest at moderate levels and, contrary to traditional jnds for the level...... of two equal-duration tones, they do not appear to depend on duration. The level dependence of temporal integration and the loudness jnds are consistent with a loudness function [log(loudness) versus SPL] that is flatter at moderate levels than at low and high levels. [Work supported by NIH-NIDCD R01DC...

  19. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations.

    Science.gov (United States)

    Ćurčić-Blake, Branislava; Ford, Judith M; Hubl, Daniela; Orlov, Natasza D; Sommer, Iris E; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W; David, Olivier; Mulert, Christoph; Woodward, Todd S; Aleman, André

    2017-01-01

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of particular relevance. However, reconciliation of these theories with experimental evidence is missing. We review 50 studies investigating functional (EEG and fMRI) and anatomic (diffusion tensor imaging) connectivity in these networks, and explore the evidence supporting abnormal connectivity in these networks associated with AVH. We distinguish between functional connectivity during an actual hallucination experience (symptom capture) and functional connectivity during either the resting state or a task comparing individuals who hallucinate with those who do not (symptom association studies). Symptom capture studies clearly reveal a pattern of increased coupling among the auditory, language and striatal regions. Anatomical and symptom association functional studies suggest that the interhemispheric connectivity between posterior auditory regions may depend on the phase of illness, with increases in non-psychotic individuals and first-episode patients and decreases in chronic patients. Leading hypotheses involving concepts such as unstable memories, source monitoring, top-down attention, and hybrid models of hallucinations are supported in part by the published connectivity data, although several caveats and inconsistencies remain. Specifically, possible changes in fronto-temporal connectivity are still under debate. Precise hypotheses concerning the directionality of connections deduced from current theoretical approaches should be tested using experimental approaches that allow for discrimination of competing hypotheses. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Discrimination of rice cropping systems using multi-temporal Proba-V data in the Mekong Delta, Vietnam

    Science.gov (United States)

    Son, Nguyen-Thanh; Chen, Chi-Farn; Chen, Cheng-Ru; Chang, Ly-Yu; Chiang, Shou-Hao; Lau, Khin-Va

    2016-04-01

    Rice is considered a main source of livelihood for several billion people worldwide and plays an important role in the economy of many Asian countries. More than just a food source, rice production is regarded as one of the most important components in maintaining political stability and is also a national subject of economic policy due to domestic food consumption and grain exports. Vietnam is globally one of the largest rice producers and suppliers, with more than 80% of its exported rice produced in the Mekong River Delta. This delta is one of the three deltas in the world most vulnerable to climate change, putting rice yields at risk. Thus, spatiotemporal information on rice cropping systems is important for agricultural management to ensure food security and rice grain exports. Coarse-resolution satellite data such as MODIS have demonstrated their applicability for rice mapping at a large scale. However, the use of MODIS data for such a monitoring purpose remains challenging due to mixed-pixel issues. The Proba-V satellite launched on 7 May 2013 is a potential candidate for this monitoring purpose because the data include four spectral bands (blue, red, near-infrared and mid-infrared) with a swath of 2,285 km, a spatial resolution of 100 m and a temporal resolution of 5 days. This study aimed to investigate the applicability of multi-temporal Proba-V data for mapping rice cropping systems in the Mekong River Delta, South Vietnam. The data were processed for the 2014-2015 rice cropping seasons, following three main steps: (1) construction of smooth time-series NDVI data, (2) classification of rice cropping systems using crop phenological metrics, and (3) accuracy assessment of the mapping results. The results indicated that the smooth time-series NDVI profiles characterized the temporal spectral responses of rice fields through different growing stages of the rice plant, which was critically important for understanding rice crop
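
    Step (1) of the processing chain can be illustrated with a small sketch that builds an NDVI series from red and near-infrared reflectance, smooths it, and counts cropping cycles as upward threshold crossings. The Savitzky-Golay smoother, the threshold, and the synthetic reflectance values are assumptions for illustration; the study's own smoothing method and phenological metrics are not reproduced here.

        import numpy as np
        from scipy.signal import savgol_filter

        def smooth_ndvi(red, nir, window=7, polyorder=2):
            # NDVI from red/near-infrared reflectance, smoothed with a
            # Savitzky-Golay filter (one possible smoothing choice).
            ndvi = (nir - red) / (nir + red)
            return savgol_filter(ndvi, window_length=window, polyorder=polyorder)

        def cropping_cycles(ndvi_smooth, threshold=0.4):
            # Count upward crossings of an NDVI threshold -- a crude
            # phenology-based proxy for the number of rice crops per year.
            above = ndvi_smooth > threshold
            return int(np.sum(above[1:] & ~above[:-1]))

        # Synthetic 5-day composites over one year (73 observations, triple crop)
        rng = np.random.default_rng(3)
        t = np.linspace(0.0, 2.0 * np.pi, 73)
        red = 0.12 + 0.02 * rng.normal(size=73)
        nir = 0.18 + 0.35 * np.clip(np.sin(3.0 * t), 0.0, None) + 0.02 * rng.normal(size=73)
        print(cropping_cycles(smooth_ndvi(red, nir)))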

  1. Evaluation of auditory processing and phonemic discrimination in children with normal and disordered phonological development

    Directory of Open Access Journals (Sweden)

    Tiago Mendonça Attoni

    2010-12-01

    Full Text Available Auditory processing and phonemic discrimination are essential for communication. Type of study: Retrospective. AIM: To analyze the responses obtained in the evaluation of auditory processing and phonemic discrimination in children with normal and disordered phonological development. MATERIAL AND METHODS: An evaluation of 46 children was carried out: 22 had phonological disorders and 24 had normally developing speech. Diotic, monotic and dichotic listening tests were applied to assess auditory processing, together with a test of phonemic discrimination abilities. DESIGN: Cross-sectional, contemporary. RESULTS: The values of normally developing children were within the normal range in all auditory processing tests; these children attained maximum phonemic discrimination test scores. Children with phonological disorders performed worse on the discrimination test and also showed altered auditory processing. CONCLUSION: Children with phonological disorders present alterations in auditory processing and phonemic discrimination.

  2. The power of two-dimensional dwell-time analysis for model discrimination, temporal resolution, multichannel analysis and level detection.

    Science.gov (United States)

    Huth, Tobias; Schroeder, Indra; Hansen, Ulf-Peter

    2006-01-01

    Two-dimensional (2D) dwell-time analysis of time series of single-channel patch-clamp current was improved by employing a Hinkley detector for jump detection, introducing a genetic fit algorithm, replacing maximum likelihood by a least square criterion, averaging over a field of 9 or 25 bins in the 2D plane and normalizing per measuring time, not per events. Using simulated time series for the generation of the "theoretical" 2D histograms from assumed Markov models enabled the incorporation of the measured filter response and noise. The effects of these improvements were tested with respect to the temporal resolution, accuracy of the determination of the rate constants of the Markov model, sensitivity to noise and requirement of open time and length of the time series. The 2D fit was better than the classical hidden Markov model (HMM) fit in all tested fields. The temporal resolution of the two most efficient algorithms, the 2D fit and the subsequent HMM/beta fit, enabled the determination of rate constants 10 times faster than the corner frequency of the low-pass filter. The 2D fit was much less sensitive to noise. The requirement of computing time is a problem of the 2D fit (100 times that of the HMM fit) but can now be handled by personal computers. The studies revealed a fringe benefit of 2D analysis: it can reveal the "true" single-channel current when the filter has reduced the apparent current level by averaging over undetected fast gating.
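
    The Hinkley detector used for jump detection is closely related to a one-sided cumulative-sum (CUSUM) test. A minimal sketch of such a detector, applied to a synthetic single-channel trace, is shown below; it is a generic illustration (a full analysis would re-estimate the baseline after each detected transition and detect downward jumps as well), not the authors' implementation.

        import numpy as np

        def hinkley_detect(x, level, jump, threshold):
            # One-sided Hinkley (CUSUM-type) detector: accumulate evidence for an
            # upward jump of roughly `jump` above `level`; return the index of the
            # first detection, or None if the statistic never exceeds `threshold`.
            g = 0.0
            for k, xk in enumerate(x):
                g = max(0.0, g + xk - level - jump / 2.0)
                if g > threshold:
                    return k
            return None

        # Synthetic single-channel trace: closed level 0 pA, open level 1 pA, noise
        rng = np.random.default_rng(4)
        trace = np.concatenate([rng.normal(0.0, 0.2, 300), rng.normal(1.0, 0.2, 200)])
        print(hinkley_detect(trace, level=0.0, jump=1.0, threshold=3.0))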

  3. Bilateral duplication of the internal auditory canal

    International Nuclear Information System (INIS)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu; Koo, Ja-Won

    2007-01-01

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  4. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  5. Auditory Evoked Responses in Neonates by MEG

    International Nuclear Information System (INIS)

    Hernandez-Pavon, J. C.; Sosa, M.; Lutter, W. J.; Maier, M.; Wakai, R. T.

    2008-01-01

    Magnetoencephalography is a biomagnetic technique with outstanding potential for neurodevelopmental studies. In this work, we have used MEG to determine whether newborns can discriminate between different stimuli during the first few months of life. Five neonates were stimulated for several minutes with auditory stimuli. The results suggest that newborns are able to discriminate between different stimuli despite their early age.

  6. Auditory Pattern Memory and Group Signal Detection

    National Research Council Canada - National Science Library

    Sorkin, Robert

    1997-01-01

    .... The experiments with temporally-coded auditory patterns showed how listeners' attention is influenced by the position and the amount of information carried by different segments of the pattern...

  7. Multisensory integration of dynamic faces and voices in rhesus monkey auditory cortex.

    Science.gov (United States)

    Ghazanfar, Asif A; Maier, Joost X; Hoffman, Kari L; Logothetis, Nikos K

    2005-05-18

    In the social world, multiple sensory channels are used concurrently to facilitate communication. Among human and nonhuman primates, faces and voices are the primary means of transmitting social signals (Adolphs, 2003; Ghazanfar and Santos, 2004). Primates recognize the correspondence between species-specific facial and vocal expressions (Massaro, 1998; Ghazanfar and Logothetis, 2003; Izumi and Kojima, 2004), and these visual and auditory channels can be integrated into unified percepts to enhance detection and discrimination. Where and how such communication signals are integrated at the neural level are poorly understood. In particular, it is unclear what role "unimodal" sensory areas, such as the auditory cortex, may play. We recorded local field potential activity, the signal that best correlates with human imaging and event-related potential signals, in both the core and lateral belt regions of the auditory cortex in awake behaving rhesus monkeys while they viewed vocalizing conspecifics. We demonstrate unequivocally that the primate auditory cortex integrates facial and vocal signals through enhancement and suppression of field potentials in both the core and lateral belt regions. The majority of these multisensory responses were specific to face/voice integration, and the lateral belt region shows a greater frequency of multisensory integration than the core region. These multisensory processes in the auditory cortex likely occur via reciprocal interactions with the superior temporal sulcus.

  8. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

    Full Text Available Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and pitch pattern and duration pattern sequence tests (temporal processing); their results were compared with control children. Auditory competence was established according to the performance in auditory analysis ability. Results Similar performance between the groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed impaired auditory ability to a moderate degree. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  9. Carbon isotope composition of latex does not reflect temporal variations of photosynthetic carbon isotope discrimination in rubber trees (Hevea brasiliensis).

    Science.gov (United States)

    Kanpanon, Nicha; Kasemsap, Poonpipope; Thaler, Philippe; Kositsup, Boonthida; Gay, Frédéric; Lacote, Régis; Epron, Daniel

    2015-11-01

    Latex, the cytoplasm of laticiferous cells localized in the inner bark of rubber trees (Hevea brasiliensis Müll. Arg.), is collected by tapping the bark. Following tapping, latex flows out of the trunk and is regenerated, whereas in untapped trees, there is no natural exudation. It is still unknown whether the carbohydrates used for latex regeneration in tapped trees come from recent photosynthates or from stored carbohydrates; in the former case, the latex carbon isotope composition of tapped trees is expected to vary seasonally, whereas that of untapped trees should be more stable. Temporal variations of carbon isotope composition of trunk latex (δ(13)C-L), leaf soluble compounds (δ(13)C-S) and bulk leaf material (δ(13)C-B) collected from tapped and untapped 20-year-old trees were compared. A marked difference in δ(13)C-L was observed between tapped and untapped trees whatever the season. Trunk latex from tapped trees was more depleted (1.6‰ on average) with more variable δ(13)C values than those of untapped trees. δ(13)C-L was higher and more stable across seasons than δ(13)C-S and δ(13)C-B, with a maximum seasonal difference of 0.7‰ for tapped trees and 0.3‰ for untapped trees. δ(13)C-B was lower in tapped than in untapped trees, increasing from August (middle of the rainy season) to April (end of the dry season). Differences in δ(13)C-L and δ(13)C-B between tapped and untapped trees indicated that tapping affects the metabolism of both laticiferous cells and leaves. The lack of correlation between δ(13)C-L and δ(13)C-S suggests that recent photosynthates are mixed in the large pool of stored carbohydrates that are involved in latex regeneration after tapping. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
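
    For reference, the δ(13)C values quoted above (in per mil, ‰) follow the standard delta notation, which is not restated in the record: the isotope ratio R of the sample is expressed relative to the VPDB standard,

        \[ \delta^{13}\mathrm{C} = \left( \frac{R_{\mathrm{sample}}}{R_{\mathrm{VPDB}}} - 1 \right) \times 1000, \qquad R = {}^{13}\mathrm{C} / {}^{12}\mathrm{C}, \]

    so the reported differences of 0.3-1.6‰ correspond to relative shifts of roughly 0.03-0.16% in the carbon isotope ratio.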

  10. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of
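
    The two outcome measures named above can be made concrete with a short sketch (hypothetical melodies and onset times; the study's own scoring procedure is not reproduced here): pitch accuracy as the percentage of correct pitches produced, and temporal regularity as the variability of quarter-note interonset intervals, taken here as their coefficient of variation.

        import numpy as np

        def pitch_accuracy(produced, target):
            """Percentage of produced pitches matching the target melody (illustrative scoring)."""
            correct = sum(p == t for p, t in zip(produced, target))
            return 100.0 * correct / len(target)

        def temporal_regularity(onsets):
            """Variability of interonset intervals; lower values indicate more regular timing."""
            ioi = np.diff(onsets)                # quarter-note interonset intervals (s)
            return np.std(ioi) / np.mean(ioi)    # coefficient of variation (one possible measure)

        target   = [60, 62, 64, 65, 67, 65, 64, 62]   # MIDI pitches of a hypothetical melody
        produced = [60, 62, 64, 65, 67, 64, 64, 62]
        onsets   = np.array([0.00, 0.51, 1.02, 1.49, 2.01, 2.52, 3.00, 3.51])

        print(pitch_accuracy(produced, target))       # 87.5
        print(temporal_regularity(onsets))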

  11. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  12. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    in the inner ear, or cochlea. The model was shown to account for various aspects of spectro-temporal processing and perception in tasks of intensity discrimination, tone-in-noise detection, forward masking, spectral masking and amplitude modulation detection. Secondly, a series of experiments was performed......-output functions, frequency selectivity, intensity discrimination limens and effects of simultaneous- and forward masking. Part of the measured data was used to adjust the parameters of the stages in the model, that simulate the cochlear processing. The remaining data were used to evaluate the fitted models....... It was shown that most observations in the measured consonant discrimination error patterns were predicted by the model, although error rates were systematically underestimated by the model in few particular acoustic-phonetic features. These results reflect a relation between basic auditory processing deficits...

  13. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but the expected right ear advantage (REA) for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  14. The Phoneme Identification Test for Assessment of Spectral and Temporal Discrimination Skills in Children: Development, Normative Data, and Test-Retest Reliability Studies.

    Science.gov (United States)

    Cameron, Sharon; Chong-White, Nicky; Mealings, Kiri; Beechey, Tim; Dillon, Harvey; Young, Taegan

    2018-02-01

    Previous research suggests that a proportion of children experiencing reading and listening difficulties may have an underlying primary deficit in the way that the central auditory nervous system analyses the perceptually important, rapidly varying, formant frequency components of speech. The Phoneme Identification Test (PIT) was developed to investigate the ability of children to use spectro-temporal cues to perceptually categorize speech sounds based on their rapidly changing formant frequencies. The PIT uses an adaptive two-alternative forced-choice procedure whereby the participant identifies a synthesized consonant-vowel (CV) (/ba/ or /da/) syllable. CV syllables differed only in the second formant (F2) frequency along an 11-step continuum (between 0% and 100%, representing an ideal /ba/ and /da/, respectively). The CV syllables were presented in either quiet (PIT Q) or noise at a 0 dB signal-to-noise ratio (PIT N). Development of the PIT stimuli and test protocols, and collection of normative and test-retest reliability data. Twelve adults (aged 23 yr 10 mo to 50 yr 9 mo, mean 32 yr 5 mo) and 137 typically developing, primary-school children (aged 6 yr 0 mo to 12 yr 4 mo, mean 9 yr 3 mo). There were 73 males and 76 females. Data were collected using a touchscreen computer. Psychometric functions were automatically fit to individual data by the PIT software. Performance was determined by the width of the continuum for which responses were neither clearly /ba/ nor /da/ (referred to as the uncertainty region [UR]). A shallower psychometric function slope reflected greater uncertainty. Age effects were determined based on raw scores. Z scores were calculated to account for the effect of age on performance. Outliers, and individual data for which the confidence interval of the UR exceeded a maximum allowable value, were removed. Nonparametric tests were used as the data were skewed toward negative performance. Across participants, the median value of the F2 range
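
    The psychometric analysis described above can be sketched as follows (illustrative only; the exact model fitted by the PIT software is not given in the record, and the response proportions below are hypothetical). A logistic function is fitted to the proportion of /da/ responses along the 11-step F2 continuum, and the uncertainty region is taken here as the range of the continuum over which the fitted probability lies between 0.25 and 0.75:

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(x, x0, k):
            """Proportion of /da/ responses as a function of continuum step (0-100%)."""
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        steps = np.linspace(0, 100, 11)   # 11-step /ba/-/da/ continuum
        p_da = np.array([0.02, 0.03, 0.05, 0.15, 0.35, 0.55, 0.78, 0.90, 0.96, 0.98, 0.99])

        (x0, k), _ = curve_fit(logistic, steps, p_da, p0=[50.0, 0.1])

        # Uncertainty region: the part of the continuum where responses are neither
        # clearly /ba/ nor /da/ (fitted probability between 0.25 and 0.75).
        width = 2 * np.log(3) / k
        print(f"slope = {k:.3f} per %, category boundary = {x0:.1f} %, UR width = {width:.1f} %")

    A shallower fitted slope yields a wider uncertainty region, matching the interpretation given above.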

  15. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  16. Auditory memory for random time patterns.

    Science.gov (United States)

    Kang, HiJee; Agus, Trevor R; Pressnitzer, Daniel

    2017-10-01

    The acquisition of auditory memory for temporal patterns was investigated. The temporal patterns were random sequences of irregularly spaced clicks. Participants performed a task previously used to study auditory memory for noise [Agus, Thorpe, and Pressnitzer (2010). Neuron 66, 610-618]. The memory for temporal patterns displayed strong similarities with the memory for noise: temporal patterns were learnt rapidly, in an unsupervised manner, and could be distinguished from statistically matched patterns after learning. There was, however, a qualitative difference from the memory for noise. For temporal patterns, no memory transfer was observed after time reversals, showing that both the time intervals and their order were represented in memory. Remarkably, learning was observed over a broad range of time scales, which encompassed rhythm-like and buzz-like temporal patterns. Temporal patterns present specific challenges to the neural mechanisms of plasticity, because the information to be learnt is distributed over time. Nevertheless, the present data show that the acquisition of novel auditory memories can be as efficient for temporal patterns as for sounds containing additional spectral and spectro-temporal cues, such as noise. This suggests that the rapid formation of memory traces may be a general by-product of repeated auditory exposure.
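
    To make the stimulus logic concrete (a sketch with assumed parameters; the study's click statistics are not reproduced here), a random temporal pattern, its time-reversed version and a statistically matched control can be generated from the same set of inter-click intervals:

        import numpy as np

        rng = np.random.default_rng(1)

        # Random sequence of irregularly spaced clicks within a 1-s window.
        clicks = np.sort(rng.uniform(0.0, 1.0, 12))
        intervals = np.diff(clicks)

        # Time-reversed pattern: the same intervals in reverse order
        # (the record reports no memory transfer to these).
        reversed_clicks = clicks[0] + np.concatenate(([0.0], np.cumsum(intervals[::-1])))

        # Statistically matched pattern: same interval distribution, shuffled order.
        matched_clicks = clicks[0] + np.concatenate(([0.0], np.cumsum(rng.permutation(intervals))))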

  17. Improving Dorsal Stream Function in Dyslexics By Training Figure/Ground Motion Discrimination Improves Reading Fluency, Attention, and Working Memory

    Directory of Open Access Journals (Sweden)

    Teri Lawton

    2016-08-01

    Full Text Available There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways, with a third reading intervention (control group) using linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

  18. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory

    Science.gov (United States)

    Lawton, Teri

    2016-01-01

    There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning. PMID:27551263

  19. Improving Dorsal Stream Function in Dyslexics by Training Figure/Ground Motion Discrimination Improves Attention, Reading Fluency, and Working Memory.

    Science.gov (United States)

    Lawton, Teri

    2016-01-01

    There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) targeting linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

  20. Congenital amusia: a disorder of fine-grained pitch discrimination.

    Science.gov (United States)

    Peretz, Isabelle; Ayotte, Julie; Zatorre, Robert J; Mehler, Jacques; Ahad, Pierre; Penhune, Virginia B; Jutras, Benoît

    2002-01-17

    We report the first documented case of congenital amusia. This disorder refers to a musical disability that cannot be explained by prior brain lesion, hearing loss, cognitive deficits, socioaffective disturbance, or lack of environmental stimulation. This musical impairment is diagnosed in a middle-aged woman, hereafter referred to as Monica, who lacks most basic musical abilities, including melodic discrimination and recognition, despite normal audiometry and above-average intellectual, memory, and language skills. The results of psychophysical tests show that Monica has severe difficulties with detecting pitch changes. The data suggest that music-processing difficulties may result from problems in fine-grained discrimination of pitch, much in the same way as many language-processing difficulties arise from deficiencies in auditory temporal resolution.

  1. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  2. Predicting Future Reading Problems Based on Pre-reading Auditory Measures: A Longitudinal Study of Children with a Familial Risk of Dyslexia.

    Science.gov (United States)

    Law, Jeremy M; Vandermosten, Maaike; Ghesquière, Pol; Wouters, Jan

    2017-01-01

    Purpose: This longitudinal study examines measures of temporal auditory processing in pre-reading children with a family risk of dyslexia. Specifically, it attempts to ascertain whether pre-reading auditory processing, speech perception, and phonological awareness (PA) reliably predict later literacy achievement. Additionally, this study retrospectively examines the presence of pre-reading auditory processing, speech perception, and PA impairments in children later found to be literacy impaired. Method: Forty-four pre-reading children with and without a family risk of dyslexia were assessed at three time points (kindergarten, first, and second grade). Auditory processing measures of rise time (RT) discrimination and frequency modulation (FM) along with speech perception, PA, and various literacy tasks were assessed. Results: Kindergarten RT uniquely contributed to growth in literacy in grades one and two, even after controlling for letter knowledge and PA. Highly significant concurrent and predictive correlations were observed with kindergarten RT significantly predicting first grade PA. Retrospective analysis demonstrated atypical performance in RT and PA at all three time points in children who later developed literacy impairments. Conclusions: Although significant, kindergarten auditory processing contributions to later literacy growth lack the power to be considered as a single-cause predictor; thus results support temporal processing deficits' contribution within a multiple deficit model of dyslexia.

  3. Visual and Auditory Sensitivities and Discriminations

    National Research Council Canada - National Science Library

    Regan, David

    2003-01-01

    .... A new equation gives TTC from binocular information without involving distance. The human visual system contains a mechanism that rapidly compares contours at two distant sites so as to encode the location size and shape of an object...

  4. Representation of dynamic interaural phase difference in auditory cortex of awake rhesus macaques.

    Science.gov (United States)

    Scott, Brian H; Malone, Brian J; Semple, Malcolm N

    2009-04-01

    Neurons in auditory cortex of awake primates are selective for the spatial location of a sound source, yet the neural representation of the binaural cues that underlie this tuning remains undefined. We examined this representation in 283 single neurons across the low-frequency auditory core in alert macaques, trained to discriminate binaural cues for sound azimuth. In response to binaural beat stimuli, which mimic acoustic motion by modulating the relative phase of a tone at the two ears, these neurons robustly modulate their discharge rate in response to this directional cue. In accordance with prior studies, the preferred interaural phase difference (IPD) of these neurons typically corresponds to azimuthal locations contralateral to the recorded hemisphere. Whereas binaural beats evoke only transient discharges in anesthetized cortex, neurons in awake cortex respond throughout the IPD cycle. In this regard, responses are consistent with observations at earlier stations of the auditory pathway. Discharge rate is a band-pass function of the frequency of IPD modulation in most neurons (73%), but both discharge rate and temporal synchrony are independent of the direction of phase modulation. When subjected to a receiver operator characteristic analysis, the responses of individual neurons are insufficient to account for the perceptual acuity of these macaques in an IPD discrimination task, suggesting the need for neural pooling at the cortical level.
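
    A binaural beat of the kind described can be sketched as follows (assumed carrier and beat frequencies; the study's exact stimulus parameters are not given in this record): a tone is presented to one ear and a slightly higher-frequency tone to the other, so that the interaural phase difference cycles continuously at the beat rate.

        import numpy as np

        fs = 44100                     # sample rate (Hz)
        dur = 2.0                      # stimulus duration (s)
        f_left = 500.0                 # carrier frequency at the left ear (Hz)
        beat = 2.0                     # IPD modulation (beat) frequency (Hz)
        f_right = f_left + beat

        t = np.arange(int(fs * dur)) / fs
        left = np.sin(2 * np.pi * f_left * t)
        right = np.sin(2 * np.pi * f_right * t)
        stereo = np.column_stack([left, right])   # IPD sweeps a full cycle every 1/beat seconds

        # Instantaneous interaural phase difference, wrapped to (-pi, pi]:
        ipd = np.angle(np.exp(1j * 2 * np.pi * beat * t))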

  5. Noise differentially impacts phoneme representations in the auditory and speech motor systems.

    Science.gov (United States)

    Du, Yi; Buchsbaum, Bradley R; Grady, Cheryl L; Alain, Claude

    2014-05-13

    Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (-12, -9, -6, -2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca's area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca's area exhibited effective phoneme categorization when SNR ≥ -6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.
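
    The SNR manipulation used here is a standard one; a minimal sketch (with placeholder signals, not the study's stimuli) scales the noise so that the speech-to-noise power ratio equals the requested level in dB before mixing:

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Add noise to speech after scaling it to the requested signal-to-noise ratio (dB)."""
            speech_power = np.mean(speech ** 2)
            noise_power = np.mean(noise ** 2)
            target_noise_power = speech_power / (10 ** (snr_db / 10))
            return speech + noise * np.sqrt(target_noise_power / noise_power)

        rng = np.random.default_rng(2)
        speech = np.sin(2 * np.pi * 150 * np.arange(16000) / 16000)   # placeholder "speech"
        noise = rng.standard_normal(16000)
        mixture = mix_at_snr(speech, noise, snr_db=-6)                # one of the SNRs listed above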

  6. From bird to sparrow: Learning-induced modulations in fine-grained semantic discrimination.

    Science.gov (United States)

    De Meo, Rosanna; Bourquin, Nathalie M-P; Knebel, Jean-François; Murray, Micah M; Clarke, Stephanie

    2015-09-01

    Recognition of environmental sounds is believed to proceed through discrimination steps from broad to more narrow categories. Very little is known about the neural processes that underlie fine-grained discrimination within narrow categories or about their plasticity in relation to newly acquired expertise. We investigated how the cortical representation of birdsongs is modulated by brief training to recognize individual species. During a 60-minute session, participants learned to recognize a set of birdsongs; they improved significantly their performance for trained (T) but not control species (C), which were counterbalanced across participants. Auditory evoked potentials (AEPs) were recorded during pre- and post-training sessions. Pre vs. post changes in AEPs were significantly different between T and C i) at 206-232ms post stimulus onset within a cluster on the anterior part of the left superior temporal gyrus; ii) at 246-291ms in the left middle frontal gyrus; and iii) 512-545ms in the left middle temporal gyrus as well as bilaterally in the cingulate cortex. All effects were driven by weaker activity for T than C species. Thus, expertise in discriminating T species modulated early stages of semantic processing, during and immediately after the time window that sustains the discrimination between human vs. animal vocalizations. Moreover, the training-induced plasticity is reflected by the sharpening of a left lateralized semantic network, including the anterior part of the temporal convexity and the frontal cortex. Training to identify birdsongs influenced, however, also the processing of C species, but at a much later stage. Correct discrimination of untrained sounds seems to require an additional step which results from lower-level features analysis such as apperception. We therefore suggest that the access to objects within an auditory semantic category is different and depends on subject's level of expertise. More specifically, correct intra

  7. A songbird forebrain area potentially involved in auditory ...

    Indian Academy of Sciences (India)

    These activity-dependent changes may underlie long-term modifications in the functional performance of NCM and constitute a potential neural substrate for auditory discrimination. We end this review by discussing evidence that suggests that NCM may be a site of auditory memory formation and/or storage.

  8. Auditory stimulus timing influences perceived duration of co-occurring visual stimuli

    Directory of Open Access Journals (Sweden)

    Vincenzo Romei

    2011-09-01

    Full Text Available There is increasing interest in multisensory influences upon sensory-specific judgements, such as when auditory stimuli affect visual perception. Here we studied whether the duration of an auditory event can objectively affect the perceived duration of a co-occurring visual event. On each trial, participants were presented with a pair of successive flashes and had to judge whether the first or second was longer. Two beeps were presented with the flashes. The order of short and long stimuli could be the same across audition and vision (audiovisual congruent) or reversed, so that the longer flash was accompanied by the shorter beep and vice versa (audiovisual incongruent); or the two beeps could have the same duration as each other. Beeps and flashes could onset synchronously or asynchronously. In a further control experiment, the beep durations were much longer (tripled) than the flashes. Results showed that visual duration-discrimination sensitivity (d') was significantly higher for congruent (and significantly lower for incongruent) audiovisual synchronous combinations, relative to the visual-only presentation. This effect was abolished when auditory and visual stimuli were presented asynchronously, or when sound durations tripled those of flashes. We conclude that the temporal properties of co-occurring auditory stimuli influence the perceived duration of visual stimuli and that this can reflect genuine changes in visual sensitivity rather than mere response bias.
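
    For reference, the sensitivity index d' reported above is, in its standard signal-detection form (not restated in the record), the difference between the z-transformed hit rate H and false-alarm rate F,

        \[ d' = z(H) - z(F), \qquad z = \Phi^{-1} \text{ (inverse of the standard normal CDF)}, \]

    so higher d' for congruent audiovisual pairings reflects better duration discrimination rather than a shift in response bias.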

  9. Silicon auditory processors as computer peripherals.

    Science.gov (United States)

    Lazzaro, J; Wawrzynek, J; Mahowald, M; Sivilotti, M; Gillespie, D

    1993-01-01

    Several research groups are implementing analog integrated circuit models of biological auditory processing. The outputs of these circuit models have taken several forms, including video format for monitor display, simple scanned output for oscilloscope display, and parallel analog outputs suitable for data-acquisition systems. Here, an alternative output method for silicon auditory models, suitable for direct interface to digital computers, is described. As a prototype of this method, an integrated circuit model of temporal adaptation in the auditory nerve that functions as a peripheral to a workstation running Unix is described. Data from a working hybrid system that includes the auditory model, a digital interface, and asynchronous software are given. This system produces a real-time X-window display of the response of the auditory nerve model.

  10. Music lessons improve auditory perceptual and cognitive performance in deaf children

    Directory of Open Access Journals (Sweden)

    Françoise Rochette

    2014-07-01

    Full Text Available Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5 to 4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically-trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  11. Music lessons improve auditory perceptual and cognitive performance in deaf children.

    Science.gov (United States)

    Rochette, Françoise; Moussard, Aline; Bigand, Emmanuel

    2014-01-01

    Despite advanced technologies in auditory rehabilitation of profound deafness, deaf children often exhibit delayed cognitive and linguistic development and auditory training remains a crucial element of their education. In the present cross-sectional study, we assess whether music would be a relevant tool for deaf children rehabilitation. In normal-hearing children, music lessons have been shown to improve cognitive and linguistic-related abilities, such as phonetic discrimination and reading. We compared auditory perception, auditory cognition, and phonetic discrimination between 14 profoundly deaf children who completed weekly music lessons for a period of 1.5-4 years and 14 deaf children who did not receive musical instruction. Children were assessed on perceptual and cognitive auditory tasks using environmental sounds: discrimination, identification, auditory scene analysis, auditory working memory. Transfer to the linguistic domain was tested with a phonetic discrimination task. Musically trained children showed better performance in auditory scene analysis, auditory working memory and phonetic discrimination tasks, and multiple regressions showed that success on these tasks was at least partly driven by music lessons. We propose that musical education contributes to development of general processes such as auditory attention and perception, which, in turn, facilitate auditory-related cognitive and linguistic processes.

  12. Effects of Multimodal Presentation and Stimulus Familiarity on Auditory and Visual Processing

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2010-01-01

    Two experiments examined the effects of multimodal presentation and stimulus familiarity on auditory and visual processing. In Experiment 1, 10-month-olds were habituated to either an auditory stimulus, a visual stimulus, or an auditory-visual multimodal stimulus. Processing time was assessed during the habituation phase, and discrimination of…

  13. Early Visual Deprivation Severely Compromises the Auditory Sense of Space in Congenitally Blind Children

    Science.gov (United States)

    Vercillo, Tiziana; Burr, David; Gori, Monica

    2016-01-01

    A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…

  14. Language processing of auditory cortex revealed by functional magnetic resonance imaging in presbycusis patients.

    Science.gov (United States)

    Chen, Xianming; Wang, Maoxin; Deng, Yihong; Liang, Yonghui; Li, Jianzhong; Chen, Shiyan

    2016-01-01

    Contralateral temporal lobe activation decreases with aging, regardless of hearing status, with elderly individuals showing reduced right ear advantage. Aging and hearing loss possibly lead to the decline in speech discrimination seen in presbycusis. To evaluate presbycusis patients' auditory cortex activation under verbal stimulation. Thirty-six participants were enrolled: 10 presbycusis patients (mean age = 64 years, range = 60-70), 10 in the healthy aged group (mean age = 66 years, range = 60-70), and 16 young healthy volunteers (mean age = 25 years, range = 23-28). These three groups underwent simultaneous stimulation with 1 kHz, 90 dB single-syllable words and blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) examinations. The main activation regions were the superior temporal and middle temporal gyri. For all aged subjects, the right region of interest (ROI) activation volume was decreased compared with the young group. With left ear stimulation, bilateral ROI activation intensity held. With right ear stimulation, the aged group's activation intensity was higher. Using monaural stimulation in the young group, contralateral temporal lobe activation volume and intensity were higher vs ipsilateral, while they were lower in the aged and presbycusis groups. On left and right ear auditory tasks, the young group showed right ear advantage, while the aged and presbycusis groups showed reduced right ear advantage.

  15. The use of listening devices to ameliorate auditory deficit in children with autism.

    Science.gov (United States)

    Rance, Gary; Saunders, Kerryn; Carew, Peter; Johansson, Marlin; Tan, Johanna

    2014-02-01

    To evaluate both monaural and binaural processing skills in a group of children with autism spectrum disorder (ASD) and to determine the degree to which personal frequency modulation (radio transmission) (FM) listening systems could ameliorate their listening difficulties. Auditory temporal processing (amplitude modulation detection), spatial listening (integration of binaural difference cues), and functional hearing (speech perception in background noise) were evaluated in 20 children with ASD. Ten of these subsequently underwent a 6-week device trial in which they wore the FM system for up to 7 hours per day. Auditory temporal processing and spatial listening ability were poorer in subjects with ASD than in matched controls (temporal: P = .014 [95% CI -6.4 to -0.8 dB], spatial: P = .003 [1.0 to 4.4 dB]), and performance on both of these basic processing measures was correlated with speech perception ability (temporal: r = -0.44, P = .022; spatial: r = -0.50, P = .015). The provision of FM listening systems resulted in improved discrimination of speech in noise and improved communication ratings (P = .019 [-40.1% to -5.0%]). Eight of the 10 participants who undertook the 6-week device trial remained consistent FM users at study completion. Sustained use of FM listening devices can enhance speech perception in noise, aid social interaction, and improve educational outcomes in children with ASD. Copyright © 2014 Mosby, Inc. All rights reserved.

  16. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references on objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation caused by exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue findings reported here are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  17. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate...... sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance of statistically blurred sound textures presented at a fixed blur increased over those presented......

  18. Effect of conductive hearing loss on central auditory function

    Directory of Open Access Journals (Sweden)

    Arash Bayat

    Full Text Available Introduction: It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals, or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. Objective: This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. Methods: During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years old, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. Results: The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p = 0.004; left: p < 0.05), and there was no significant association with the degree of hearing loss in either group (p > 0.05). Conclusion: The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended.
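
    To illustrate the kind of stimulus on which the GIN test is based (a simplified sketch with assumed parameters; the clinical test uses calibrated, pre-recorded noise segments, each containing a variable number of gaps), the snippet below embeds a single silent gap of a given duration within a burst of broadband noise:

        import numpy as np

        def gap_in_noise(gap_ms, noise_ms=6000, fs=44100, gap_onset_ms=3000):
            """Broadband noise containing one silent gap (illustrative, not the clinical GIN)."""
            rng = np.random.default_rng(3)
            noise = rng.standard_normal(int(fs * noise_ms / 1000))
            start = int(fs * gap_onset_ms / 1000)
            stop = start + int(fs * gap_ms / 1000)
            noise[start:stop] = 0.0               # insert the silent gap
            return noise / np.max(np.abs(noise))  # normalize amplitude

        stimulus = gap_in_noise(gap_ms=5)         # a 5-ms gap, near typical detection thresholds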

  19. Effect of conductive hearing loss on central auditory function.

    Science.gov (United States)

    Bayat, Arash; Farhadi, Mohammad; Emamdjomeh, Hesam; Saki, Nader; Mirmomeni, Golshan; Rahim, Fakher

    It has been demonstrated that long-term Conductive Hearing Loss (CHL) may influence the precise detection of the temporal features of acoustic signals or Auditory Temporal Processing (ATP). It can be argued that ATP may be the underlying component of many central auditory processing capabilities such as speech comprehension or sound localization. Little is known about the consequences of CHL on temporal aspects of central auditory processing. This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. During this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years old, were recruited. In order to evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. The average of GIN thresholds was significantly smaller for the control group than for the CHL group for both ears (right: p = 0.004; left: p < 0.05); the percentage of correct responses was also better in the normal-hearing group for both sides, and no significant association with the degree of hearing loss was found in either group (p > 0.05). The results suggest reduced auditory temporal processing ability in adults with CHL compared to normal hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  20. Semantic discrimination in nonspeaking youngsters with moderate or severe retardation: electrophysiological correlates.

    Science.gov (United States)

    Molfese, D L; Morris, R D; Romski, M A

    1990-01-01

    Recent studies have used auditory evoked response (AER) procedures to study word meaning in young infants. The present study represents an initial application of these procedures to nonspeaking subjects with moderate or severe mental retardation. AERs were recorded from electrodes placed on the scalp over frontal, temporal, and parietal regions of the left and right hemispheres. As six symbol-experienced subjects viewed visual-graphic symbols (lexigrams), a series of probe tones were presented to elicit the AERs. Half of the symbols were meaningful to the subjects. AER activity recorded from the left hemisphere frontal and temporal electrode sites discriminated between the meaningful and meaningless symbols. Discriminant function analyses indicated that the wave-forms could be correctly classified in terms of the evoking stimulus with greater than 80% accuracy. These findings support the usefulness of AERs for studying the neurolinguistic processes underlying behavioral measures of language performance of difficult-to-assess populations.

  1. Auditory perception in individuals with Friedreich's ataxia.

    Science.gov (United States)

    Rance, Gary; Corben, Louise; Barker, Elizabeth; Carew, Peter; Chisari, Donella; Rogers, Meghan; Dowell, Richard; Jamaluddin, Saiful; Bryson, Rochelle; Delatycki, Martin B

    2010-01-01

    Friedreich's ataxia (FRDA) is an inherited ataxia with a range of progressive features including axonal degeneration of sensory nerves. The aim of this study was to investigate auditory perception in affected individuals. Fourteen subjects with genetically defined FRDA participated. Two control groups, one consisting of healthy, normally hearing individuals and another comprised of subjects with sensorineural hearing loss, were also assessed. Auditory processing was evaluated using structured tasks designed to reveal the listeners' ability to perceive temporal and spectral cues. Findings were then correlated with open-set speech understanding. Nine of 14 individuals with FRDA showed evidence of auditory processing disorder. Gap and amplitude modulation detection levels in these subjects were significantly elevated, indicating impaired encoding of rapid signal changes. Electrophysiologic findings (auditory brainstem response, ABR) also reflected disrupted neural activity. Speech understanding was significantly affected in these listeners and the degree of disruption was related to temporal processing ability. Speech analyses indicated that timing cues (notably consonant voice onset time and vowel duration) were most affected. The results suggest that auditory pathway abnormality is a relatively common consequence of FRDA. Regular auditory evaluation should therefore be part of the management regime for all affected individuals. This assessment should include both ABR testing, which can provide insights into the degree to which auditory neural activity is disrupted, and some functional measure of hearing capacity such as speech perception assessment, which can quantify the disorder and provide a basis for intervention. Copyright 2009 S. Karger AG, Basel.

  2. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias, however the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.

  3. Recurrence of task set-related MEG signal patterns during auditory working memory.

    Science.gov (United States)

    Peters, Benjamin; Bledowski, Christoph; Rieder, Maria; Kaiser, Jochen

    2016-06-01

    Processing of auditory spatial and non-spatial information in working memory has been shown to rely on separate cortical systems. While previous studies have demonstrated differences in spatial versus non-spatial processing from the encoding of to-be-remembered stimuli onwards, here we investigated whether such differences would be detectable already prior to presentation of the sample stimulus. We analyzed broad-band magnetoencephalography data from 15 healthy adults during an auditory working memory paradigm starting with a visual cue indicating the task-relevant stimulus feature for a given trial (lateralization or pitch) and a subsequent 1.5-s pre-encoding phase. This was followed by a sample sound (0.2s), the delay phase (0.8s) and a test stimulus (0.2s), after which participants made a match/non-match decision. Linear discriminant functions were trained to decode task-specific signal patterns throughout the task, and temporal generalization was used to assess whether the neural codes discriminating between the tasks during the pre-encoding phase would recur during later task periods. The spatial versus non-spatial tasks could indeed be discriminated from the onset of the cue onwards, and decoders trained during the pre-encoding phase successfully discriminated the tasks during both sample stimulus encoding and during the delay phase. This demonstrates that task-specific neural codes are established already before the memorandum is presented and that the same patterns are reestablished during stimulus encoding and maintenance. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.
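
    The decoding-with-temporal-generalization approach described above can be illustrated with a short sketch. The code below is not the authors' analysis pipeline; it assumes synthetic MEG-like data (trials x sensors x time points), a binary task label, and a linear discriminant classifier under cross-validation to fill a train-time x test-time generalization matrix. All dimensions and the injected effect are hypothetical.

        # Sketch of time-resolved decoding with temporal generalization on
        # synthetic MEG-like data (trials x sensors x time points). Hypothetical
        # dimensions and effect; not the authors' analysis pipeline.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import StratifiedKFold

        rng = np.random.default_rng(0)
        n_trials, n_sensors, n_times = 120, 30, 50
        X = rng.standard_normal((n_trials, n_sensors, n_times))
        y = np.repeat([0, 1], n_trials // 2)          # 0 = pitch cue, 1 = lateralization cue
        X[y == 1, :5, 20:] += 0.5                     # inject a weak task-specific pattern

        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        gen = np.zeros((n_times, n_times))            # train-time x test-time accuracy

        for train_idx, test_idx in cv.split(X[:, :, 0], y):
            for t_train in range(n_times):
                clf = LinearDiscriminantAnalysis()
                clf.fit(X[train_idx, :, t_train], y[train_idx])
                for t_test in range(n_times):
                    gen[t_train, t_test] += clf.score(X[test_idx, :, t_test], y[test_idx])

        gen /= cv.get_n_splits()
        # Above-chance off-diagonal cells indicate that a pattern learned at one
        # time point (e.g., pre-encoding) recurs at another (e.g., the delay).
        print(round(gen.diagonal().mean(), 3), round(gen.mean(), 3))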

  4. Discrimination in Autism within Different Sensory Modalities

    Science.gov (United States)

    O'Riordan, Michelle; Passetti, Filippo

    2006-01-01

    Recent studies have suggested that unusual visual processing in autism might stem from enhanced visual discrimination. Although there are also many anecdotal reports of auditory and tactile processing disturbances in autism these have received comparatively little attention. It is possible that the enhanced discrimination ability in vision in…

  5. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integration, and retention of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Perisaccadic localization of auditory stimuli.

    Science.gov (United States)

    Klingenhoefer, Steffen; Bremmer, Frank

    2009-09-01

    Interaction with the outside world requires knowledge of where objects are with respect to one's own body. Such spatial information is represented in various topographic maps in different sensory systems. From a computational point of view, however, a single, modality-invariant map of the incoming sensory signals appears to be a more efficient strategy for spatial representations. If such a single supra-modal map existed and were used for perceptual purposes, localization characteristics should be similar across modalities. Previous studies had shown mislocalization of brief visual stimuli presented in the temporal vicinity of saccadic eye-movements. Here, we tested whether such mislocalizations could also be found for auditory stimuli. We presented brief noise bursts before, during, and after visually guided saccades. Indeed, we found localization errors for these auditory stimuli. The spatio-temporal pattern of this mislocalization, however, clearly differed from the one found for visual stimuli. The spatial error also depended on the exact type of eye-movement (visually guided vs. memory guided saccades). Finally, results obtained in fixational control paradigms under different conditions suggest that auditory localization can be strongly influenced by both static and dynamic visual stimuli. Visual localization, on the other hand, is not influenced by distracting visual stimuli but can be inaccurate in the temporal vicinity of eye-movements. Taken together, our results argue against a single, modality-independent spatial representation of sensory signals.

  7. Developmental hearing loss impedes auditory task learning and performance in gerbils

    Science.gov (United States)

    von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N.; Sanes, Dan H.

    2016-01-01

    The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning and increase the risk of poor performance on a temporal task. PMID:27746215
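
    In a Go/Nogo design, the signal detection theory framework typically summarizes performance as d-prime, the difference between the z-transformed hit rate and false-alarm rate. The sketch below illustrates that computation; the log-linear correction is one common choice and is not necessarily the one used in this study.

        # Minimal d-prime computation for a Go/Nogo discrimination task under a
        # signal detection theory framework. The log-linear correction applied
        # here is one common choice; the study's exact correction is not stated.
        from scipy.stats import norm

        def dprime(n_hits, n_go, n_false_alarms, n_nogo):
            # Keep hit and false-alarm rates away from 0 and 1 before z-transforming.
            hit_rate = (n_hits + 0.5) / (n_go + 1.0)
            fa_rate = (n_false_alarms + 0.5) / (n_nogo + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Example: 45 hits on 50 Go trials, 12 false alarms on 50 Nogo trials.
        print(round(dprime(45, 50, 12, 50), 2))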

  8. Developmental hearing loss impedes auditory task learning and performance in gerbils.

    Science.gov (United States)

    von Trapp, Gardiner; Aloni, Ishita; Young, Stephen; Semple, Malcolm N; Sanes, Dan H

    2017-04-01

    The consequences of developmental hearing loss have been reported to include both sensory and cognitive deficits. To investigate these issues in a non-human model, auditory learning and asymptotic psychometric performance were compared between normal hearing (NH) adult gerbils and those reared with conductive hearing loss (CHL). At postnatal day 10, before ear canal opening, gerbil pups underwent bilateral malleus removal to induce a permanent CHL. Both CHL and control animals were trained to approach a water spout upon presentation of a target (Go stimuli), and withhold for foils (Nogo stimuli). To assess the rate of task acquisition and asymptotic performance, animals were tested on an amplitude modulation (AM) rate discrimination task. Behavioral performance was calculated using a signal detection theory framework. Animals reared with developmental CHL displayed a slower rate of task acquisition for the AM discrimination task. Slower acquisition was explained by an impaired ability to generalize to newly introduced stimuli, as compared to controls. Measurement of discrimination thresholds across consecutive testing blocks revealed that CHL animals required a greater number of testing sessions to reach asymptotic threshold values, as compared to controls. However, with sufficient training, CHL animals approached control performance. These results indicate that a sensory impediment can delay auditory learning and increase the risk of poor performance on a temporal task. Copyright © 2016 Elsevier B.V. All rights reserved.

  9. Auditory Training for Children with Processing Disorders.

    Science.gov (United States)

    Katz, Jack; Cohen, Carolyn F.

    1985-01-01

    The article provides an overview of central auditory processing (CAP) dysfunction and reviews research on approaches to improve perceptual skills; to provide discrimination training for communicative and reading disorders; to increase memory and analysis skills and dichotic listening; to provide speech-in-noise training; and to amplify speech as…

  10. Broadcasting Auditory Weather Reports – A Pilot Project

    OpenAIRE

    Hermann, Thomas; Drees, Jan M.; Ritter, Helge; Brazil, Eoin; Shinn-Cunningham, Barbara

    2003-01-01

    This paper reports on a pilot project between our research department and a local radio station, investigating the use of sonification to render and present auditory weather forecasts. The sonifications include auditory markers for certain relevant time points, expected weather events like thunder, snow or fog, and several auditory streams to summarize the temporal weather changes during the day. To our knowledge, this is the first utilization of sonification in a regular radio program. We...

  11. Processing of pitch and location in human auditory cortex during visual and auditory tasks.

    Science.gov (United States)

    Häkkinen, Suvi; Ovaska, Noora; Rinne, Teemu

    2015-01-01

    The relationship between stimulus-dependent and task-dependent activations in human auditory cortex (AC) during pitch and location processing is not well understood. In the present functional magnetic resonance imaging study, we investigated the processing of task-irrelevant and task-relevant pitch and location during discrimination, n-back, and visual tasks. We tested three hypotheses: (1) According to prevailing auditory models, stimulus-dependent processing of pitch and location should be associated with enhanced activations in distinct areas of the anterior and posterior superior temporal gyrus (STG), respectively. (2) Based on our previous studies, task-dependent activation patterns during discrimination and n-back tasks should be similar when these tasks are performed on sounds varying in pitch or location. (3) Previous studies in humans and animals suggest that pitch and location tasks should enhance activations especially in those areas that also show activation enhancements associated with stimulus-dependent pitch and location processing, respectively. Consistent with our hypotheses, we found stimulus-dependent sensitivity to pitch and location in anterolateral STG and anterior planum temporale (PT), respectively, in line with the view that these features are processed in separate parallel pathways. Further, task-dependent activations during discrimination and n-back tasks were associated with enhanced activations in anterior/posterior STG and posterior STG/inferior parietal lobule (IPL) irrespective of stimulus features. However, direct comparisons between pitch and location tasks performed on identical sounds revealed no significant activation differences. These results suggest that activations during pitch and location tasks are not strongly affected by enhanced stimulus-dependent activations to pitch or location. We also found that activations in PT were strongly modulated by task requirements and that areas in the inferior parietal lobule (IPL) showed

  12. Positive and negative reinforcement activate human auditory cortex

    Directory of Open Access Journals (Sweden)

    Tina eWeis

    2013-12-01

    Full Text Available Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, where participants had to judge the duration of frequency modulated tones. In one session correct performance resulted in financial gains at the end of the trial; in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded condition, as well as correct performance in the punishment condition, resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cents) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward and when avoiding financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in the right middle temporal gyrus, bilateral insula and pre-supplemental motor area; here, however, neural activity was lower after correct responses compared with incorrect responses. To summarize, this study shows that the activation of sensory cortices, as previously shown for gaining a reward, is also seen when avoiding a loss.

  13. A computational model of human auditory signal processing and perception.

    Science.gov (United States)

    Jepsen, Morten L; Ewert, Stephan D; Dau, Torsten

    2008-07-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications.
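
    To give a flavour of such a processing chain, the sketch below passes an amplitude-modulated tone through a single bandpass "auditory filter", a half-wave rectification and low-pass stage standing in for hair-cell transduction, and a 150-Hz low-pass modulation filter. It is a drastic simplification for illustration only: the nonlinear basilar-membrane stage, adaptation loops, modulation filterbank, internal noise and optimal detector of the actual model are omitted, and all parameter values are assumptions.

        # Drastically simplified, single-channel sketch of a peripheral processing
        # chain: one bandpass "auditory filter", half-wave rectification plus
        # low-pass filtering as a stand-in for hair-cell transduction, and a
        # 150-Hz low-pass modulation filter. This is NOT the published model.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs = 16000
        t = np.arange(0, 0.5, 1 / fs)
        signal = np.sin(2 * np.pi * 1000 * t) * (1 + 0.5 * np.sin(2 * np.pi * 8 * t))  # 8-Hz AM tone

        sos_bm = butter(4, [900, 1100], btype="bandpass", fs=fs, output="sos")
        channel = sosfiltfilt(sos_bm, signal)         # one basilar-membrane-like channel

        rectified = np.maximum(channel, 0.0)          # half-wave rectification
        sos_hc = butter(2, 1000, btype="lowpass", fs=fs, output="sos")
        envelope = sosfiltfilt(sos_hc, rectified)     # crude hair-cell envelope

        sos_mod = butter(1, 150, btype="lowpass", fs=fs, output="sos")
        modulation = sosfiltfilt(sos_mod, envelope)   # 150-Hz low-pass modulation filter
        print(modulation.shape)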

  14. Auditory mismatch impairments are characterized by core neural dysfunctions in schizophrenia.

    Science.gov (United States)

    Gaebler, Arnim Johannes; Mathiak, Klaus; Koten, Jan Willem; König, Andrea Anna; Koush, Yury; Weyer, David; Depner, Conny; Matentzoglu, Simeon; Edgar, James Christopher; Willmes, Klaus; Zvyagintsev, Mikhail

    2015-05-01

    data performed similarly or worse for up to about 10 features. However, connectivity data yielded a better performance when including more than 10 features yielding up to 90% accuracy. Among others, the most discriminating features represented functional connections between the auditory cortex and the anterior cingulate cortex as well as adjacent prefrontal areas. Auditory mismatch impairments incorporate major neural dysfunctions in schizophrenia. Our data suggest synergistic effects of sensory processing deficits, aberrant salience attribution, prefrontal hypoactivation as well as a disrupted connectivity between temporal and prefrontal cortices. These deficits are associated with subsequent disturbances in modality-specific resource allocation. Capturing different schizophrenic core dysfunctions, functional magnetic resonance imaging during this optimized mismatch paradigm reveals processing impairments on the individual patient level, rendering it a potential biomarker of schizophrenia. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes of the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week TMS series of the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combination of TMS and repetitive fMRI is promising to elucidate the physiological changes induced by TMS.

  16. Shared and Divergent Auditory and Tactile Processing in Children with Autism and Children with Sensory Processing Dysfunction Relative to Typically Developing Peers.

    Science.gov (United States)

    Demopoulos, Carly; Brandes-Aitken, Annie N; Desai, Shivani S; Hill, Susanna S; Antovich, Ashley D; Harris, Julia; Marco, Elysa J

    2015-07-01

    The aim of this study was to compare sensory processing in typically developing children (TDC), children with Autism Spectrum Disorder (ASD), and those with sensory processing dysfunction (SPD) in the absence of an ASD. Performance-based measures of auditory and tactile processing were compared between male children ages 8-12 years assigned to an ASD (N=20), SPD (N=15), or TDC group (N=19). Both the SPD and ASD groups were impaired relative to the TDC group on a performance-based measure of tactile processing (right-handed graphesthesia). In contrast, only the ASD group showed significant impairment on an auditory processing index assessing dichotic listening, temporal patterning, and auditory discrimination. Furthermore, this impaired auditory processing was associated with parent-rated communication skills for both the ASD group and the combined study sample. No significant group differences were detected on measures of left-handed graphesthesia, tactile sensitivity, or form discrimination; however, more participants in the SPD group demonstrated a higher tactile detection threshold (60%) compared to the TDC (26.7%) and ASD groups (35%). This study provides support for use of performance-based measures in the assessment of children with ASD and SPD and highlights the need to better understand how sensory processing affects the higher order cognitive abilities associated with ASD, such as verbal and non-verbal communication, regardless of diagnostic classification.

  17. Asymmetric transfer of auditory perceptual learning

    Directory of Open Access Journals (Sweden)

    Sygal eAmitay

    2012-11-01

    Full Text Available Perceptual skills can improve dramatically even with minimal practice. A major and practical benefit of learning, however, is in transferring the improvement on the trained task to untrained tasks or stimuli, yet the mechanisms underlying this process are still poorly understood. Reduction of internal noise has been proposed as a mechanism of perceptual learning, and while we have evidence that frequency discrimination (FD) learning is due to a reduction of internal noise, the source of that noise was not determined. In this study, we examined whether reducing the noise associated with neural phase locking to tones can explain the observed improvement in behavioural thresholds. We compared FD training between two tone durations (15 and 100 ms) that straddled the temporal integration window of auditory nerve fibers upon which computational modeling of phase locking noise was based. Training on short tones resulted in improved FD on probe tests of both the long and short tones. Training on long tones resulted in improvement only on the long tones. Simulations of FD learning, based on the computational model and on signal detection theory, were compared with the behavioral FD data. We found that improved fidelity of phase locking accurately predicted transfer of learning from short to long tones, but also predicted transfer from long to short tones. The observed lack of transfer from long to short tones suggests the involvement of a second mechanism. Training may have increased the temporal integration window which could not transfer because integration time for the short tone is limited by its duration. Current learning models assume complex relationships between neural populations that represent the trained stimuli. In contrast, we propose that training-induced enhancement of the signal-to-noise ratio offers a parsimonious explanation of learning and transfer that easily accounts for asymmetric transfer of learning.

  18. Assessment of anodal and cathodal transcranial direct current stimulation (tDCS) on MMN-indexed auditory sensory processing.

    Science.gov (United States)

    Impey, Danielle; de la Salle, Sara; Knott, Verner

    2016-06-01

    Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a very weak constant current to temporarily excite (anodal stimulation) or inhibit (cathodal stimulation) activity in the brain area of interest via small electrodes placed on the scalp. Currently, tDCS of the frontal cortex is being used as a tool to investigate cognition in healthy controls and to improve symptoms in neurological and psychiatric patients. tDCS has been found to facilitate cognitive performance on measures of attention, memory, and frontal-executive functions. Recently, a short session of anodal tDCS over the temporal lobe has been shown to increase auditory sensory processing as indexed by the Mismatch Negativity (MMN) event-related potential (ERP). This preliminary pilot study examined the separate and interacting effects of both anodal and cathodal tDCS on MMN-indexed auditory pitch discrimination. In a randomized, double blind design, the MMN was assessed before (baseline) and after tDCS (2mA, 20min) in 2 separate sessions, one involving 'sham' stimulation (the device is turned off), followed by anodal stimulation (to temporarily excite cortical activity locally), and one involving cathodal stimulation (to temporarily decrease cortical activity locally), followed by anodal stimulation. Results demonstrated that anodal tDCS over the temporal cortex increased MMN-indexed auditory detection of pitch deviance, and while cathodal tDCS decreased auditory discrimination in baseline-stratified groups, subsequent anodal stimulation did not significantly alter MMN amplitudes. These findings strengthen the position that tDCS effects on cognition extend to the neural processing of sensory input and raise the possibility that this neuromodulatory technique may be useful for investigating sensory processing deficits in clinical populations. Copyright © 2016 Elsevier Inc. All rights reserved.
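
    The MMN is conventionally quantified from the difference between the averaged response to deviant and standard stimuli. The sketch below shows that computation on synthetic single-trial data; the sampling rate, epoch limits and the 100-200 ms analysis window are assumptions for illustration and are not taken from this study.

        # Sketch of MMN quantification on synthetic single-trial EEG data:
        # average deviant and standard epochs, subtract, and take the mean of
        # the difference wave in an assumed 100-200 ms window.
        import numpy as np

        fs = 500                                      # sampling rate in Hz (assumed)
        times = np.arange(-0.1, 0.4, 1 / fs)          # epoch from -100 to 400 ms

        rng = np.random.default_rng(1)
        standards = rng.standard_normal((200, times.size))   # trials x time (hypothetical)
        deviants = rng.standard_normal((40, times.size))
        deviants[:, (times > 0.1) & (times < 0.2)] -= 1.0    # simulated deviance negativity

        difference = deviants.mean(axis=0) - standards.mean(axis=0)
        window = (times >= 0.1) & (times <= 0.2)
        print(f"MMN amplitude: {difference[window].mean():.2f} (arbitrary units)")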

  19. The Effect of Short-Term Auditory Training on Speech in Noise Perception and Cortical Auditory Evoked Potentials in Adults with Cochlear Implants

    Science.gov (United States)

    Barlow, Nathan; Purdy, Suzanne C.; Sharma, Mridula; Giles, Ellen; Narne, Vijay

    2016-01-01

    This study investigated whether a short intensive psychophysical auditory training program is associated with speech perception benefits and changes in cortical auditory evoked potentials (CAEPs) in adult cochlear implant (CI) users. Ten adult implant recipients trained approximately 7 hours on psychophysical tasks (Gap-in-Noise Detection, Frequency Discrimination, Spectral Rippled Noise [SRN], Iterated Rippled Noise, Temporal Modulation). Speech performance was assessed before and after training using Lexical Neighborhood Test (LNT) words in quiet and in eight-speaker babble. CAEPs evoked by a natural speech stimulus /baba/ with varying syllable stress were assessed pre- and post-training, in quiet and in noise. SRN psychophysical thresholds showed a significant improvement (78% on average) over the training period, but performance on other psychophysical tasks did not change. LNT scores in noise improved significantly post-training by 11% on average compared with three pretraining baseline measures. N1P2 amplitude changed post-training for /baba/ in quiet (p = 0.005, visit 3 pretraining versus visit 4 post-training). CAEP changes did not correlate with behavioral measures. CI recipients' clinical records indicated a plateau in speech perception performance prior to participation in the study. A short period of intensive psychophysical training produced small but significant gains in speech perception in noise and spectral discrimination ability. There remain questions about the most appropriate type of training and the duration or dosage of training that provides the most robust outcomes for adults with CIs. PMID:27587925

  20. Auditory event files: integrating auditory perception and action planning.

    Science.gov (United States)

    Zmigrod, Sharon; Hommel, Bernhard

    2009-02-01

    The features of perceived objects are processed in distinct neural pathways, which call for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken altogether, the findings are consistent with the idea of episodic event files integrating perception and action plans.

  1. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
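
    Channel-pair functional connectivity of the kind described above is commonly computed as the Pearson correlation between channel time courses over the analysis window. The sketch below illustrates this on hypothetical data; the channel count and sampling rate are assumptions, and the study's fNIRS preprocessing (e.g., conversion to oxy-/deoxy-hemoglobin, filtering, motion correction) is not reproduced.

        # Sketch of channel-pair resting-state connectivity: Pearson correlations
        # between channel time courses over a 60-s window. Hypothetical data.
        import numpy as np

        fs = 10                                       # fNIRS sampling rate in Hz (assumed)
        n_channels = 16                               # channel count (assumed)
        rng = np.random.default_rng(2)
        hbo = rng.standard_normal((n_channels, 60 * fs))   # channels x time

        connectivity = np.corrcoef(hbo)               # n_channels x n_channels matrix
        pair_idx = np.triu_indices(n_channels, k=1)   # one value per channel pair
        print(connectivity[pair_idx].shape)           # (120,) pairs for 16 channels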

  2. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  3. Spectral and temporal measures in hybrid cochlear implant users: on the mechanism of electroacoustic hearing benefits.

    Science.gov (United States)

    Golub, Justin S; Won, Jong Ho; Drennan, Ward R; Worman, Tina D; Rubinstein, Jay T

    2012-02-01

    Objective. Compare auditory performance of Hybrid and standard cochlear implant users with psychoacoustic measures of spectral and temporal sensitivity and correlate with measures of clinical benefit. Study design. Cross-sectional study. Setting. Tertiary academic medical center. Patients. Hybrid cochlear implant users between 12 and 33 months after implantation. Hybrid recipients had preservation of low-frequency hearing. Intervention. Administration of psychoacoustic, music perception, and speech reception in noise tests. Main outcome measures. Performance on spectral-ripple discrimination, temporal modulation detection, Schroeder-phase discrimination, Clinical Assessment of Music Perception, and speech reception in steady-state noise tests. Results. Clinical Assessment of Music Perception pitch performance at 262 Hz was significantly better in Hybrid users compared with standard implant controls. There was a near significant difference on speech reception in steady-state noise. Surprisingly, neither Schroeder-phase discrimination at 2 frequencies nor temporal modulation detection thresholds across a range of frequencies revealed any advantage in Hybrid users. This contrasts with spectral-ripple measures that were significantly better in the Hybrid group. The spectral-ripple advantage was preserved even when using only residual hearing. Conclusion. These preliminary data confirm existing data demonstrating that residual low-frequency acoustic hearing is advantageous for pitch perception. Results also suggest that clinical benefits enjoyed by Hybrid recipients are due to improved spectral discrimination provided by the residual hearing. No evidence indicated that residual hearing provided temporal information beyond that provided by electric stimulation.
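
    A spectral-ripple stimulus of the general kind used in such tests can be approximated by summing many random-phase tones whose levels follow a sinusoidal ripple on a log-frequency axis. The sketch below is illustrative only; the bandwidth, ripple density, ripple depth and component count are assumptions and do not correspond to the clinical test materials.

        # Sketch of a spectrally rippled noise: many random-phase tones whose
        # levels follow a sinusoidal ripple on a log-frequency axis.
        import numpy as np

        fs, dur = 44100, 0.5
        t = np.arange(0, dur, 1 / fs)
        rng = np.random.default_rng(3)

        f_lo, f_hi, n_tones = 100.0, 5000.0, 200
        freqs = np.geomspace(f_lo, f_hi, n_tones)     # log-spaced components
        ripples_per_octave, ripple_depth_db = 1.0, 30.0
        ripple_phase = rng.uniform(0, 2 * np.pi)      # inverted for the "oddball" interval

        octaves = np.log2(freqs / f_lo)
        level_db = (ripple_depth_db / 2) * np.sin(2 * np.pi * ripples_per_octave * octaves + ripple_phase)
        amps = 10 ** (level_db / 20)

        tone_phases = rng.uniform(0, 2 * np.pi, n_tones)
        stimulus = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t + tone_phases[:, None])).sum(axis=0)
        stimulus /= np.abs(stimulus).max()            # normalize to +/-1
        print(stimulus.shape)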

  4. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    Science.gov (United States)

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  5. Discrimination task reveals differences in neural bases of tinnitus and hearing impairment.

    Directory of Open Access Journals (Sweden)

    Fatima T Husain

    Full Text Available We investigated auditory perception and cognitive processing in individuals with chronic tinnitus or hearing loss using functional magnetic resonance imaging (fMRI). Our participants belonged to one of three groups: bilateral hearing loss and tinnitus (TIN), bilateral hearing loss without tinnitus (HL), and normal hearing without tinnitus (NH). We employed pure tones and frequency-modulated sweeps as stimuli in two tasks: passive listening and active discrimination. All subjects had normal hearing through 2 kHz and all stimuli were low-pass filtered at 2 kHz so that all participants could hear them equally well. Performance was similar among all three groups for the discrimination task. In all participants, a distributed set of brain regions including the primary and non-primary auditory cortices showed greater response for both tasks compared to rest. Comparing the groups directly, we found decreased activation in the parietal and frontal lobes in the participants with tinnitus compared to the HL group and decreased response in the frontal lobes relative to the NH group. Additionally, the HL subjects exhibited increased response in the anterior cingulate relative to the NH group. Our results suggest that a differential engagement of a putative auditory attention and short-term memory network, comprising regions in the frontal, parietal and temporal cortices and the anterior cingulate, may represent a key difference in the neural bases of chronic tinnitus accompanied by hearing loss relative to hearing loss alone.

  6. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  8. Reality of auditory verbal hallucinations

    Science.gov (United States)

    Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-01-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the subjects experienced a hallucination to be depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency. PMID:19620178

  9. Binaural auditory processing in multiple sclerosis subjects.

    Science.gov (United States)

    Levine, R A; Gardner, J C; Stufflebeam, S M; Fullerton, B C; Carlisle, E W; Furst, M; Rosen, B R; Kiang, N Y

    1993-06-01

    In order to relate human auditory processing to physiological and anatomical experimental animal data, we have examined the interrelationships between behavioral, electrophysiological and anatomical data obtained from human subjects with focal brainstem lesions. Thirty-eight subjects with multiple sclerosis were studied with tests of interaural time and level discrimination (just noticeable differences or jnds), brainstem auditory evoked potentials and magnetic resonance (MR) imaging. Interaural testing used two types of stimuli, high-pass (> 4000 Hz) and low-pass (< 1000 Hz) noise bursts. Abnormal time jnds (Tjnd) were far more common than abnormal level jnds (70% vs 11%), especially for the high-pass (Hp) noise (70% abnormal vs 40% abnormal for low-pass (Lp) noise). The HpTjnd could be abnormal with no other abnormalities; however, whenever the BAEPs, LpTjnd and/or level jnds were abnormal, the HpTjnd was always abnormal. Abnormal wave III amplitude was associated with abnormalities in both time jnds, but abnormal wave III latency with only abnormal HpTjnds. Abnormal wave V amplitude, when unilateral, was associated with a major HpTjnd abnormality, and, when bilateral, with both HpTjnd and LpTjnd major abnormalities. Sixteen of the subjects had their MR scans obtained with a uniform protocol and could be analyzed with objective criteria. In all four subjects with lesions involving the pontine auditory pathway, the BAEPs and both time jnds were abnormal. Of the twelve subjects with no lesions involving the pontine auditory pathway, all had normal BAEPs and level jnds, ten had normal LpTjnds, but only five had normal HpTjnds. We conclude that interaural time discrimination is closely related to the BAEPs and is dependent upon the stimulus spectrum. Redundant encoding of low-frequency sounds in the discharge patterns of auditory neurons may explain why the HpTjnd is a better indicator of neural desynchrony than the LpTjnd. Encroachment of MS lesions upon the pontine
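
    Interaural time jnd testing of the kind described above rests on presenting the same noise burst to the two ears with a small time offset. The sketch below generates a high-pass noise burst and applies an interaural delay by shifting one channel; the cutoff frequency, ITD value and the absence of ramps or level equalization are assumptions for illustration.

        # Sketch of a dichotic high-pass noise burst with an interaural time
        # difference (ITD): the same burst is presented to both ears with one
        # channel delayed by a few samples.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt

        fs, dur = 48000, 0.3
        rng = np.random.default_rng(4)
        noise = rng.standard_normal(int(fs * dur))

        sos = butter(4, 4000, btype="highpass", fs=fs, output="sos")
        burst = sosfiltfilt(sos, noise)               # high-pass (>4000 Hz) noise burst

        itd_samples = int(round(100e-6 * fs))         # 100-microsecond ITD (assumed)
        left = burst
        right = np.concatenate([np.zeros(itd_samples), burst[:-itd_samples]])  # delayed ear
        stereo = np.stack([left, right], axis=1)      # samples x 2 channels
        print(stereo.shape)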

  10. An interactive model of auditory-motor speech perception.

    Science.gov (United States)

    Liebenthal, Einat; Möttönen, Riikka

    2017-12-18

    Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream connecting between temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Neurogenetics and auditory processing in developmental dyslexia.

    Science.gov (United States)

    Giraud, Anne-Lise; Ramus, Franck

    2013-02-01

    Dyslexia is a polygenic developmental reading disorder characterized by an auditory/phonological deficit. Based on the latest genetic and neurophysiological studies, we propose a tentative model in which phonological deficits could arise from genetic anomalies of the cortical micro-architecture in the temporal lobe. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Auditory and visual scene analysis : an overview

    NARCIS (Netherlands)

    Kondo, Hirohito M; van Loon, Anouk M; Kawahara, Jun-Ichiro; Moore, Brian C J

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how 'scene analysis' is performed in

  13. Processing of communication calls in Guinea pig auditory cortex.

    Science.gov (United States)

    Grimsley, Jasmine M S; Shanbhag, Sharad J; Palmer, Alan R; Wallace, Mark N

    2012-01-01

    Vocal communication is an important aspect of guinea pig behaviour and a large contributor to their acoustic environment. We postulated that some cortical areas have distinctive roles in processing conspecific calls. In order to test this hypothesis we presented exemplars from all ten of their main adult vocalizations to urethane anesthetised animals while recording from each of the eight areas of the auditory cortex. We demonstrate that the primary area (AI) and three adjacent auditory belt areas contain many units that give isomorphic responses to vocalizations. These are the ventrorostral belt (VRB), the transitional belt area (T) that is ventral to AI and the small area (area S) that is rostral to AI. Area VRB has a denser representation of cells that are better at discriminating among calls by using either a rate code or a temporal code than any other area. Furthermore, 10% of VRB cells responded to communication calls but did not respond to stimuli such as clicks, broadband noise or pure tones. Area S has a sparse distribution of call responsive cells that showed excellent temporal locking, 31% of which selectively responded to a single call. AI responded well to all vocalizations and was much more responsive to vocalizations than the adjacent dorsocaudal core area. Areas VRB, AI and S contained units with the highest levels of mutual information about call stimuli. Area T also responded well to some calls but seems to be specialized for low sound levels. The two dorsal belt areas are comparatively unresponsive to vocalizations and contain little information about the calls. AI projects to areas S, VRB and T, so there may be both rostral and ventral pathways for processing vocalizations in the guinea pig.

  14. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  15. Video game players show more precise multisensory temporal processing abilities.

    Science.gov (United States)

    Donohue, Sarah E; Woldorff, Marty G; Mitroff, Stephen R

    2010-05-01

    Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. In the present study, we examined whether video game players' benefits generalize beyond vision to multisensory processing by presenting auditory and visual stimuli within a short temporal window to video game players and non-video game players. Participants performed two discrimination tasks, both of which revealed benefits for video game players: In a simultaneity judgment task, video game players were better able to distinguish whether simple visual and auditory stimuli occurred at the same moment or slightly offset in time, and in a temporal-order judgment task, they revealed an enhanced ability to determine the temporal sequence of multisensory stimuli. These results suggest that people with extensive experience playing video games display benefits that extend beyond the visual modality to also impact multisensory processing.

  16. Reduced auditory segmentation potentials in first-episode schizophrenia.

    Science.gov (United States)

    Coffman, Brian A; Haigh, Sarah M; Murphy, Timothy K; Leiter-Mcbeth, Justin; Salisbury, Dean F

    2017-10-22

    Auditory scene analysis (ASA) dysfunction is likely an important component of the symptomatology of schizophrenia. Auditory object segmentation, the grouping of sequential acoustic elements into temporally-distinct auditory objects, can be assessed with electroencephalography through measurement of the auditory segmentation potential (ASP). Further, N2 responses to the initial and final elements of auditory objects are enhanced relative to medial elements, which may indicate auditory object edge detection (initiation and termination). Both ASP and N2 modulation are impaired in long-term schizophrenia. To determine whether these deficits are present early in disease course, we compared ASP and N2 modulation between individuals at their first episode of psychosis within the schizophrenia spectrum (FE, N=20) and matched healthy controls (N=24). The ASP was reduced by >40% in FE; however, N2 modulation was not statistically different from HC. This suggests that auditory segmentation (ASP) deficits exist at this early stage of schizophrenia, but auditory edge detection (N2 modulation) is relatively intact. In a subset of subjects for whom structural MRIs were available (N=14 per group), ASP sources were localized to midcingulate cortex (MCC) and temporal auditory cortex. Neurophysiological activity in FE was reduced in MCC, an area linked to aberrant perceptual organization, negative symptoms, and cognitive dysfunction in schizophrenia, but not temporal auditory cortex. This study supports the validity of the ASP for measurement of auditory object segmentation and suggests that the ASP may be useful as an early index of schizophrenia-related MCC dysfunction. Further, ASP deficits may serve as a viable biomarker of disease presence. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Predictors of auditory performance in hearing-aid users: The role of cognitive function and auditory lifestyle (A)

    DEFF Research Database (Denmark)

    Vestergaard, Martin David

    2006-01-01

    no objective benefit can be measured. It has been suggested that lack of agreement between various hearing-aid outcome components can be explained by individual differences in cognitive function and auditory lifestyle. We measured speech identification, self-report outcome, spectral and temporal resolution...... of hearing, cognitive skills, and auditory lifestyle in 25 new hearing-aid users. The purpose was to assess the predictive power of the nonauditory measures while looking at the relationships between measures from various auditory-performance domains. The results showed that only moderate correlation exists...... between objective and subjective hearing-aid outcome. Different self-report outcome measures showed a different amount of correlation with objective auditory performance. Cognitive skills were found to play a role in explaining speech performance and spectral and temporal abilities, and auditory lifestyle...

  18. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single-neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  19. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher level of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits, and their abnormal timing may underlie their disfluency.

  20. Price Discrimination

    OpenAIRE

    Armstrong, M.

    2008-01-01

    This paper surveys recent economic research on price discrimination, both in monopoly and oligopoly markets. Topics include static and dynamic forms of price discrimination, and both final and input markets are considered. Potential antitrust aspects of price discrimination are highlighted throughout the paper. The paper argues that the informational requirements to make accurate policy are very great, and with most forms of price discrimination a laissez-faire policy may be the best availabl...

  1. Auditory agnosia as a clinical symptom of childhood adrenoleukodystrophy.

    Science.gov (United States)

    Furushima, Wakana; Kaga, Makiko; Nakamura, Masako; Gunji, Atsuko; Inagaki, Masumi

    2015-08-01

    To investigate detailed auditory features in patients with auditory impairment as the first clinical symptom of childhood adrenoleukodystrophy (CSALD), we studied three patients who had hearing difficulty as the first clinical sign and/or symptom of ALD. Precise examination of the clinical characteristics of hearing and auditory function was performed, including assessments of pure tone audiometry, verbal sound discrimination, otoacoustic emission (OAE), and auditory brainstem response (ABR), as well as an environmental sound discrimination test, a sound lateralization test, and a dichotic listening test (DLT). The auditory pathway was evaluated by MRI in each patient. Poor response to calling was detected in all patients. Two patients were not aware of their hearing difficulty, and had at first been diagnosed with normal hearing by otolaryngologists. Pure-tone audiometry disclosed normal hearing in all patients. All patients showed a normal wave V ABR threshold. All three patients showed obvious difficulty in discriminating verbal sounds and environmental sounds, impaired sound lateralization, and strong left-ear suppression in the dichotic listening test. However, once they discriminated verbal sounds, they correctly understood the meaning. Two patients showed prolongation of the I-V and III-V interwave intervals in ABR, but one showed no abnormality. MRIs of these three patients revealed signal changes in the auditory radiation as well as in other subcortical areas. The hearing features of these subjects were diagnosed as auditory agnosia and not aphasia. It should be emphasized that when patients are suspected to have hearing impairment but have no abnormalities in pure tone audiometry and/or ABR, this should not be diagnosed immediately as a psychogenic response or pathomimesis; auditory agnosia must also be considered. Copyright © 2014 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  2. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.
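
    For illustration, a minimal Python sketch of the general kind of cross-validated multivariate pattern classification described above; the synthetic "voxel" patterns, the amount of category separation, and the choice of a linear classifier are assumptions made for this sketch, not details taken from the study.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n_trials, n_voxels = 80, 50

        # Synthetic activity patterns for two auditory categories (e.g., voice
        # vs. non-voice); the small mean shift stands in for category signal.
        voice = rng.normal(loc=0.3, scale=1.0, size=(n_trials, n_voxels))
        other = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_voxels))
        X = np.vstack([voice, other])
        y = np.array([1] * n_trials + [0] * n_trials)

        # Cross-validated accuracy of a linear classifier serves as a simple
        # index of how well the pattern distinguishes voice from non-voice.
        acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
        print(f"mean cross-validated accuracy: {acc:.2f}")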

  3. Auditory dysfunction in patients with Huntington's disease.

    Science.gov (United States)

    Profant, Oliver; Roth, Jan; Bureš, Zbyněk; Balogová, Zuzana; Lišková, Irena; Betka, Jan; Syka, Josef

    2017-10-01

    Huntington's disease (HD) is an autosomal dominantly inherited neurodegenerative disease. The main clinical features are motor impairment, progressive cognitive deterioration and behavioral changes. The aim of our study was to find out whether patients with HD suffer from disorders of the auditory system. A group of 17 genetically verified patients (11 males, 6 females) with various stages of HD (examined by the UHDRS motor part and total functional capacity, and by the MMSE for cognitive functions) underwent an audiological examination (high frequency pure tone audiometry, otoacoustic emissions, speech audiometry, speech audiometry in babble noise, auditory brainstem responses). Additionally, 5 patients underwent a more extensive audiological examination focused on central auditory processing. The results were compared with a group of age-matched healthy volunteers. Our results show that HD patients have physiologic hearing thresholds, otoacoustic emissions and auditory brainstem responses; however, they display a significant decrease in speech understanding, especially under demanding conditions (speech in noise), compared to age-matched controls. Additional auditory tests also show deficits in sound source localization based on temporal and intensity cues. We also observed a statistically significant correlation between the perception of speech in noise and motor and cognitive functions. However, a correlation between genetic predisposition (number of triplets) and the function of the inner ear was not found. We conclude that HD negatively influences the function of the central part of the auditory system at cortical and subcortical levels, altering predominantly speech processing and sound source lateralization. We have thoroughly characterized the auditory pathology in patients with HD, which suggests involvement of central auditory and cognitive areas. Copyright © 2017. Published by Elsevier B.V.

  4. Multiple sclerosis lesions of the auditory pons are not silent.

    Science.gov (United States)

    Levine, R A; Gardner, J C; Fullerton, B C; Stufflebeam, S M; Furst, M; Rosen, B R

    1994-10-01

    To understand the relationship between brainstem lesions and auditory neurology in patients with multiple sclerosis, we compared behavioural, electrophysiological and imaging data in 38 patients with probable or definite multiple sclerosis and normal or near normal hearing. Behavioural measures included (i) general hearing tests (audiogram, speech discrimination) and (ii) hearing tests likely to be critically dependent upon brainstem processing (masking level difference, interaural time and level discrimination). Brainstem auditory evoked potentials provided the electrophysiological data. Multiplanar high-resolution MRI of the brainstem provided the anatomical data. Interaural time discrimination for high-frequency sounds was by far the most sensitive of all tests with abnormalities in 71% of all subjects. Whenever any other test was abnormal this test was always abnormal. Interaural time discrimination for low-frequency sounds and evoked potentials were closely related and next most sensitive with abnormalities in approximately 40% of all subjects. Interaural level discrimination and masking level difference were least sensitive with abnormalities in < 10% of subjects. Speech discrimination scores correlated significantly with the masking level differences, as well as with interaural time discrimination for high-frequency sounds. Pontine lesions were found in five of the 16 patients, in whom an objective method for detecting magnetic resonance lesions could be applied. All four with lesions involving the pontine auditory pathway had marked abnormalities in interaural time discrimination and evoked potentials. None of the other 12 had evoked potentials abnormalities. We conclude that neurological tests requiring precise neural timing can reveal behavioural deficits for multiple sclerosis lesions of the auditory pons that are otherwise 'silent'. Of all neurological systems the auditory system at the level of the pons is probably the most sensitive to multiple

  5. Auditory and cognitive performance in elderly musicians and nonmusicians.

    Science.gov (United States)

    Grassi, Massimo; Meneghetti, Chiara; Toffalini, Enrico; Borella, Erika

    2017-01-01

    Musicians represent a model for examining brain and behavioral plasticity in terms of cognitive and auditory profile, but few studies have investigated whether elderly musicians have better auditory and cognitive abilities than nonmusicians. The aim of the present study was to examine whether being a professional musician attenuates the normal age-related changes in hearing and cognition. Elderly musicians still active in their profession were compared with nonmusicians on auditory performance (absolute threshold; frequency, intensity, duration and spectral shape discrimination; gap and sinusoidal amplitude-modulation detection), and on simple (short-term memory) and more complex and higher-order (working memory [WM] and visuospatial abilities) cognitive tasks. The sample consisted of adults at least 65 years of age. The results showed that older musicians had similar absolute thresholds but better supra-threshold discrimination abilities than nonmusicians in four of the six auditory tasks administered. They also had better WM performance and stronger visuospatial abilities than nonmusicians. No differences were found between the two groups' short-term memory. Frequency discrimination and gap detection for the auditory measures, and WM complex span tasks and one of the visuospatial tasks for the cognitive ones proved to be very good classifiers of the musicians. These findings suggest that life-long music training may be associated with enhanced auditory and cognitive performance, including complex cognitive skills, in advanced age. However, whether this music training represents a protective factor or not needs further investigation.

  6. Auditory and cognitive performance in elderly musicians and nonmusicians.

    Directory of Open Access Journals (Sweden)

    Massimo Grassi

    Full Text Available Musicians represent a model for examining brain and behavioral plasticity in terms of cognitive and auditory profile, but few studies have investigated whether elderly musicians have better auditory and cognitive abilities than nonmusicians. The aim of the present study was to examine whether being a professional musician attenuates the normal age-related changes in hearing and cognition. Elderly musicians still active in their profession were compared with nonmusicians on auditory performance (absolute threshold; frequency, intensity, duration and spectral shape discrimination; gap and sinusoidal amplitude-modulation detection), and on simple (short-term memory) and more complex and higher-order (working memory [WM] and visuospatial abilities) cognitive tasks. The sample consisted of adults at least 65 years of age. The results showed that older musicians had similar absolute thresholds but better supra-threshold discrimination abilities than nonmusicians in four of the six auditory tasks administered. They also had better WM performance and stronger visuospatial abilities than nonmusicians. No differences were found between the two groups' short-term memory. Frequency discrimination and gap detection for the auditory measures, and WM complex span tasks and one of the visuospatial tasks for the cognitive ones proved to be very good classifiers of the musicians. These findings suggest that life-long music training may be associated with enhanced auditory and cognitive performance, including complex cognitive skills, in advanced age. However, whether this music training represents a protective factor or not needs further investigation.

  7. Differential discriminator

    International Nuclear Information System (INIS)

    Dukhanov, V.I.; Mazurov, I.B.

    1981-01-01

    A schematic circuit diagram of a differential discriminator intended for operation in a spectrometric circuit with a statistical time distribution of pulses is described. The differential discriminator includes four integrated discriminators and a pile-up rejection channel. The presence of the rejection channel enables the discriminator to operate effectively at loads of 14×10³ pulses/s. The temperature instability of the discrimination thresholds equals 250 μV/°C. The discrimination level is adjustable within 0.1-5 V; the level shift is 0.5% for a filling ratio of 1:10. The rejection coefficient is not less than 90%. The alpha spectrum of a ²²⁸Th source is presented to evaluate discriminator operation with the rejector. The rejector provides 50 ns time resolution.
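
    As a rough software analogue of the behaviour described above (not a description of the actual hardware), the following Python sketch applies an amplitude discrimination window to pulses and rejects piled-up pulses that fall within an assumed resolving time; the event list, window limits, and resolving time are illustrative values only.

        # Toy window discriminator with pile-up rejection.
        def discriminate(pulses, low, high, resolving_time=50e-9):
            """pulses: list of (time_s, amplitude_V); return accepted pulses."""
            accepted = []
            for i, (t, a) in enumerate(pulses):
                in_window = low <= a <= high
                piled_up = any(0 < abs(t - t2) < resolving_time
                               for j, (t2, _) in enumerate(pulses) if j != i)
                if in_window and not piled_up:
                    accepted.append((t, a))
            return accepted

        events = [(1.00e-6, 2.1), (1.02e-6, 3.4),   # 20 ns apart -> both rejected
                  (5.00e-6, 2.5),                   # inside window -> accepted
                  (9.00e-6, 0.4)]                   # below window -> rejected
        print(discriminate(events, low=1.0, high=3.0))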

  8. Parvalbumin immunoreactivity in the auditory cortex of a mouse model of presbycusis.

    Science.gov (United States)

    Martin del Campo, H N; Measor, K R; Razak, K A

    2012-12-01

    Age-related hearing loss (presbycusis) affects ∼35% of humans older than sixty-five years. Symptoms of presbycusis include impaired discrimination of sounds with fast temporal features, such as those present in speech. Such symptoms likely arise because of central auditory system plasticity, but the underlying components are incompletely characterized. The rapid spiking inhibitory interneurons that co-express the calcium binding protein Parvalbumin (PV) are involved in shaping neural responses to fast spectrotemporal modulations. Here, we examined cortical PV expression in the C57bl/6 (C57) mouse, a strain commonly studied as a presbycusis model. We examined if PV expression showed auditory cortical field- and layer-specific susceptibilities with age. The percentage of PV-expressing cells relative to Nissl-stained cells was counted in the anterior auditory field (AAF) and primary auditory cortex (A1) in three age groups: young (1-2 months), middle-aged (6-8 months) and old (14-20 months). There were significant declines in the percentage of cells expressing PV at a detectable level in layers I-IV of both A1 and AAF in the old mice compared to young mice. In layers V-VI, there was an increase in the percentage of PV-expressing cells in the AAF of the old group. There were no changes in percentage of PV-expressing cells in layers V-VI of A1. These data suggest cortical layer(s)- and field-specific susceptibility of PV+ cells with presbycusis. The results are consistent with the hypothesis that a decline in inhibitory neurotransmission, particularly in the superficial cortical layers, occurs with presbycusis. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Pediatric extratemporal epilepsy presenting with a complex auditory aura.

    Science.gov (United States)

    Clarke, Dave F; Boop, Frederick A; McGregor, Amy L; Perkins, F Frederick; Brewer, Vickie R; Wheless, James W

    2008-06-01

    Ear plugging (placing fingers in or covering the ears) is a clinical seizure semiology that has been described as a response to an unformed auditory hallucination localized to the superior temporal neocortex. When the auditory hallucination is more complex, the localizing value of ear plugging may be less clear, because more involved circuitry may be engaged. We report on one child, whose aura was a more complex auditory phenomenon, consisting of a door opening and closing, getting louder as the ictus persisted. This child presented, at four years of age, with brief episodes of ear plugging followed by an acute emotional change that persisted until surgical resection of a left mesial frontal lesion at 11 years of age. Scalp video-EEG, magnetic resonance imaging, magnetoencephalography, and invasive video-EEG monitoring were carried out. The scalp EEG changes always started after clinical onset. These were not localizing, and encompassed a wide field over the bi-frontal head regions, the left side predominant over the right. Intracranial video-EEG monitoring with subdural electrodes over both frontal and temporal regions localized the seizure onset to the left mesial frontal lesion. The patient has remained seizure-free since the resection on June 28, 2006, approximately one and a half years ago. Ear plugging in response to simple auditory auras localizes to the superior temporal gyrus. If the patient has more complex, formed auditory auras, not only may the secondary auditory areas in the temporal lobe be involved, but one has to entertain the possibility of ictal onset from the frontal cortex.

  10. Visual form predictions facilitate auditory processing at the N1.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive processing rather than on multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  11. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system for incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention-to-memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  13. Speech training alters consonant and vowel responses in multiple auditory cortex fields.

    Science.gov (United States)

    Engineer, Crystal T; Rahebi, Kimiya C; Buell, Elizabeth P; Fink, Melyssa K; Kilgard, Michael P

    2015-01-01

    Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Auditory Processing, Linguistic Prosody Awareness, and Word Reading in Mandarin-Speaking Children Learning English

    Science.gov (United States)

    Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.

    2017-01-01

    This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…

  15. Spectro-temporal cues enhance modulation sensitivity in cochlear implant users

    Science.gov (United States)

    Zheng, Yi; Escabí, Monty; Litovsky, Ruth Y.

    2018-01-01

    Although speech understanding is highly variable amongst cochlear implant (CI) subjects, the remarkably high speech recognition performance of many CI users is unexpected and not well understood. Numerous factors, including neural health and degradation of the spectral information in the speech signal of CIs, likely contribute to speech understanding. We studied the ability to use spectro-temporal modulations, which may be critical for speech understanding and discrimination, and hypothesize that CI users adopt a different perceptual strategy than normal-hearing (NH) individuals, whereby they rely more heavily on joint spectro-temporal cues to enhance detection of auditory cues. Modulation detection sensitivity was studied in CI users and NH subjects using broadband “ripple” stimuli that were modulated spectrally, temporally, or jointly, i.e., spectro-temporally. The spectro-temporal modulation transfer functions of CI users and NH subjects were decomposed into spectral and temporal dimensions and compared to those subjects’ spectral-only and temporal-only modulation transfer functions. In CI users, the joint spectro-temporal sensitivity was better than that predicted by spectral-only and temporal-only sensitivity, indicating a heightened spectro-temporal sensitivity. Such an enhancement through the combined integration of spectral and temporal cues was not observed in NH subjects. The unique use of spectro-temporal cues by CI patients can yield benefits for use of cues that are important for speech understanding. This finding has implications for developing sound processing strategies that may rely on joint spectro-temporal modulations to improve speech comprehension of CI users, and the findings of this study may be valuable for developing clinical assessment tools to optimize CI processor performance. PMID:28601530
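
    A minimal Python sketch of one common way to synthesize a jointly spectro-temporally modulated "ripple" stimulus; the carrier spacing, modulation rate, spectral density, and depth below are illustrative assumptions, not the parameters used in the study.

        import numpy as np

        fs = 16000                    # sample rate (Hz)
        dur = 1.0                     # duration (s)
        t = np.arange(int(fs * dur)) / fs

        f_lo, f_hi, n_carriers = 250.0, 4000.0, 40
        freqs = np.geomspace(f_lo, f_hi, n_carriers)    # log-spaced carriers
        x = np.log2(freqs / f_lo)                       # position in octaves

        rate = 4.0        # temporal modulation rate (Hz)
        density = 1.0     # spectral modulation density (cycles/octave)
        depth = 0.9       # modulation depth

        rng = np.random.default_rng(1)
        phases = rng.uniform(0, 2 * np.pi, n_carriers)  # random carrier phases

        # Each carrier is amplitude-modulated by the joint envelope.
        stim = np.zeros_like(t)
        for fc, xi, ph in zip(freqs, x, phases):
            env = 1.0 + depth * np.sin(2 * np.pi * (rate * t + density * xi))
            stim += env * np.sin(2 * np.pi * fc * t + ph)
        stim /= np.max(np.abs(stim))                    # normalize amplitude

    Setting density to zero gives a temporal-only modulation, and setting rate to zero gives a spectral-only modulation, which is how the three conditions differ conceptually.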

  16. The role of auditory abilities in basic mechanisms of cognition in older adults

    Directory of Open Access Journals (Sweden)

    Massimo eGrassi

    2013-10-01

    Full Text Available The aim of this study was to assess age-related differences between young and older adults in auditory abilities and to investigate the relationship between auditory abilities and basic mechanisms of cognition in older adults. Although there is a certain consensus that the participant’s sensitivity to the absolute intensity of sounds (such as that measured via pure tone audiometry) explains his/her cognitive performance, there is not yet much evidence that the participant’s auditory ability (i.e., the whole supra-threshold processing of sounds) explains his/her cognitive performance. Twenty-eight young adults (age < 35), 26 young-old adults (65 ≤ age ≤ 75) and 28 old-old adults (age > 75) were presented with a set of tasks estimating several auditory abilities (i.e., frequency discrimination, intensity discrimination, duration discrimination, timbre discrimination, gap detection, amplitude modulation detection, and the absolute threshold for a 1 kHz pure tone) and the participant’s working memory, cognitive inhibition, and processing speed. Results showed an age-related decline in both auditory and cognitive performance. Moreover, regression analyses showed that a subset of the auditory abilities (i.e., the ability to discriminate frequency, duration, timbre, and the ability to detect amplitude modulation) explained a significant part of the variance observed in processing speed in older adults. Overall, the present results highlight the relationship between auditory abilities and basic mechanisms of cognition.

  17. Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.

    Science.gov (United States)

    Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne

    2016-12-01

    It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination, and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. The Role of Visual and Auditory Stimuli in Continuous Performance Tests: Differential Effects on Children With ADHD.

    Science.gov (United States)

    Simões, Eunice N; Carvalho, Ana L Novais; Schmidt, Sergio L

    2018-04-01

    Continuous performance tests (CPTs) usually utilize visual stimuli. A previous investigation showed that inattention is partially independent of modality, but response inhibition is modality-specific. Here we aimed to compare performance on visual and auditory CPTs in ADHD and in healthy controls. The sample consisted of 160 elementary and high school students (43 ADHD, 117 controls). For each sensory modality, five variables were extracted: commission errors (CEs) and omission errors (OEs), reaction time (RT), variability of reaction time (VRT), and coefficient of variability (CofV = VRT / RT). The ADHD group exhibited higher rates for all test variables. The discriminant analysis indicated that auditory OE was the most reliable variable for discriminating between groups, followed by visual CE, auditory CE, and auditory CofV. Discriminant equation classified ADHD with 76.3% accuracy. Auditory parameters in the inattention domain (OE and VRT) can discriminate ADHD from controls. For the hyperactive/impulsive domain (CE), the two modalities are equally important.
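
    A small Python sketch showing how the five CPT variables named above can be derived from trial-level responses; the toy trial records and field names are assumptions chosen for illustration.

        import numpy as np

        # One record per trial: was it a target, did the child respond, RT (s).
        trials = [
            {"target": True,  "responded": True,  "rt": 0.42},
            {"target": True,  "responded": False, "rt": None},  # omission error
            {"target": False, "responded": True,  "rt": 0.38},  # commission error
            {"target": False, "responded": False, "rt": None},
            {"target": True,  "responded": True,  "rt": 0.55},
        ]

        oe = sum(t["target"] and not t["responded"] for t in trials)    # omissions
        ce = sum((not t["target"]) and t["responded"] for t in trials)  # commissions
        hits = [t["rt"] for t in trials if t["target"] and t["responded"]]

        rt = float(np.mean(hits))      # mean reaction time on correct hits
        vrt = float(np.std(hits))      # variability of reaction time
        cofv = vrt / rt                # coefficient of variability (CofV = VRT/RT)
        print(oe, ce, round(rt, 3), round(vrt, 3), round(cofv, 3))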

  19. Recovery characteristics of the electrically stimulated auditory nerve in deafened guinea pigs: relation to neuronal status.

    Science.gov (United States)

    Ramekers, Dyan; Versnel, Huib; Strahl, Stefan B; Klis, Sjaak F L; Grolman, Wilko

    2015-03-01

    Successful cochlear implant performance requires adequate responsiveness of the auditory nerve to prolonged pulsatile electrical stimulation. Degeneration of the auditory nerve as a result of severe hair cell loss could considerably compromise this ability. The main objective of this study was to characterize the recovery of the electrically stimulated auditory nerve, as well as to evaluate possible changes caused by deafness-induced degeneration. To this end we studied temporal responsiveness of the auditory nerve in a guinea pig model of sensorineural hearing loss. Using masker-probe and pulse train paradigms we compared electrically evoked compound action potentials (eCAPs) in normal-hearing animals with those in animals with moderate (two weeks after ototoxic treatment) and severe (six weeks after ototoxic treatment) loss of spiral ganglion cells (SGCs). The masker-probe interval and the pulse train inter-pulse interval were varied from 0.3 to 16 ms. Whereas recovery assessed with masker-probe stimulation was roughly similar for normal-hearing and both groups of deafened animals, it was considerably faster for six weeks deaf animals (τ ≈ 1.2 ms) than for two weeks deaf or normal-hearing animals (τ ≈ 3-4 ms) when 100-ms pulse trains were applied. Latency increased with decreasing inter-pulse intervals, and this was more pronounced with pulse trains than with masker-probe stimulation. With high frequency pulse train stimulation eCAP amplitudes were modulated for deafened animals, meaning that amplitudes for odd pulse numbers were larger than for even pulses. The relative refractory period (τ) and the modulation depth of the eCAP amplitude for pulse trains, as well as the latency increase for both paradigms, significantly correlated with quantified measures of auditory nerve degeneration (size and packing density of SGCs). In addition to these findings, separate masker-probe recovery functions for the eCAP N1 and N2 peaks displayed a robust non-monotonic or shoulder
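
    A brief Python sketch of fitting an exponential recovery function to masker-probe eCAP amplitudes to estimate a recovery time constant (τ); the functional form and the synthetic data are common modelling assumptions for illustration, not the study's actual fitting procedure.

        import numpy as np
        from scipy.optimize import curve_fit

        def recovery(mpi, a_max, t0, tau):
            """eCAP amplitude vs. masker-probe interval (ms)."""
            return a_max * (1.0 - np.exp(-(mpi - t0) / tau))

        mpi = np.array([0.3, 0.5, 1, 2, 4, 8, 16])          # intervals (ms)
        amp = recovery(mpi, 1.0, 0.3, 3.0)                  # synthetic "data"
        amp = amp + np.random.default_rng(2).normal(0, 0.02, amp.size)

        params, _ = curve_fit(recovery, mpi, amp, p0=[1.0, 0.3, 2.0])
        print("estimated tau (ms): %.2f" % params[2])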

  20. Nature of auditory processing disorder in children.

    Science.gov (United States)

    Moore, David R; Ferguson, Melanie A; Edmondson-Jones, A Mark; Ratib, Sonia; Riley, Alison

    2010-08-01

    We tested the specific hypothesis that the presentation of auditory processing disorder (APD) is related to a sensory processing deficit. Randomly chosen, 6- to 11-year-old children with normal hearing (N = 1469) were tested in schools in 4 regional centers across the United Kingdom. Caregivers completed questionnaires regarding their participating children's listening and communication skills. Children completed a battery of audiometric, auditory processing (AP), speech-in-noise, cognitive (IQ, memory, language, and literacy), and attention (auditory and visual) tests. AP measures separated the sensory and nonsensory contributions to spectral and temporal perception. AP improved with age. Poor-for-age AP was significantly related to poor cognitive, communication, and speech-in-noise performance, but correlations between auditory perception and cognitive scores were generally low (r = 0.1-0.3). Multivariate regression analysis showed that response variability in the AP tests, reflecting attention, and cognitive scores were the best predictors of listening, communication, and speech-in-noise skills. Presenting symptoms of APD were largely unrelated to auditory sensory processing. Response variability and cognitive performance were the best predictors of poor communication and listening. We suggest that APD is primarily an attention problem and that clinical diagnosis and management, as well as further research, should be based on that premise.

  1. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  2. Increased BOLD Signals Elicited by High Gamma Auditory Stimulation of the Left Auditory Cortex in Acute State Schizophrenia

    Directory of Open Access Journals (Sweden)

    Hironori Kuga, M.D.

    2016-10-01

    We acquired BOLD responses elicited by click trains of 20, 30, 40 and 80-Hz frequencies from 15 patients with acute episode schizophrenia (AESZ), 14 symptom-severity-matched patients with non-acute episode schizophrenia (NASZ), and 24 healthy controls (HC), assessed via a standard general linear-model-based analysis. The AESZ group showed significantly increased ASSR-BOLD signals to 80-Hz stimuli in the left auditory cortex compared with the HC and NASZ groups. In addition, enhanced 80-Hz ASSR-BOLD signals were associated with more severe auditory hallucination experiences in AESZ participants. The present results indicate that neural overactivation occurs during 80-Hz auditory stimulation of the left auditory cortex in individuals with acute state schizophrenia. Given the possible association between abnormal gamma activity and increased glutamate levels, our data may reflect glutamate toxicity in the auditory cortex in the acute state of schizophrenia, which might lead to progressive changes in the left transverse temporal gyrus.
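
    For illustration, a short Python sketch that builds a periodic click train of the kind used to drive auditory steady-state responses (here at 80 Hz); the click width, duration, and sample rate are assumptions chosen for this sketch.

        import numpy as np

        fs = 44100                      # sample rate (Hz)
        rate = 80.0                     # click repetition rate (Hz)
        dur = 1.0                       # stimulus duration (s)
        click_len = int(fs * 0.0005)    # 0.5-ms rectangular click

        train = np.zeros(int(fs * dur))
        onsets = np.arange(0, dur, 1.0 / rate)
        for onset in onsets:
            i = int(onset * fs)
            train[i:i + click_len] = 1.0
        print(f"{len(onsets)} clicks at {rate:.0f} Hz")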

  3. Distraction by deviance: comparing the effects of auditory and visual deviant stimuli on auditory and visual target processing.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-01-01

    We report the results of oddball experiments in which an irrelevant stimulus (standard, deviant) was presented before a target stimulus and the modality of these stimuli was manipulated orthogonally (visual/auditory). Experiment 1 showed that auditory deviants yielded distraction irrespective of the target's modality while visual deviants did not impact on performance. When participants were forced to attend the distractors in order to detect a rare target ("target-distractor"), auditory deviants yielded distraction irrespective of the target's modality and visual deviants yielded a small distraction effect when targets were auditory (Experiments 2 & 3). Visual deviants only produced distraction for visual targets when deviant stimuli were not visually distinct from the other distractors (Experiment 4). Our results indicate that while auditory deviants yield distraction irrespective of the targets' modality, visual deviants only do so when attended and under selective conditions, at least when irrelevant and target stimuli are temporally and perceptually decoupled.

  4. Structural Discrimination

    DEFF Research Database (Denmark)

    Thorsen, Mira Skadegård

    In this article, I discuss structural discrimination, an underrepresented area of study in Danish discrimination and intercultural research. It is defined here as discursive and constitutive, and presented as a central element of my analytical approach. This notion is employed in the with which t...

  5. fMRI of the auditory system: understanding the neural basis of auditory gestalt.

    Science.gov (United States)

    Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich

    2003-12-01

    Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system has progressed at a considerably slower pace compared to other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between fMRI technique and audition. A well known difficulty arises from the high intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli both from a perceptual point of view and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by the auditory research. The novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and thus may permit filling the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated in the auditory gestalt are beginning to be understood.

  6. Presentation of dynamically overlapping auditory messages in user interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Papp, III, Albert Louis [Univ. of California, Davis, CA (United States)

    1997-09-01

    This dissertation describes a methodology and example implementation for the dynamic regulation of temporally overlapping auditory messages in computer-user interfaces. The regulation mechanism exists to schedule numerous overlapping auditory messages in such a way that each individual message remains perceptually distinct from all others. The method is based on the research conducted in the area of auditory scene analysis. While numerous applications have been engineered to present the user with temporally overlapped auditory output, they have generally been designed without any structured method of controlling the perceptual aspects of the sound. The method of scheduling temporally overlapping sounds has been extended to function in an environment where numerous applications can present sound independently of each other. The Centralized Audio Presentation System is a global regulation mechanism that controls all audio output requests made from all currently running applications. The notion of multimodal objects is explored in this system as well. Each audio request that represents a particular message can include numerous auditory representations, such as musical motives and voice. The Presentation System scheduling algorithm selects the best representation according to the current global auditory system state, and presents it to the user within the request constraints of priority and maximum acceptable latency. The perceptual conflicts between temporally overlapping audio messages are examined in depth through the Computational Auditory Scene Synthesizer. At the heart of this system is a heuristic-based auditory scene synthesis scheduling method. Different schedules of overlapped sounds are evaluated and assigned penalty scores. High scores represent presentations that include perceptual conflicts between over-lapping sounds. Low scores indicate fewer and less serious conflicts. A user study was conducted to validate that the perceptual difficulties predicted by
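
    In the spirit of the penalty-scored scheduling described above, the following Python toy sketch assigns onset times to overlapping messages by greedily minimizing a penalty; the penalty rule (overlap duration, weighted more heavily when two messages share a pitch register) is an assumption invented for illustration, not the dissertation's actual heuristic.

        from dataclasses import dataclass

        @dataclass
        class Message:
            name: str
            duration: float      # seconds
            pitch_band: int      # coarse pitch register (0=low, 1=mid, 2=high)

        def penalty(schedule):
            """Sum pairwise conflicts: overlap time, doubled for same pitch band."""
            total = 0.0
            for i, (m1, s1) in enumerate(schedule):
                for m2, s2 in schedule[i + 1:]:
                    overlap = max(0.0, min(s1 + m1.duration, s2 + m2.duration)
                                  - max(s1, s2))
                    weight = 2.0 if m1.pitch_band == m2.pitch_band else 1.0
                    total += weight * overlap
            return total

        def schedule_greedy(messages, candidate_onsets):
            """Give each message the onset that currently adds the least penalty."""
            placed = []
            for msg in messages:
                best = min(candidate_onsets,
                           key=lambda s: penalty(placed + [(msg, s)]))
                placed.append((msg, best))
            return placed

        msgs = [Message("alert", 0.8, 2), Message("status", 1.2, 2),
                Message("motif", 1.0, 0)]
        print(schedule_greedy(msgs, candidate_onsets=[0.0, 0.5, 1.0, 1.5]))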

  7. Vocal accuracy and neural plasticity following micromelody-discrimination training.

    Directory of Open Access Journals (Sweden)

    Jean Mary Zarate

    2010-06-01

    Full Text Available Recent behavioral studies report correlational evidence to suggest that non-musicians with good pitch discrimination sing more accurately than those with poorer auditory skills. However, other studies have reported a dissociation between perceptual and vocal production skills. In order to elucidate the relationship between auditory discrimination skills and vocal accuracy, we administered an auditory-discrimination training paradigm to a group of non-musicians to determine whether training-enhanced auditory discrimination would specifically result in improved vocal accuracy. We utilized micromelodies (i.e., melodies with seven different interval scales, each smaller than a semitone) as the main stimuli for auditory discrimination training and testing, and we used single-note and melodic singing tasks to assess vocal accuracy in two groups of non-musicians (experimental and control). To determine if any training-induced improvements in vocal accuracy would be accompanied by related modulations in cortical activity during singing, the experimental group of non-musicians also performed the singing tasks while undergoing functional magnetic resonance imaging (fMRI). Following training, the experimental group exhibited significant enhancements in micromelody discrimination compared to controls. However, we did not observe a correlated improvement in vocal accuracy during single-note or melodic singing, nor did we detect any training-induced changes in activity within brain regions associated with singing. Given the observations from our auditory training regimen, we therefore conclude that perceptual discrimination training alone is not sufficient to improve vocal accuracy in non-musicians, supporting the suggested dissociation between auditory perception and vocal production.
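
    To make the idea of an "interval scale smaller than a semitone" concrete, here is a small Python sketch that maps a melodic contour onto step sizes given in cents; the base frequency, contour, and step sizes are assumptions for illustration, not the study's stimuli.

        import numpy as np

        def micromelody(base_hz, contour_steps, step_cents):
            """Note frequencies where each scale step spans step_cents (< 100)."""
            cents = np.cumsum([0] + list(contour_steps)) * step_cents
            return base_hz * 2.0 ** (cents / 1200.0)

        contour = [+1, +1, -2, +1, -1]            # relative scale steps
        for step_cents in (12.5, 25, 50):         # interval scales below a semitone
            freqs = micromelody(440.0, contour, step_cents)
            print(step_cents, np.round(freqs, 2))

    The same contour sounds progressively "flatter" as the step size shrinks, which is what makes discriminating the smaller interval scales difficult.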

  8. Autosomal dominant partial epilepsy with auditory features: Defining the phenotype

    Science.gov (United States)

    Winawer, Melodie R.; Hauser, W. Allen; Pedley, Timothy A.

    2009-01-01

    The authors previously reported linkage to chromosome 10q22-24 for autosomal dominant partial epilepsy with auditory features. This study describes seizure semiology in the original linkage family in further detail. Auditory hallucinations were most common, but other sensory symptoms (visual, olfactory, vertiginous, and cephalic) were also reported. Autonomic, psychic, and motor symptoms were less common. The clinical semiology points to a lateral temporal seizure origin. Auditory hallucinations, the most striking clinical feature, are useful for identifying new families with this syndrome. PMID:10851389

  9. Reboxetine Improves Auditory Attention and Increases Norepinephrine Levels in the Auditory Cortex of Chronically Stressed Rats.

    Science.gov (United States)

    Pérez-Valenzuela, Catherine; Gárate-Pérez, Macarena F; Sotomayor-Zárate, Ramón; Delano, Paul H; Dagnino-Subiabre, Alexies

    2016-01-01

    Chronic stress impairs auditory attention in rats and monoamines regulate neurotransmission in the primary auditory cortex (A1), a brain area that modulates auditory attention. In this context, we hypothesized that norepinephrine (NE) levels in A1 correlate with the auditory attention performance of chronically stressed rats. The first objective of this research was to evaluate whether chronic stress affects monoamine levels in A1. Male Sprague-Dawley rats were subjected to chronic stress (restraint stress) and monoamine levels were measured by high-performance liquid chromatography (HPLC) with electrochemical detection. Chronically stressed rats had lower levels of NE in A1 than did controls, while chronic stress did not affect serotonin (5-HT) and dopamine (DA) levels. The second aim was to determine the effects of reboxetine (a selective inhibitor of NE reuptake) on auditory attention and NE levels in A1. Rats were trained to discriminate between two tones of different frequencies in a two-alternative choice task (2-ACT), a behavioral paradigm to study auditory attention in rats. Trained animals that reached a performance of ≥80% correct trials in the 2-ACT were randomly assigned to control and stress experimental groups. To analyze the effects of chronic stress on the auditory task, trained rats of both groups were subjected to 50 2-ACT trials 1 day before and 1 day after the chronic stress period. A difference score (DS) was determined by subtracting the number of correct trials after the chronic stress protocol from those before. An unexpected result was that vehicle-treated control rats and vehicle-treated chronically stressed rats had similar performances in the attentional task, suggesting that repeated injections with vehicle were stressful for control animals and deteriorated their auditory attention. In this regard, both auditory attention and NE levels in A1 were higher in chronically stressed rats treated with reboxetine than in vehicle

  10. The perception of prosody and associated auditory cues in early-implanted children: the role of auditory working memory and musical activities.

    Science.gov (United States)

    Torppa, Ritva; Faulkner, Andrew; Huotilainen, Minna; Järvikivi, Juhani; Lipsanen, Jari; Laasonen, Marja; Vainio, Martti

    2014-03-01

    To study prosodic perception in early-implanted children in relation to auditory discrimination, auditory working memory, and exposure to music. Word and sentence stress perception, discrimination of fundamental frequency (F0), intensity and duration, and forward digit span were measured twice over approximately 16 months. Musical activities were assessed by questionnaire. Twenty-one early-implanted and age-matched normal-hearing (NH) children (4-13 years). Children with cochlear implants (CIs) exposed to music performed better than others in stress perception and F0 discrimination. Only this subgroup of implanted children improved with age in word stress perception, intensity discrimination, and improved over time in digit span. Prosodic perception, F0 discrimination and forward digit span in implanted children exposed to music was equivalent to the NH group, but other implanted children performed more poorly. For children with CIs, word stress perception was linked to digit span and intensity discrimination: sentence stress perception was additionally linked to F0 discrimination. Prosodic perception in children with CIs is linked to auditory working memory and aspects of auditory discrimination. Engagement in music was linked to better performance across a range of measures, suggesting that music is a valuable tool in the rehabilitation of implanted children.

  11. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  12. Spatial discrimination and visual discrimination

    DEFF Research Database (Denmark)

    Haagensen, Annika M. J.; Grand, Nanna; Klastrup, Signe

    2013-01-01

    Two methods investigating learning and memory in juvenile Gottingen minipigs were evaluated for potential use in preclinical toxicity testing. Twelve minipigs were tested using a spatial hole-board discrimination test including a learning phase and two memory phases. Five minipigs were tested in a visual discrimination test. The juvenile minipigs were able to learn the spatial hole-board discrimination test and showed improved working and reference memory during the learning phase. Performance in the memory phases was affected by the retention intervals, but the minipigs were able to remember the concept of the test in both memory phases. Working memory and reference memory were significantly improved in the last trials of the memory phases. In the visual discrimination test, the minipigs learned to discriminate between the three figures presented to them within 9-14 sessions. For the memory test...

  13. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
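
    A toy Python sketch of the delay-and-coincide principle described above: a copy of the response delayed by the preferred pulse period coincides with the direct response to the next pulse only when the song has the right rate. The delay, tolerance, and pulse trains below are illustrative values, not measurements from the circuit.

        def coincidence_score(pulse_times, delay, tolerance=0.004):
            """Count pulses whose predecessor, delayed by `delay`, coincides."""
            hits = 0
            for prev, cur in zip(pulse_times, pulse_times[1:]):
                if abs((prev + delay) - cur) <= tolerance:
                    hits += 1
            return hits

        def pulse_train(period, n=10, start=0.0):
            return [start + i * period for i in range(n)]

        preferred_delay = 0.034                   # assumed ~34-ms pulse period (s)
        for period in (0.020, 0.034, 0.060):      # fast, preferred, slow songs
            score = coincidence_score(pulse_train(period), preferred_delay)
            print(f"pulse period {period*1000:.0f} ms -> coincidences: {score}")

    Running the sketch yields coincidences only at the preferred period, mirroring the selectivity for the conspecific pulse pattern described above.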

  14. Automatic detection of frequency changes depends on auditory stimulus intensity.

    Science.gov (United States)

    Salo, S; Lang, A H; Aaltonen, O; Lertola, K; Kärki, T

    1999-06-01

    A cortical cognitive auditory evoked potential, mismatch negativity (MMN), reflects automatic discrimination and echoic memory functions of the auditory system. For this study, we examined whether this potential is dependent on the stimulus intensity. The MMN potentials were recorded from 10 subjects with normal hearing using a sine tone of 1000 Hz as the standard stimulus and a sine tone of 1141 Hz as the deviant stimulus, with probabilities of 90% and 10%, respectively. The intensities were 40, 50, 60, 70, and 80 dB HL for both standard and deviant stimuli in separate blocks. Stimulus intensity had a statistically significant effect on the mean amplitude, rise time parameter, and onset latency of the MMN. Automatic auditory discrimination seems to be dependent on the sound pressure level of the stimuli.
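
    A short Python sketch of assembling an oddball sequence like the one described above (1000-Hz standards at 90%, 1141-Hz deviants at 10%); the sequence length and the constraint that deviants never occur back-to-back are assumptions added for illustration.

        import random

        def oddball_sequence(n_trials=500, p_deviant=0.10, seed=0):
            rng = random.Random(seed)
            seq, prev_deviant = [], False
            for _ in range(n_trials):
                deviant = (not prev_deviant) and rng.random() < p_deviant
                seq.append(1141 if deviant else 1000)   # tone frequency in Hz
                prev_deviant = deviant
            return seq

        seq = oddball_sequence()
        print("deviant proportion: %.3f" % (seq.count(1141) / len(seq)))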

  15. Stimulus Complexity and Categorical Effects in Human Auditory Cortex: An Activation Likelihood Estimation Meta-Analysis

    Science.gov (United States)

    Samson, Fabienne; Zeffiro, Thomas A.; Toussaint, Alain; Belin, Pascal

    2011-01-01

    Investigations of the functional organization of human auditory cortex typically examine responses to different sound categories. An alternative approach is to characterize sounds with respect to their amount of variation in the time and frequency domains (i.e., spectral and temporal complexity). Although the vast majority of published studies examine contrasts between discrete sound categories, an alternative complexity-based taxonomy can be evaluated through meta-analysis. In a quantitative meta-analysis of 58 auditory neuroimaging studies, we examined the evidence supporting current models of functional specialization for auditory processing using grouping criteria based on either categories or spectro-temporal complexity. Consistent with current models, analyses based on typical sound categories revealed hierarchical auditory organization and left-lateralized responses to speech sounds, with high speech sensitivity in the left anterior superior temporal cortex. Classification of contrasts based on spectro-temporal complexity, on the other hand, revealed a striking within-hemisphere dissociation in which caudo-lateral temporal regions in auditory cortex showed greater sensitivity to spectral changes, while anterior superior temporal cortical areas were more sensitive to temporal variation, consistent with recent findings in animal models. The meta-analysis thus suggests that spectro-temporal acoustic complexity represents a useful alternative taxonomy to investigate the functional organization of human auditory cortex. PMID:21833294

  16. Transient sex differences during adolescence on auditory perceptual tasks.

    Science.gov (United States)

    Huyck, Julia Jones; Wright, Beverly A

    2017-06-05

    Many perceptual abilities differ between the sexes. Because these sex differences have been documented almost exclusively in adults, they have been attributed to sex-specific neural circuitry that emerges during development and is maintained in the mature perceptual system. To investigate whether behavioral sex differences in perception can also have other origins, we compared performance between males and females ranging in age from 8 to 30 years on auditory temporal-interval discrimination and tone-in-noise detection tasks on which there are no sex differences in adults. If sex differences in perception arise only from the establishment and subsequent maintenance of sex-specific neural circuitry, there should be no sex differences during development on these tasks. In contrast, sex differences emerged in adolescence but resolved by adulthood on two of the six conditions, with signs of a similar pattern on a third condition. In each case, males reached mature performance earlier than females, resulting in a sex difference in the interim. These results suggest that sex differences in perception may arise from differences in the maturational timing of common circuitry used by both sexes. They also imply that sex differences in perceptual abilities may be more prevalent than previously thought based on adult data alone. © 2017 John Wiley & Sons Ltd.

  17. Genetic Discrimination

    Science.gov (United States)

  18. SEISMIC DISCRIMINATION

    Science.gov (United States)

    a potential new discriminant, and to study depth phases. Surface- and body-wave magnitude data have been obtained and used to study regionalization...and signal equalization studies initiated. Upgrading of software and hardware facilities has continued. (Author)

  19. Electrophysiological response during auditory gap detection: Biomarker for sensory and communication alterations in autism spectrum disorder?

    OpenAIRE

    Foss-Feig, JH; Stavropoulos, KKM; McPartland, JC; Wallace, MT; Stone, WL; Key, AP

    2018-01-01

    Sensory symptoms, including auditory processing deficits, are common in autism spectrum disorder (ASD). Processing of temporal aspects of auditory input is understudied; yet, deficits in this domain could contribute to language-related impairments. In children with ASD and well-matched controls, this study examined electrophysiological response to silent gaps in auditory stimuli. Results revealed attenuated amplitude of the P2 event-related potential (ERP) component in ASD. The P2 amplitude r...
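
    For readers unfamiliar with the paradigm, the sketch below generates a noise burst containing a brief silent gap of the kind used in auditory gap-detection studies. The durations and gap parameters are assumptions for illustration, not the stimulus values used in this study.

      # Illustrative gap-in-noise stimulus: white noise with a brief silent gap,
      # as used in gap-detection paradigms. All durations are assumed values.
      import numpy as np

      FS = 44100                      # sampling rate in Hz (assumed)
      TOTAL_DUR = 0.5                 # total noise duration in seconds (assumed)
      GAP_DUR = 0.02                  # 20 ms silent gap (assumed)
      GAP_ONSET = 0.25                # gap begins at the midpoint (assumed)

      rng = np.random.default_rng(1)
      noise = rng.standard_normal(int(TOTAL_DUR * FS))
      noise /= np.max(np.abs(noise))  # normalise to +/-1

      gap_start = int(GAP_ONSET * FS)
      gap_end = gap_start + int(GAP_DUR * FS)
      noise[gap_start:gap_end] = 0.0  # insert the silent gap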

  20. Auditory Processing Training in Learning Disability - doi:10.5020/18061230.2006.p188

    Directory of Open Access Journals (Sweden)

    Nívea Franklin Chaves Martins

    2012-01-01

    Full Text Available The aim of this case report was to promote reflection on the importance of speech-language therapy in stimulating a person with a learning disability associated with language and auditory processing disorders. The data analysis compared the auditory ability deficits identified in a first auditory processing test, performed on April 30, 2002, with a second auditory processing test performed on May 13, 2003, after one year of therapy directed at acoustic stimulation of the impaired auditory abilities, in accordance with the two speech-language therapy reports written during that period. Speech-language therapy favored progress in the processes of decoding, organization, and prosody, as well as in auditory closure, the ability to focus on sounds in background noise, and the temporal ordering problems related to the learning disorder. This approach yielded gains in the subject's auditory abilities and language skills, with improvement in attention, concentration, and learning.

  1. Suprathreshold auditory processing deficits in noise: Effects of hearing loss and age.

    Science.gov (United States)

    Kortlang, Steffen; Mauermann, Manfred; Ewert, Stephan D

    2016-01-01

    People with sensorineural hearing loss generally suffer from a reduced ability to understand speech in complex acoustic listening situations, particularly when background noise is present. In addition to the loss of audibility, a mixture of suprathreshold processing deficits is possibly involved, such as altered basilar membrane compression and related changes, as well as reduced temporal coding ability. A series of six monaural psychoacoustic experiments at 0.5, 2, and 6 kHz was conducted with 18 subjects, divided equally into groups of young normal-hearing, older normal-hearing, and older hearing-impaired listeners, with the aim of disentangling the effects of age and hearing loss on psychoacoustic performance in noise. Random frequency modulation detection thresholds (RFMDTs) with a low-rate modulator in wide-band noise, and discrimination of a phase-jittered Schroeder-phase harmonic tone complex from a random-phase complex, are suggested as measures of individual temporal processing ability. The outcomes were compared with detection thresholds for pure tones and narrow-band noise, loudness growth functions, auditory filter bandwidths, and tone-in-noise detection thresholds. At 500 Hz, the results suggest a contribution of temporal fine structure (TFS) to pure-tone detection thresholds. Significant correlations with audiometric thresholds and filter bandwidths indicated an impact of frequency selectivity on TFS usability in wide-band noise. When controlling for the effect of threshold sensitivity, listener age correlated significantly with tone-in-noise detection and RFMDTs in noise at 500 Hz, showing that older listeners were particularly affected by background noise at low carrier frequencies. Copyright © 2015 Elsevier B.V. All rights reserved.
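
    Thresholds such as the tone-in-noise detection and RFMDT values above are typically estimated with an adaptive tracking procedure. The sketch below implements a generic 2-down/1-up staircase run against a simulated listener; it illustrates this class of method, not the specific procedure used in the study.

      # Generic 2-down/1-up adaptive staircase for a tone-in-noise detection threshold,
      # run against a simulated listener. Illustration of the method class only;
      # not the procedure actually used in the cited study.
      import numpy as np

      rng = np.random.default_rng(2)

      def simulated_listener(level_db, true_threshold_db=60.0, slope=1.0):
          """Probability of a correct response rises with tone level (logistic, 2AFC floor)."""
          p = 1.0 / (1.0 + np.exp(-slope * (level_db - true_threshold_db)))
          p = 0.5 + 0.5 * p                      # chance performance is 50% correct
          return rng.random() < p

      level = 75.0          # starting tone level in dB (arbitrary)
      step = 4.0            # initial step size in dB (arbitrary)
      correct_in_a_row = 0
      reversals = []
      direction = -1        # -1 = track moving down, +1 = moving up

      while len(reversals) < 8:
          if simulated_listener(level):
              correct_in_a_row += 1
              if correct_in_a_row == 2:          # 2-down rule (targets ~70.7% correct)
                  correct_in_a_row = 0
                  if direction == +1:
                      reversals.append(level)
                      step = max(step / 2, 1.0)  # shrink step after a reversal
                  direction = -1
                  level -= step
          else:                                  # 1-up rule
              correct_in_a_row = 0
              if direction == -1:
                  reversals.append(level)
                  step = max(step / 2, 1.0)
              direction = +1
              level += step

      print("Estimated threshold (dB):", np.mean(reversals[-4:]))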

  2. Dopaminergic medication alters auditory distractor processing in Parkinson's disease.

    Science.gov (United States)

    Georgiev, Dejan; Jahanshahi, Marjan; Dreo, Jurij; Čuš, Anja; Pirtošek, Zvezdan; Repovš, Grega

    2015-03-01

    Parkinson's disease (PD) patients show signs of cognitive impairment, such as executive dysfunction, working memory problems, and attentional disturbances, even in the early stages of the disease. Although the motor symptoms of the disease are often successfully addressed by dopaminergic medication, it remains unclear how dopaminergic therapy affects cognitive function. The main objective of this study was to assess the effect of dopaminergic medication on visual and auditory attentional processing. Fourteen PD patients and 13 matched healthy controls performed a three-stimulus auditory and visual oddball task while their EEG was recorded. The patients performed the task twice, once on- and once off-medication. While the results showed no significant differences between PD patients and controls, they did reveal a significant increase in P3 amplitude on- versus off-medication that was specific to the processing of auditory distractors and did not extend to other stimuli. These results indicate a significant effect of dopaminergic therapy on the processing of distracting auditory stimuli. Given the lack of between-group differences, the effect could reflect 1) improved recruitment of attentional resources to auditory distractors; 2) a reduced ability to cognitively inhibit auditory distractors; 3) an increased response to distractor stimuli resulting in impaired cognitive performance; or 4) a hindered ability to discriminate between auditory distractors and targets. Further studies are needed to differentiate between these possibilities. Copyright © 2015 Elsevier B.V. All rights reserved.
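
    The P3 measure described above is obtained by epoching the EEG around stimulus onsets, averaging, and quantifying the amplitude in a post-stimulus window. The sketch below performs that generic computation on synthetic single-channel data; the measurement window, sampling rate, and event timing are illustrative assumptions rather than the study's analysis parameters.

      # Generic ERP computation: epoch synthetic single-channel EEG around distractor
      # onsets, baseline-correct, average, and measure mean P3 amplitude in a
      # 300-450 ms window. Window, sampling rate, and data are illustrative assumptions.
      import numpy as np

      FS = 250                                   # EEG sampling rate in Hz (assumed)
      rng = np.random.default_rng(3)
      eeg = rng.standard_normal(FS * 600) * 10.0 # 10 minutes of synthetic EEG (microvolts)
      distractor_onsets = np.arange(5 * FS, eeg.size - 2 * FS, 3 * FS)  # synthetic event samples

      PRE, POST = int(0.2 * FS), int(0.8 * FS)   # -200 ms to +800 ms epoch
      epochs = np.stack([eeg[o - PRE:o + POST] for o in distractor_onsets])

      # Baseline-correct each epoch to its pre-stimulus mean, then average across trials.
      epochs -= epochs[:, :PRE].mean(axis=1, keepdims=True)
      erp = epochs.mean(axis=0)

      # Mean amplitude in a 300-450 ms post-stimulus window as a simple P3 measure.
      w0, w1 = PRE + int(0.30 * FS), PRE + int(0.45 * FS)
      p3_amplitude = erp[w0:w1].mean()
      print("P3 mean amplitude (microvolts):", round(float(p3_amplitude), 2))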

  3. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from the regularities of the current environment by means of internal generative models. What has not yet been addressed, however, is how the auditory system handles situations that give rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (the Mismatch Negativity [MMN] and the Incongruency Response [IR]). The respective error signals were observed even when the overall probability and the visual cue predicted different sounds; that is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, when both sources predicted the same sound and that prediction was violated, we observed an additive error signal (in scalp potential and primary current density) equaling the sum of the two specific error signals. Thus, the auditory system maintains and tolerates redundant and contradictory predictions that are represented functionally independently. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.
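
    The additivity result described above amounts to simple difference-wave arithmetic: each error signal is a difference from a reference response, and the response to a doubly violated prediction is compared with the sum of the two individual error signals. The sketch below spells out that comparison on synthetic ERPs whose shapes and latencies are invented; with recorded data, the final comparison is the actual statistical test.

      # Illustration of the additivity test described above: error signals are
      # difference waves, and the response to a doubly unexpected sound is compared
      # with the sum of the two individual error signals. All ERPs here are synthetic.
      import numpy as np

      t = np.arange(-0.1, 0.5, 0.004)                     # time in seconds (assumed grid)

      def component(amplitude, latency, width=0.04):
          """Synthetic negative-going ERP component (invented shape)."""
          return -amplitude * np.exp(-0.5 * ((t - latency) / width) ** 2)

      standard_congruent = np.zeros_like(t)               # reference response
      rare_congruent = component(2.0, 0.15)               # probability violation only -> MMN
      frequent_incongruent = component(1.5, 0.20)         # cue violation only -> IR
      rare_incongruent = rare_congruent + frequent_incongruent  # both violated (built additive here)

      mmn = rare_congruent - standard_congruent
      ir = frequent_incongruent - standard_congruent
      combined = rare_incongruent - standard_congruent

      # Because these synthetic data were constructed to be additive, the deviation is
      # zero; with recorded ERPs, this comparison is the empirical additivity test.
      print("Max deviation from additivity:", float(np.max(np.abs(combined - (mmn + ir)))))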

  4. Reduced auditory efferent activity in childhood selective mutism.

    Science.gov (United States)

    Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava

    2004-06-01

    Selective mutism is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with selective mutism and 16 normally developing control children matched for age and gender. All children underwent pure-tone audiometry, speech reception threshold testing, speech discrimination testing, middle-ear acoustic reflex threshold and decay measurements, transient evoked otoacoustic emission recording, suppression of transient evoked otoacoustic emissions, and auditory brainstem response testing. Compared with control children, selectively mute children displayed s