Lavasani, Azam Navaei; Mohammadkhani, Ghassem; Motamedi, Mahmoud; Karimi, Leyla Jalilvand; Jalaei, Shohreh; Shojaei, Fereshteh Sadat; Danesh, Ali; Azimi, Hadi
Auditory temporal processing is a key component of speech processing ability. Patients with temporal lobe epilepsy (TLE), despite normal hearing sensitivity, may present speech recognition disorders. The present study was carried out to evaluate auditory temporal processing in patients with unilateral TLE. It included 25 patients with epilepsy: 11 with right temporal lobe epilepsy (RTLE) and 14 with left temporal lobe epilepsy (LTLE), with a mean age of 31.1 years, and 18 control participants with a mean age of 29.4 years. The experimental and control groups were evaluated with the gaps-in-noise (GIN) and duration pattern sequence (DPS) tests. One-way ANOVA was run to analyze the data. The mean GIN threshold in the control group was better than in participants with LTLE and RTLE. The percentage of correct responses on the DPS test was also better in the control group and in participants with RTLE than in participants with LTLE. Patients with TLE have difficulties in temporal processing. These difficulties are more pronounced in patients with LTLE, likely because the left temporal lobe is specialized for the processing of temporal information. Copyright © 2016 Elsevier Inc. All rights reserved.
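The group comparison described above, a one-way ANOVA across control, LTLE, and RTLE gap thresholds, can be sketched in a few lines of Python. The threshold values below are hypothetical, purely for illustration; they are not data from the study:

```python
def one_way_anova_F(*groups):
    """Between-/within-groups F statistic for a one-way ANOVA."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # Between-group sum of squares (df = k - 1)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = N - k)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical GIN thresholds in ms (illustrative values only)
controls = [4.8, 5.1, 5.3, 4.9]
ltle = [7.2, 7.8, 6.9, 7.5]
rtle = [6.1, 6.4, 5.9, 6.6]
F = one_way_anova_F(controls, ltle, rtle)
```

In practice the p-value would be read from the F distribution with (k−1, N−k) degrees of freedom, e.g. via `scipy.stats.f_oneway`.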
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…
Introduction: Auditory temporal resolution and auditory temporal ordering are two major components of auditory temporal processing that contribute to speech perception and language development. They can be evaluated with the gaps-in-noise (GIN) and pitch-pattern-sequence (PPS) tests, respectively. In this survey, the effect of bilingualism as a potential confounding factor on auditory temporal processing abilities was investigated in early Azari-Persian bilinguals. Materials and Methods: In this cross-sectional, non-interventional study, the GIN and PPS tests were performed on 24 early Azari-Persian bilingual subjects (12 men and 12 women) and 24 Persian monolingual subjects (12 men and 12 women) aged 18-30 years, with a mean age of 24.57 years in the bilingual and 24.68 years in the monolingual group. Data were analyzed with t-tests using SPSS software version 16. Results: There were no statistically significant differences between early Azari-Persian bilinguals and Persian monolinguals in mean gap threshold or mean percentage of correct responses on the GIN test, or in the average percentage of correct responses on the PPS test (P≥0.05). Conclusion: According to the findings of this study, bilingualism did not have a notable effect on auditory temporal processing abilities.
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined changes in event-related potentials in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity change due to stimulus timing was seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. Copyright © 2014 John Wiley & Sons, Ltd.
Kwakye, Leslie D.; Foss-Feig, Jennifer H.; Cascio, Carissa J.; Stone, Wendy L.; Wallace, Mark T.
Autism spectrum disorders (ASD) are characterized by deficits in social reciprocity and communication, as well as by repetitive behaviors and restricted interests. Unusual responses to sensory input and disruptions in the processing of both unisensory and multisensory stimuli also have been reported frequently. However, the specific aspects of sensory processing that are disrupted in ASD have yet to be fully elucidated. Recent published work has shown that children with ASD can integrate low-level audiovisual stimuli, but do so over an extended range of time when compared with typically developing (TD) children. However, the possible contributions of altered unisensory temporal processes to the demonstrated changes in multisensory function are yet unknown. In the current study, unisensory temporal acuity was measured by determining individual thresholds on visual and auditory temporal order judgment (TOJ) tasks, and multisensory temporal function was assessed through a cross-modal version of the TOJ task. Whereas no differences in thresholds for the visual TOJ task were seen between children with ASD and TD, thresholds were higher in ASD on the auditory TOJ task, providing preliminary evidence for impairment in auditory temporal processing. On the multisensory TOJ task, children with ASD showed performance improvements over a wider range of temporal intervals than TD children, reinforcing prior work showing an extended temporal window of multisensory integration in ASD. These findings contribute to a better understanding of basic sensory processing differences, which may be critical for understanding more complex social and cognitive deficits in ASD, and ultimately may contribute to more effective diagnostic and interventional strategies. PMID:21258617
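A TOJ threshold like those measured above is typically summarized as the stimulus onset asynchrony (SOA) at which accuracy crosses a criterion, often 75% correct. A minimal sketch, using hypothetical accuracy data and simple linear interpolation in place of a full psychometric-function fit:

```python
def toj_threshold(soas_ms, accuracy, criterion=0.75):
    """SOA (ms) at which accuracy first crosses `criterion`, by linear
    interpolation between tested SOAs (assumed sorted ascending)."""
    for i in range(len(soas_ms) - 1):
        a0, a1 = accuracy[i], accuracy[i + 1]
        if a0 < criterion <= a1:
            s0, s1 = soas_ms[i], soas_ms[i + 1]
            return s0 + (criterion - a0) * (s1 - s0) / (a1 - a0)
    return None  # criterion not crossed within the tested range

# Hypothetical proportion-correct data at each SOA (illustrative only)
soas = [10, 30, 60, 120, 240]
acc = [0.52, 0.60, 0.70, 0.85, 0.95]
thr = toj_threshold(soas, acc)  # crosses 75% between 60 and 120 ms
```

A higher threshold, as reported for the auditory TOJ task in ASD, means a larger SOA is needed to reach the same accuracy.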
Chen, Yu-Han; Edgar, J Christopher; Huang, Mingxiong; Hunter, Michael A; Epstein, Emerson; Howell, Breannan; Lu, Brett Y; Bustillo, Juan; Miller, Gregory A; Cañive, José M
Although magnetoencephalography (MEG) studies show superior temporal gyrus (STG) auditory processing abnormalities in schizophrenia at 50 and 100 ms, EEG and corticography studies suggest involvement of additional brain areas (e.g., frontal areas) during this interval. Study goals were to identify 30 to 130 ms auditory encoding processes in schizophrenia (SZ) and healthy controls (HC) and group differences throughout the cortex. The standard paired-click task was administered to 19 SZ and 21 HC subjects during MEG recording. Vector-based Spatial-temporal Analysis using L1-minimum-norm (VESTAL) provided 4D maps of activity from 30 to 130 ms. Within-group t-tests compared post-stimulus 50 ms and 100 ms activity to baseline. Between-group t-tests examined 50 and 100 ms group differences. Bilateral 50 and 100 ms STG activity was observed in both groups. HC had stronger bilateral 50 and 100 ms STG activity than SZ. In addition to the STG group difference, non-STG activity was also observed in both groups. For example, whereas HC had stronger left and right inferior frontal gyrus activity than SZ, SZ had stronger right superior frontal gyrus and left supramarginal gyrus activity than HC. Less STG activity was observed in SZ than HC, indicating encoding problems in SZ. Yet auditory encoding abnormalities are not specific to STG, as group differences were observed in frontal and SMG areas. Thus, present findings indicate that individuals with SZ show abnormalities in multiple nodes of a concurrently activated auditory network.
Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B
… of auditory neurons in the laminar nucleus of the torus semicircularis (TS) of X. laevis specializes in encoding vocalization click rates. We recorded single TS units while pure tones, natural calls, and synthetic clicks were presented directly to the tympanum via a vibration-stimulation probe. Synthesized click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies … of the rate of clicks in calls. The majority of neurons (85%) were selective for click rates, and this selectivity remained unchanged over sound levels 10 to 20 dB above threshold. Selective neurons give phasic, tonic, or adapting responses to tone bursts and click trains. Some algorithms that could compute …
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Talitha C. Ford
These data demonstrate a deficit in right fronto-temporal processing of an auditory change for those with more of the shared SD phenotype, indicating that right fronto-temporal auditory processing may be associated with psychosocial functioning.
Murphy, Cristina Ferraz Borges; Zachi, Elaine Cristina; Roque, Daniela Tsubota; Ventura, Dora Selma Fix; Schochat, Eliane
To investigate the existence of correlations between children's performance on auditory temporal tests (Frequency Pattern and Gaps-in-Noise, GIN) and measures of IQ, attention, memory, and age. Fifteen typically developing children aged 7 to 12 years with normal hearing participated in the study. Auditory temporal processing tests (GIN and Frequency Pattern), a memory test (Digit Span), attention tests (auditory and visual modality), and an intelligence test (RAVEN Progressive Matrices) were applied. A significant positive correlation, considered good, was found between the Frequency Pattern test and age; no significant correlations were found between the GIN test and the variables tested. Auditory temporal skills seem to be influenced by different factors: while performance in temporal ordering appears to be influenced by maturational processes, performance in temporal resolution was not influenced by any of the aspects investigated.
Groen, Wouter B.; van Orsouw, Linda; ter Huurne, Niels; Swinkels, Sophie; van der Gaag, Rutger-Jan; Buitelaar, Jan K.; Zwiers, Marcel P.
The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with high-functioning-autism and 23 matched controls…
Background and Aim: Auditory temporal processing reflects an important aspect of auditory performance, in which a deficit can hinder a child's speech, language learning, and reading. Temporal resolution, a subcomponent of temporal processing, can be evaluated with the gaps-in-noise (GIN) detection test. Given the relation between auditory temporal processing deficits and the phonological disorder of children with dyslexia-dysgraphia, the aim of this study was to evaluate these children with the GIN test. Methods: The GIN test was performed on 28 normal and 24 dyslexic-dysgraphic children aged 11-12 years. Mean approximate threshold and percentage of correct answers were compared between the groups. Results: The mean approximate threshold and percentage of correct answers showed no significant difference between the right and left ears (p>0.05). The mean approximate threshold of children with dyslexia-dysgraphia (6.97 ms, SD=1.09) was significantly higher (p<0.001) than that of the normal group (5.05 ms, SD=0.92). Their mean percentage of correct answers (58.05%, SD=4.98) was lower than that of the normal group (69.97%, SD=7.16; p<0.001). Conclusion: Abnormal temporal resolution was found in children with dyslexia-dysgraphia on the gaps-in-noise test. Since the brainstem and auditory cortex are responsible for auditory temporal processing, structural and functional differences in these areas between normal and dyslexic-dysgraphic children probably lead to abnormal coding of auditory temporal information. Evaluation of auditory temporal processing in these children is therefore warranted.
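The GIN stimulus itself is simple to construct: broadband noise with a brief silent gap whose detectability defines the threshold. A rough sketch (the durations here are illustrative; the clinical test uses 6-s noise segments containing gaps of roughly 2-20 ms):

```python
import random

def gin_segment(dur_ms, gap_ms, gap_onset_ms, fs=22050, seed=0):
    """White-noise burst of dur_ms with a silent gap of gap_ms
    inserted at gap_onset_ms. Returns a list of samples in [-1, 1]."""
    rng = random.Random(seed)  # seeded for reproducibility
    n = int(dur_ms * fs / 1000)
    samples = [rng.uniform(-1.0, 1.0) for _ in range(n)]
    g0 = int(gap_onset_ms * fs / 1000)
    g1 = g0 + int(gap_ms * fs / 1000)
    for i in range(g0, min(g1, n)):
        samples[i] = 0.0  # the silent gap the listener must detect
    return samples

# 1-s noise segment with a 5-ms gap at its midpoint
seg = gin_segment(dur_ms=1000, gap_ms=5, gap_onset_ms=500)
```

The approximate threshold is then the shortest gap duration the listener detects reliably across trials.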
Batista, Pollyanna Barros; Lemos, Stela Maris Aguiar; Rodrigues, Luiz Oswaldo Carneiro; de Rezende, Nilton Alves
Previous findings from a case report raised the question of whether other patients with neurofibromatosis type 1 (NF1) may have abnormal central auditory function, particularly auditory temporal processing. We hypothesized that it is associated with language and learning disabilities in this population. The aim of this study was to measure central auditory temporal function in NF1 patients and correlate it with the results of language evaluation tests. A descriptive/comparative study including 25 NF1 individuals and 22 healthy controls compared their performances on audiometric evaluation and auditory behavioral testing (Sequential Verbal Memory, Sequential Non-Verbal Memory, Frequency Pattern, Duration Pattern, and Gaps in Noise Tests). To assess language performance, two tests (phonological and syntactic awareness) were also conducted. The study showed that all participants had normal peripheral acoustic hearing. Differences were found between the NF1 and control groups in the temporal auditory processing tests [Sequential Verbal Memory (P=0.009), Sequential Non-Verbal Memory (P=0.028), Frequency Patterns (P=0.001), Duration Patterns (P=0.000), and Gaps in Noise (P=0.000)] and in language tests. The results of Pearson correlation analysis demonstrated positive correlations between the phonological awareness test and Frequency Patterns humming (r=0.560, P=0.001), Frequency Patterns labeling (r=0.415, P=0.022), and Duration Pattern humming (r=0.569, P=0.001). These results suggest that the neurofibromin deficiency found in NF1 patients is associated with auditory temporal processing deficits, which may contribute to the cognitive impairment, learning disabilities, and attention deficits that are common in this disorder. The reader will be able to: (1) describe auditory temporal processing in patients with neurofibromatosis type 1; and (2) describe the impact of auditory temporal deficits on language in this population. Copyright © 2014
Ceponiene, R.; Cummings, A.; Wulfeck, B.; Ballantyne, A.; Townsend, J.
Pre-linguistic sensory deficits, especially in "temporal" processing, have been implicated in developmental language impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral…
Fitzpatrick, Douglas C; Roberts, Jason M; Kuwada, Shigeyuki; Kim, Duck O; Filipovic, Blagoje
Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating interaural phase difference, while recording from neurons in the unanesthetized rabbit. We found that the cutoff frequency for neural synchronization to the binaural beat frequency (BBF) decreased between the IC and auditory cortex, and that this decrease was associated with an increase in the group delay. These features indicate that there is an increased temporal integration window in the cortex compared to the IC, complementing that seen with monaural signals. Comparable measurements of responses to amplitude modulation showed that the monaural and binaural temporal integration windows at the cortical level were quantitatively as well as qualitatively similar, suggesting that intrinsic membrane properties and afferent synapses to the cortical neurons govern the dynamic processing. The upper limits of synchronization to the BBF and the band-pass tuning characteristics of cortical neurons are a close match to human psychophysics.
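The binaural beat stimulus used above is generated by presenting slightly different pure-tone frequencies to the two ears, so that the interaural phase difference cycles at the frequency difference (the BBF). A minimal sketch; the carrier frequency, beat frequency, and sample rate are illustrative choices, not the study's parameters:

```python
import math

def binaural_beat(f_left, f_right, dur_s, fs=8000):
    """Return (left, right) sample lists. The interaural phase
    difference cycles at the beat frequency |f_right - f_left|."""
    n = int(dur_s * fs)
    left = [math.sin(2 * math.pi * f_left * t / fs) for t in range(n)]
    right = [math.sin(2 * math.pi * f_right * t / fs) for t in range(n)]
    return left, right

# 500 Hz to one ear, 504 Hz to the other -> 4 Hz binaural beat
left, right = binaural_beat(500.0, 504.0, dur_s=1.0)
```

Neural synchronization to the BBF can then be quantified by how strongly spike counts or evoked responses modulate at that 4 Hz cycle.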
Introduction: Stuttering is a speech fluency disorder that may be associated with neuroaudiological factors linked to central auditory processing, including changes in auditory processing skills and temporal resolution. Objective: To characterize temporal processing and the long-latency auditory evoked potential in stutterers and to compare them with non-stutterers. Methods: The study included 41 right-handed subjects, aged 18-46 years, divided into two groups: stutterers (n = 20) and non-stutterers (n = 21), matched for age, education, and sex. All subjects underwent the duration pattern test, the random gap detection test, and long-latency auditory evoked potentials. Results: Individuals who stutter showed poorer performance on the Duration Pattern and Random Gap Detection tests than fluent individuals. In the long-latency auditory evoked potential, there was a difference in the latency of the N2 and P3 components; stutterers had higher latency values. Conclusion: Stutterers perform poorly in temporal processing and have higher latency values for the N2 and P3 components.
Herrmann, Björn; Maess, Burkhard; Hahne, Anja; Schröger, Erich; Friederici, Angela D
Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex. Copyright © 2011 Elsevier Inc. All rights reserved.
Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit
Dyslexia is a neuro-cognitive disorder with a strong genetic basis, characterized by a difficulty in acquiring reading skills. Several hypotheses have been suggested in an attempt to explain the origin of dyslexia, among which some have suggested that dyslexic readers might have a deficit in auditory temporal processing, while others hypothesized…
Chillemi, Gaetana; Calamuneri, Alessandro; Morgante, Francesca; Terranova, Carmen; Rizzo, Vincenzo; Girlanda, Paolo; Ghilardi, Maria Felice; Quartarone, Angelo
Investigation of spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on perception in time and space domains of visual and auditory stimuli. Previous psychophysiological studies have investigated temporal and spatial characteristics of neural processing of sensory stimuli (mainly somatosensorial and visual), whereas the definition of such processing at higher cognitive level has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction. However, other cortical and subcortical areas, including cerebellum, may also be involved. We tested 21 subjects with CD and 22 age-matched healthy controls with 4 recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided in three groups according to the head movement pattern type (lateral: Laterocollis, rotation: Torticollis) as well as the presence of tremor (Tremor). We found significant alteration of spatial processing in Laterocollis subgroup compared to controls, whereas impairment of temporal processing was observed in Torticollis subgroup compared to controls. Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern type might lead to different dysfunctions at cognitive level within dystonic population.
Temporal lobe epilepsy, one of the most common and most treatment-resistant forms of the disease, causes excessive electrical discharges in the region where the auditory pathway terminates. Correct processing of auditory stimuli requires the anatomical and functional integrity of all structures of the auditory pathway. AIM: To assess auditory processing in patients with temporal lobe epilepsy with respect to the mechanisms of discrimination of sound sequences and tone patterns, discrimination of sound-source direction, and selective attention to verbal and non-verbal sounds. METHOD: Eight individuals with confirmed temporal lobe epilepsy, with a focus restricted to that region, were evaluated with special auditory tests: the Sound Localization Test, Duration Pattern Test, Dichotic Digits Test, and Non-Verbal Dichotic Test. Their performance was compared with that of individuals without neurological alterations (case-control study). RESULTS: Subjects with temporal lobe epilepsy performed similarly to the control group in discrimination of sound-source direction, and worse in the other mechanisms assessed. CONCLUSION: Individuals with temporal lobe epilepsy showed greater impairment in auditory processing than age-matched individuals without cortical damage.
A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begins in auditory cortex at ~80 ms and subsequently reappears, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in the STS. The results suggest that loudness information is used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down the STS towards the temporal pole.
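The two loudness scales contrasted above are related by a standard rule of thumb (Stevens): above roughly 40 phon, loudness in sones doubles for every 10-phon increase, with 40 phon defined as 1 sone. A one-line sketch of that conversion (not the full instantaneous-loudness models used in the study):

```python
def phon_to_sone(phon):
    """Stevens' rule: sones double per 10-phon step above the
    40-phon reference (valid above ~40 phon)."""
    return 2.0 ** ((phon - 40.0) / 10.0)
```

So a word whose level rises from 50 to 60 phon doubles in predicted loudness, from 2 to 4 sones; it is this compressive, perceptually scaled contour that made the loudness-sones model the more realistic of the two.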
Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
Natural sounds convey perceptually relevant information over multiple timescales, and the necessary extraction of multi-timescale information requires the auditory system to work over distinct ranges. The simplest hypothesis suggests that temporal modulations are encoded in an equivalent manner within a reasonable intermediate range. We show that the human auditory system selectively and preferentially tracks acoustic dynamics concurrently at two timescales corresponding to the neurophysiological theta band (4-7 Hz) and gamma band (31-45 Hz) but, contrary to expectation, not at the timescale corresponding to the alpha band (8-12 Hz), which has also been found to be related to auditory perception. Listeners heard synthetic acoustic stimuli with temporally modulated structures at three timescales (approximately 190-, 100-, and 30-ms modulation periods) and identified the stimuli while undergoing magnetoencephalography recording. There was strong intertrial phase coherence in the theta band for stimuli of all modulation rates and in the gamma band for stimuli with corresponding modulation rates. The alpha band did not respond in a similar manner. Classification analyses also revealed that oscillatory phase reliably tracked temporal dynamics, but not equivalently across rates. Finally, mutual information analyses quantifying the relation between phase and cochlear-scaled correlations also showed preferential processing in two distinct regimes, with the alpha range again yielding different patterns. The results support the hypothesis that the human auditory system employs (at least) a two-timescale processing mode, in which lower and higher perceptual sampling scales are segregated by an intermediate temporal regime in the alpha band that likely reflects different underlying computations.
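Intertrial phase coherence, the central measure in the study above, is the length of the mean unit-magnitude phasor across trials at a given frequency and time point. A minimal sketch with made-up phase values:

```python
import math
import cmath

def itpc(phases):
    """Inter-trial phase coherence: length of the mean unit phasor.
    1.0 = identical phase on every trial; ~0 = uniformly random phase."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

# Perfectly phase-locked trials vs. trials with opposing phases
locked = itpc([0.3] * 20)
cancelled = itpc([0.0, math.pi] * 10)
```

In a real analysis the phases would come from a time-frequency decomposition (e.g. wavelet or Hilbert transform) of each trial in the band of interest, here theta, alpha, or gamma.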
Nicholls, Michael E R; Gora, John; Stough, Con K K
Lateralization for temporal processing was investigated using evoked potentials to an auditory and a visual gap detection task in 12 dextral adults. The auditory stimuli consisted of 300-ms bursts of white noise, half of which contained an interruption lasting 4 or 6 ms. The visual stimuli consisted of 130-ms flashes of light, half of which contained a gap lasting 6 or 8 ms. The stimuli were presented bilaterally to both ears or both visual fields. Participants made a two-alternative forced-choice discrimination using a bimanual response. Manipulations of the task had no effect on the early evoked components. However, an effect was observed for a late positive component, which occurred approximately 300-400 ms following gap presentation. This component tended to be later and lower in amplitude for the more difficult stimulus conditions. An index of the capacity to discriminate gap from no-gap stimuli was gained by calculating the difference waveform between these conditions. The peak of the difference waveform was delayed for the short-gap stimuli relative to the long-gap stimuli, reflecting the lower difficulty associated with the latter. Topographic maps of the difference waveforms revealed a prominence over the left hemisphere. The visual stimuli had an occipito-parietal focus, whereas the auditory stimuli were parietally centered. These results confirm the importance of the left hemisphere for temporal processing and demonstrate that it is not the result of a hemispatial attentional bias or a peripheral sensory asymmetry.
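The auditory gap stimuli described above (300-ms noise bursts, half containing a brief silent interruption) are straightforward to synthesize. The sketch below is a hedged illustration in Python/numpy; the 150-ms gap onset is an arbitrary choice, since the abstract does not specify where in the burst the gap occurred:

```python
import numpy as np

def gap_in_noise(duration_ms=300.0, gap_ms=6.0, gap_onset_ms=150.0,
                 fs=44100, seed=0):
    """White-noise burst with an optional silent gap, in the style of a
    gap-detection stimulus. Set gap_ms=0 to produce the no-gap
    standard. The gap onset time is illustrative, not from the study."""
    rng = np.random.default_rng(seed)
    n = int(round(duration_ms * fs / 1000.0))
    noise = rng.standard_normal(n)
    if gap_ms > 0:
        start = int(round(gap_onset_ms * fs / 1000.0))
        stop = start + int(round(gap_ms * fs / 1000.0))
        noise[start:stop] = 0.0  # silent interruption
    return noise

stim = gap_in_noise(gap_ms=6.0)      # gap trial
standard = gap_in_noise(gap_ms=0.0)  # no-gap trial
```

In a two-alternative forced-choice design like the one above, gap and no-gap versions of the same noise would be presented and the listener asked which contained the interruption.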
Azadpour, Mahan; McKay, Colette M
Auditory brainstem implants (ABI) use the same processing strategy as was developed for cochlear implants (CI). However, the cochlear nucleus (CN), the stimulation site of ABIs, is anatomically and physiologically more complex than the auditory nerve and consists of neurons with differing roles in auditory processing. The aim of this study was to evaluate the hypotheses that ABI users are less able than CI users to access speech spectro-temporal information delivered by the existing strategies and that the sites stimulated by different locations of CI and ABI electrode arrays differ in encoding of temporal patterns in the stimulation. Six CI users and four ABI users of Nucleus implants with ACE processing strategy participated in this study. Closed-set perception of aCa syllables (16 consonants) and bVd words (11 vowels) was evaluated via experimental processing strategies that activated one, two, or four of the electrodes of the array in a CIS manner as well as subjects' clinical strategies. Three single-channel strategies presented the overall temporal envelope variations of the signal on a single-implant electrode located at the high-, medium-, and low-frequency regions of the array. Implantees' ability to discriminate within electrode temporal patterns of stimulation for phoneme perception and their ability to make use of spectral information presented by increased number of active electrodes were assessed in the single- and multiple-channel strategies, respectively. Overall percentages and information transmission of phonetic features were obtained for each experimental program. Phoneme perception performance of three ABI users was within the range of CI users in most of the experimental strategies and improved as the number of active electrodes increased. One ABI user performed close to chance with all the single and multiple electrode strategies. There was no significant difference between apical, basal, and middle CI electrodes in transmitting speech
Steinbrink, Claudia; Groth, Katarina; Lachmann, Thomas; Riecker, Axel
This fMRI study investigated phonological vs. auditory temporal processing in developmental dyslexia by means of a German vowel length discrimination paradigm (Groth, Lachmann, Riecker, Muthmann, & Steinbrink, 2011). Behavioral and fMRI data were collected from dyslexics and controls while performing same-different judgments of vowel duration in…
Anderson, Carly A; Lazard, Diane S; Hartley, Douglas E H
While many individuals can benefit substantially from cochlear implantation, the ability to perceive and understand auditory speech with a cochlear implant (CI) remains highly variable amongst adult recipients. Importantly, auditory performance with a CI cannot be reliably predicted based solely on routinely obtained information regarding clinical characteristics of the CI candidate. This review argues that central factors, notably cortical function and plasticity, should also be considered as important contributors to the observed individual variability in CI outcome. Superior temporal cortex (STC), including auditory association areas, plays a crucial role in the processing of auditory and visual speech information. The current review considers evidence of cortical plasticity within bilateral STC, and how these effects may explain variability in CI outcome. Furthermore, evidence of audio-visual interactions in temporal and occipital cortices is examined, and relation to CI outcome is discussed. To date, longitudinal examination of changes in cortical function and plasticity over the period of rehabilitation with a CI has been restricted by methodological challenges. The application of functional near-infrared spectroscopy (fNIRS) in studying cortical function in CI users is becoming increasingly recognised as a potential solution to these problems. Here we suggest that fNIRS offers a powerful neuroimaging tool to elucidate the relationship between audio-visual interactions, cortical plasticity during deafness and following cochlear implantation, and individual variability in auditory performance with a CI. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Grube, Manon; Cooper, Freya E; Griffiths, Timothy D
This work tests the hypothesis that language skill depends on the ability to incorporate streams of sound into an accurate temporal framework. We tested the ability of young English-speaking adults to process single time intervals and rhythmic sequences of such intervals, hypothesized to be relevant to the analysis of the temporal structure of language. The data implicate a specific role for the ability to process beat-based temporal regularities in phonological language and literacy skill.
Christiansen, Simon Krogholt
The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent…
Background: Sensory consequences of our own actions are perceived differently from sensory stimuli that are generated externally. The present event-related potential (ERP) study examined the neural responses to self-triggered stimulation relative to externally-triggered stimulation as a function of delays between the motor act and the stimulus onset. While sustaining a vowel phonation, subjects clicked a mouse and heard pitch-shift stimuli (PSS) in voice auditory feedback at delays of either 0 ms (predictable) or 500–1000 ms (unpredictable). The motor effect resulting from the mouse click was corrected in the data analyses. For the externally-triggered condition, PSS were delivered by a computer with a delay of 500–1000 ms after the vocal onset. Results: As compared to unpredictable externally-triggered PSS, P2 responses to predictable self-triggered PSS were significantly suppressed, whereas an enhancement effect for P2 responses was observed when the timing of self-triggered PSS was unpredictable. Conclusions: These findings demonstrate the effect of the temporal predictability of stimulus delivery with respect to the motor act on the neural responses to self-triggered stimulation. Responses to self-triggered stimulation were suppressed or enhanced compared with externally-triggered stimulation when the timing of stimulus delivery was predictable or unpredictable, respectively. The enhancement effect of unpredictable self-triggered stimulation in the present study supports the idea that sensory suppression of self-produced action may be caused primarily by an accurate prediction of stimulus timing, rather than by a movement-related non-specific suppression.
Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., the probe) presented nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.
Mishra, Srikanta K; Panda, Manas R; Herbert, Carolyn
Many features of auditory perception are positively altered in musicians. Traditionally, auditory mechanisms in musicians are investigated using the Western-classical musician model. The objective of the present study was to adopt an alternative model, Indian classical music, to further investigate auditory temporal processing in musicians. This study shows that musicians have significantly lower across-channel gap detection thresholds compared to nonmusicians. Use of the South Indian musician model provides increased external validity for the prediction, from studies on Western-classical musicians, that auditory temporal coding is enhanced in musicians.
Share, David L.; Jorm, Anthony F.; Maclean, Rod; Matthews, Russell
Examines the hypothesis that early auditory temporal processing deficits cause later specific reading disability by impairing phonological processing. Suggests that auditory temporal deficits in dyslexics may be associated with dysphasic-type symptoms observed by Tallal and her colleagues in specific language-impaired populations, but do not cause…
, sound localization, and auditory closure, and to investigate possible associations with complaints of learning, communication and language difficulties in individuals with unilateral hearing loss. METHODS: Participants were 26 individuals with ages between 8 and 15 years, divided into two groups: Unilateral hearing loss group; and Normal hearing group. Each group was composed of 13 individuals, matched by gender, age and educational level. All subjects were submitted to anamnesis, peripheral hearing evaluation, and auditory processing evaluation through behavioral tests of sound localization, sequential memory, Random Detection Gap test, and speech-in-noise test. Nonparametric statistical tests were used to compare the groups, considering the presence or absence of hearing loss and the ear with hearing loss. RESULTS: Unilateral hearing loss started during preschool, and had unknown or identified etiologies, such as meningitis, traumas or mumps. Most individuals reported delays in speech, language and learning developments, especially those with hearing loss in the right ear. The group with hearing loss had worse responses in the abilities of temporal ordering and resolution, sound localization and auditory closure. Individuals with hearing loss in the left ear showed worse results than those with hearing loss in the right ear in all abilities, except in sound localization. CONCLUSION: The presence of unilateral hearing loss causes sound localization, auditory closure, temporal ordering and temporal resolution difficulties. Individuals with unilateral hearing loss in the right ear have more complaints than those with unilateral hearing loss in the left ear. Individuals with hearing loss in the left ear have more difficulties in auditory closure, temporal resolution, and temporal ordering.
... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...
Füllgrabe, Christian; Moore, Brian C J; Stone, Michael A
Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms, and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60-79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125-6 kHz were matched to nine young (YNH; 18-27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies 5-180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than YNH participants, and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than YNH participants, and generally were correlated positively with SiN identification scores. The best predictors of the intelligibility of SiN were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric
Efron, R; Crandall, P H; Koss, B; Divenyi, P L; Yund, E W
The capacity to selectively attend to only one of multiple, spatially separated, simultaneous sound sources--the "cocktail party" effect--was evaluated in normal subjects and in those with anterior temporal lobectomy using common environmental sounds. A significant deficit in this capacity was observed for those stimuli located on the side of space contralateral to the lobectomy, a finding consistent with the hypothesis that within each anterior temporal lobe is a mechanism that is normally capable of enhancing the perceptual salience of one acoustic stimulus on the opposite side of space, when other sound sources are present on that side. Damage to this mechanism also appears to be associated with a deficit of spatial localization for sounds contralateral to the lesion.
Alessandra Giannella Samelli
BACKGROUND: temporal auditory processing and temporal resolution. PURPOSE: to provide a theoretical review of auditory processing and temporal resolution, as well as of the different marker parameters used in gap detection tests and how they can interfere in threshold determination. CONCLUSION: auditory processing and temporal resolution are key factors for language development. Because of the diverse parameters that can be used in gap detection testing, gap detection thresholds can vary considerably.
Anton Ludwig Beer
Functional magnetic resonance imaging (fMRI) has shown that the superior temporal and occipital cortex are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior-occipital cortex, and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, the msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.
Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.
van Kesteren, Marlieke T. R.; Wiersinga-Post, J. Esther C.
Purpose: Several studies on auditory temporal-order processing showed gender differences. Women needed longer inter-stimulus intervals than men when indicating the temporal order of two clicks presented to the left and right ear. In this study, we examined whether we could reproduce these results in…
Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.
An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)] but includes…
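As a rough intuition for what a modulation filterbank measures, the sketch below extracts a sound's envelope and sums its energy in roughly octave-wide bands around a set of modulation frequencies. This is a drastic simplification of the published model, which includes peripheral (gammatone) filtering, adaptation loops, and proper modulation filters; it is intended only to illustrate the concept:

```python
import numpy as np

def envelope_modulation_spectrum(signal, fs, mod_freqs=(4, 8, 16, 32, 64, 128)):
    """Crude envelope-modulation analysis: half-wave rectification as a
    stand-in for envelope extraction, then the envelope energy within
    an approximately one-octave band around each modulation center
    frequency. Illustrative only, not the Dau et al. model."""
    env = np.maximum(signal, 0.0)               # half-wave rectification
    spec = np.abs(np.fft.rfft(env)) / len(env)  # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    out = {}
    for fm in mod_freqs:
        band = (freqs > fm / np.sqrt(2)) & (freqs < fm * np.sqrt(2))
        out[fm] = np.sqrt(np.sum(spec[band] ** 2))
    return out

# A 1-kHz tone fully amplitude-modulated at 16 Hz should place most of
# its envelope energy in the 16-Hz modulation band.
fs = 16000
t = np.arange(fs) / fs  # 1 s
am_tone = (1.0 + np.cos(2 * np.pi * 16 * t)) * np.sin(2 * np.pi * 1000 * t)
energies = envelope_modulation_spectrum(am_tone, fs)
```

A bank of such bands, applied after cochlear filtering, is the basic idea behind modulation-filterbank accounts of temporal masking.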
Song, Kun; Luo, Huan
Memory is a constructive and organizational process. Instead of being stored with all the fine details, external information is reorganized and structured at certain spatiotemporal scales. It is well acknowledged that time plays a central role in audition by segmenting sound inputs into temporal chunks of appropriate length. However, it remains largely unknown whether critical temporal structures exist to mediate sound representation in auditory memory. To address the issue, here we designed an auditory memory transferring study, by combining a previously developed unsupervised white noise memory paradigm with a reversed sound manipulation method. Specifically, we systematically measured the memory transferring from a random white noise sound to its locally temporal reversed version on various temporal scales in seven experiments. We demonstrate a U-shape memory-transferring pattern with the minimum value around temporal scale of 200 ms. Furthermore, neither auditory perceptual similarity nor physical similarity as a function of the manipulating temporal scale can account for the memory-transferring results. Our results suggest that sounds are not stored with all the fine spectrotemporal details but are organized and structured at discrete temporal chunks in long-term auditory memory representation. PMID:28674512
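The local-reversal manipulation described above (time-reversing a sound within consecutive chunks of a given temporal scale while preserving chunk order) can be sketched as follows; function names are illustrative, and any trailing partial chunk is simply reversed as well:

```python
import numpy as np

def locally_reverse(signal, fs, chunk_ms):
    """Reverse a signal within consecutive chunks of `chunk_ms`
    milliseconds while keeping the order of the chunks intact."""
    n = int(round(fs * chunk_ms / 1000.0))
    return np.concatenate(
        [signal[i:i + n][::-1] for i in range(0, len(signal), n)]
    )

fs = 8000
noise = np.random.default_rng(1).standard_normal(fs)  # 1 s of white noise
flipped = locally_reverse(noise, fs, 200.0)           # 200-ms local reversal
```

Applying the manipulation twice with the same chunk size restores the original signal (when the length divides evenly into chunks), which makes it a convenient probe of how much memory transfers between a noise and its locally reversed version.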
For patients with pharmaco-resistant temporal epilepsy, unilateral anterior temporal lobectomy (ATL), i.e. the surgical resection of the hippocampus, the amygdala, the temporal pole, and the most anterior part of the temporal gyri, is an efficient treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with temporal epilepsy both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, with which we could evaluate auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients before and after ATL presented with similar deficits in pitch retention, and in identification and short-term memorization of environmental sounds, while not being impaired in basic acoustic processing compared to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to significantly improve seizure control without producing additional auditory deficits.
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten
kHz) and steeply sloping hearing losses above 1 kHz. For comparison, data were also collected for five normalhearing listeners. Temporal processing was addressed at low frequencies by means of psychoacoustical frequency discrimination, binaural masked detection and amplitude modulation (AM...
Barbour, Dennis L; Wang, Xiaoqin
Natural sounds often contain energy over a broad spectral range and consequently overlap in frequency when they occur simultaneously; however, such sounds under normal circumstances can be distinguished perceptually (e.g., the cocktail party effect). Sound components arising from different sources have distinct (i.e., incoherent) modulations, and incoherence appears to be one important cue used by the auditory system to segregate sounds into separately perceived acoustic objects. Here we show that, in the primary auditory cortex of awake marmoset monkeys, many neurons responsive to amplitude- or frequency-modulated tones at a particular carrier frequency [the characteristic frequency (CF)] also demonstrate sensitivity to the relative modulation phase between two otherwise identically modulated tones: one at CF and one at a different carrier frequency. Changes in relative modulation phase reflect alterations in temporal coherence between the two tones, and the most common neuronal response was found to be a maximum of suppression for the coherent condition. Coherence sensitivity was generally found in a narrow frequency range in the inhibitory portions of the frequency response areas (FRA), indicating that only some off-CF neuronal inputs into these cortical neurons interact with on-CF inputs on the same time scales. Over the population of neurons studied, carrier frequencies showing coherence sensitivity were found to coincide with the carrier frequencies of inhibition, implying that inhibitory inputs create the effect. The lack of strong coherence-induced facilitation also supports this interpretation. Coherence sensitivity was found to be greatest for modulation frequencies of 16-128 Hz, which is higher than the phase-locking capability of most cortical neurons, implying that subcortical neurons could play a role in the phenomenon. Collectively, these results reveal that auditory cortical neurons receive some off-CF inputs temporally matched and some temporally
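The notion of temporal coherence between two amplitude-modulated tones can be made concrete with a small sketch: two carriers share a sinusoidal modulator, and shifting the relative modulation phase makes their envelopes incoherent. This illustrates the stimulus logic only; it is not the authors' code, and the carrier and modulation frequencies are arbitrary choices within the ranges discussed above:

```python
import numpy as np

fs = 48000
t = np.arange(int(0.5 * fs)) / fs  # 0.5 s
fm = 32.0                          # modulation rate (Hz)

def am_tone(fc, mod_phase):
    """Sinusoidally amplitude-modulated tone; `mod_phase` shifts the
    modulator (the envelope), not the carrier."""
    return (1.0 + np.sin(2 * np.pi * fm * t + mod_phase)) * np.sin(2 * np.pi * fc * t)

def envelope(x):
    # Magnitude of the analytic signal via an FFT-based Hilbert transform.
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:n // 2] = 2.0
    h[n // 2] = 1.0  # n is even here
    return np.abs(np.fft.ifft(spectrum * h))

# Coherent condition: both carriers share the same modulator phase.
coherent = np.corrcoef(envelope(am_tone(1000.0, 0.0)),
                       envelope(am_tone(2000.0, 0.0)))[0, 1]
# Incoherent condition: modulator shifted by pi on the second carrier.
incoherent = np.corrcoef(envelope(am_tone(1000.0, 0.0)),
                         envelope(am_tone(2000.0, np.pi)))[0, 1]
```

Varying this relative modulation phase while holding everything else fixed is what lets an experiment isolate coherence sensitivity from responses to either tone alone.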
Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions, and then show one-trial recognition memory lasting 10-20 minutes. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 seconds. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys’ auditory memory performance. It is possible, therefore, that the anatomical pathways differ. Long-term visual recognition memory requires anatomical connections from the visual association area TE with areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher-order processing areas of the rostral superior temporal gyrus (rSTG), and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, and TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of PRC, entorhinal cortex, and areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys.
Billiet, Cassandra R.; Bellis, Teri James
Purpose: Studies using speech stimuli to elicit electrophysiologic responses have found approximately 30% of children with language-based learning problems demonstrate abnormal brainstem timing. Research is needed regarding how these responses relate to performance on behavioral tests of central auditory function. The purpose of the study was to…
Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.
The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, and found systematic depth dependencies in responses to second-and-later noise bursts in slow (1–10 bursts/s) trains of noise bursts. At all depths, responses to noise bursts within a train usually decreased with increasing train rate; however, the rolloff with increasing train rate occurred at faster rates in more superficial layers. Moreover, in some recordings from mid-to-superficial layers, responses to noise bursts within a 3–4 bursts/s train were stronger than responses to noise bursts in slower trains. This non-monotonicity with train rate was especially pronounced in more superficial layers of the anterior auditory field, where responses to noise bursts within the context of a slow train were sometimes even stronger than responses to the noise burst at train onset. These findings may reflect depth dependence in suppression and recovery of cortical activity following a stimulus, which we suggest could arise from laminar differences in synaptic depression at feedforward and recurrent synapses. PMID:21900562
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
Pecenka, Nadine; Engel, Annerose; Keller, Peter E
Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote a reflection on the importance of speech therapy for stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis considered the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, compared with the new auditory processing test done on May 13, 2003, after one year of therapy directed to acoustic stimulation of auditory abilities disorders, in acco...
The Handbook of Signal Processing in Acoustics will compile the techniques and applications of signal processing as they are used in the many varied areas of acoustics. The Handbook will emphasize the interdisciplinary nature of signal processing in acoustics. Each section of the Handbook will present topics on signal processing which are important in a specific area of acoustics. These will be of interest to specialists in these areas because they will be presented from their technical perspective, rather than a generic engineering approach to signal processing. Non-specialists, or specialists…
... APD is common in older adults, particularly when hearing loss is present. It is likely that many processes and problems contribute to APD in children. In adults, neurological disorders such as stroke, tumors, degenerative disease (such as ...
Lee, Hweeling; Noppeney, Uta
To form a coherent percept of the environment, the brain needs to bind sensory signals emanating from a common source, but to segregate those from different sources. Temporal correlations and synchrony act as prominent cues for multisensory integration [2-4], but the neural mechanisms by which such cues are identified remain unclear. Predictive coding suggests that the brain iteratively optimizes an internal model of its environment by minimizing the errors between its predictions and the sensory inputs [5,6]. This model enables the brain to predict the temporal evolution of natural audiovisual inputs and their statistical (for example, temporal) relationship. A prediction of this theory is that asynchronous audiovisual signals violating the model's predictions induce an error signal that depends on the directionality of the audiovisual asynchrony. As the visual system generates the dominant temporal predictions for visual leading asynchrony, the delayed auditory inputs are expected to generate a prediction error signal in the auditory system (and vice versa for auditory leading asynchrony). Using functional magnetic resonance imaging (fMRI), we measured participants' brain responses to synchronous, visual leading and auditory leading movies of speech, sinewave speech or music. In line with predictive coding, auditory leading asynchrony elicited a prediction error in visual cortices and visual leading asynchrony in auditory cortices. Our results reveal predictive coding as a generic mechanism to temporally bind signals from multiple senses into a coherent percept. Copyright © 2014 Elsevier Ltd. All rights reserved.
Pillion, Joseph P; Shiffler, Dorothy E; Hoon, Alexander H; Lin, Doris D M
To describe auditory function in an individual with bilateral damage to the temporal and parietal cortex. Case report. A previously healthy 17-year-old male is described who sustained extensive cortical injury following an episode of viral meningoencephalitis. He developed status epilepticus and required intubation and multiple anticonvulsants. Serial brain MRIs showed bilateral temporoparietal signal changes reflecting extensive damage to language areas and the first transverse gyrus of Heschl on both sides. The patient was referred for assessment of auditory processing but was so severely impaired in speech processing that he was unable to complete any formal tests of his speech processing abilities. Audiological assessment utilizing objective measures of auditory function established the presence of normal peripheral auditory function and illustrates the importance of the use of objective measures of auditory function in patients with injuries to the auditory cortex. Use of objective measures of auditory function is essential in establishing the presence of normal peripheral auditory function in individuals with cortical damage who may not be able to cooperate sufficiently for assessment utilizing behavioral measures of auditory function.
Rajendran, Vani G; Teki, Sundeep; Schnupp, Jan W H
Music is a curious example of a temporally patterned acoustic stimulus, and a compelling pan-cultural phenomenon. This review strives to bring some insights from decades of music psychology and sensorimotor synchronization (SMS) literature into the mainstream auditory domain, arguing that musical rhythm perception is shaped in important ways by temporal processing mechanisms in the brain. The feature that unites these disparate disciplines is an appreciation of the central importance of timing, sequencing, and anticipation. Perception of musical rhythms relies on an ability to form temporal predictions, a general feature of temporal processing that is equally relevant to auditory scene analysis, pattern detection, and speech perception. By bringing together findings from the music and auditory literature, we hope to inspire researchers to look beyond the conventions of their respective fields and consider the cross-disciplinary implications of studying auditory temporal sequence processing. We begin by highlighting music as an interesting sound stimulus that may provide clues to how temporal patterning in sound drives perception. Next, we review the SMS literature and discuss possible neural substrates for the perception of, and synchronization to, musical beat. We then move away from music to explore the perceptual effects of rhythmic timing in pattern detection, auditory scene analysis, and speech perception. Finally, we review the neurophysiology of general timing processes that may underlie aspects of the perception of rhythmic patterns. We conclude with a brief summary and outlook for future research. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Arne Freerk Meyer
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimation of sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term temporally localized receptive field may deviate stochastically with time-varying standard deviation. The derived corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure also for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with characteristic temporal resolution of 5 s to 30 s, based on model simulations and responses from in total 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms) overlapping frequency-modulated tones. Results demonstrate identification of time-varying STRFs, with obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization of STRF variability reveals a higher degree thereof in auditory cortex compared to midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than experimental or stimulus design.
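The prior-centred estimation idea in this abstract can be sketched as a MAP (ridge-style) regression that shrinks a short-term, windowed receptive field estimate toward the long-term static estimate. A minimal illustration on synthetic data follows; the variable names, toy dimensions, and noise/prior variances are all hypothetical assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: spectrogram-like stimulus features and a response trace.
n_samples, n_features = 2000, 40

# "True" underlying receptive field used to generate synthetic responses.
w_true = rng.normal(size=n_features)
X = rng.normal(size=(n_samples, n_features))
y = X @ w_true + rng.normal(scale=0.5, size=n_samples)

def short_term_rf(X_win, y_win, w_prior, prior_var=0.1, noise_var=0.25):
    """MAP estimate of a windowed (short-term) receptive field under a
    Gaussian prior centred on the long-term static estimate w_prior."""
    n = X_win.shape[1]
    A = X_win.T @ X_win / noise_var + np.eye(n) / prior_var
    b = X_win.T @ y_win / noise_var + w_prior / prior_var
    return np.linalg.solve(A, b)

# Long-term static estimate from all data (ordinary least squares).
w_hat_static, *_ = np.linalg.lstsq(X, y, rcond=None)

# Short-term estimate on a small window: shrinks toward the static RF,
# deviating only where the windowed data give strong evidence.
w_win = short_term_rf(X[:100], y[:100], w_hat_static)
```

Tightening the prior (smaller `prior_var`) pins the short-term estimate to the static one; loosening it lets the window-local data dominate, which is the trade-off the time-varying model exploits.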
Mishra, Srikanta K; Panda, Manasa R
Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.
Kaminska, A; Delattre, V; Laschet, J; Dubois, J; Labidurie, M; Duval, A; Manresa, A; Magny, J-F; Hovhannisyan, S; Mokhtari, M; Ouss, L; Boissel, A; Hertz-Pannier, L; Sintsov, M; Minlebaev, M; Khazipov, R; Chiron, C
Characteristic preterm EEG patterns of "Delta-brushes" (DBs) have been reported in the temporal cortex following auditory stimuli, but their spatio-temporal dynamics remain elusive. Using 32-electrode EEG recordings and co-registration of electrodes' position to 3D-MRI of age-matched neonates, we explored the cortical auditory-evoked responses (AERs) after 'click' stimuli in 30 healthy neonates aged 30-38 post-menstrual weeks (PMW). (1) We visually identified auditory-evoked DBs within AERs in all the babies between 30 and 33 PMW and a decreasing response rate afterwards. (2) The AERs showed an increase in EEG power from delta to gamma frequency bands over the middle and posterior temporal regions, with higher values in quiet sleep and on the right. (3) Time-frequency and averaging analyses showed that the delta component of DBs, which negatively peaked around 550 and 750 ms over the middle and posterior temporal regions, respectively, was superimposed with fast (alpha-gamma) oscillations and corresponded to the late part of the cortical auditory-evoked potential (CAEP), a feature missed when using classical CAEP processing. As the evoked DB rate and AER delta to alpha frequency power decreased until full term, auditory-evoked DBs are thus associated with the prenatal development of auditory processing and may suggest an early emerging hemispheric specialization. © The Author 2017. Published by Oxford University Press. All rights reserved.
Fostick, Leah; Babkoff, Harvey
Some researchers have suggested that one central mechanism is responsible for temporal order judgments (TOJ), within and across sensory channels. This suggestion is supported by findings of similar TOJ thresholds in same-modality and cross-modality TOJ tasks. In the present study, we challenge this idea by analyzing and comparing the threshold distributions of the spectral and spatial TOJ tasks. In spectral TOJ, the tones differ in their frequency ("high" and "low") and are delivered either binaurally or monaurally. In spatial (or dichotic) TOJ, the two tones are identical but are presented asynchronously to the two ears and thus differ with respect to which ear received the first tone and which ear received the second tone ("left"/"right"). Although both tasks are regarded as measures of auditory temporal processing, a review of data published in the literature suggests that they trigger different patterns of response. The aim of the current study was to systematically examine spectral and spatial TOJ threshold distributions across a large number of studies. Data are based on 388 participants in 13 spectral TOJ experiments, and 222 participants in 9 spatial TOJ experiments. None of the spatial TOJ distributions deviated significantly from the Gaussian, while all of the spectral TOJ threshold distributions were skewed to the right, with more than half of the participants accurately judging temporal order at very short interstimulus intervals (ISIs). The data do not support the hypothesis that one central mechanism is responsible for all temporal order judgments. We suggest that different perceptual strategies are employed when performing spectral TOJ than when performing spatial TOJ. We posit that the spectral TOJ paradigm may provide the opportunity for two-tone masking or temporal integration, which is sensitive to the order of the tones and thus provides perceptual cues that may be used to judge temporal order. This possibility should be considered when interpreting…
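The distributional contrast described above (Gaussian-shaped spatial TOJ thresholds versus right-skewed spectral TOJ thresholds) can be illustrated with a standard skewness test. The sample sizes below mirror the abstract, but the distributions themselves are hypothetical stand-ins, not the authors' data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical threshold samples: spatial TOJ thresholds drawn from a
# Gaussian, spectral TOJ thresholds from a right-skewed lognormal.
# Sample sizes follow the abstract (222 spatial, 388 spectral).
spatial = rng.normal(loc=60.0, scale=15.0, size=222)
spectral = rng.lognormal(mean=3.0, sigma=0.9, size=388)

# D'Agostino skewness test: null hypothesis of zero skew (Gaussian-like).
_, p_spatial = stats.skewtest(spatial)
_, p_spectral = stats.skewtest(spectral)
```

On data shaped like this, the test rejects zero skew for the lognormal (spectral-like) sample while the Gaussian (spatial-like) sample is consistent with the null, mirroring the qualitative pattern the study reports.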
This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…
Yathiraj, Asha; Maggu, Akshay Raj
The presence of auditory processing disorder in school-age children has been documented (Katz and Wilde, 1985; Chermak and Musiek, 1997; Jerger and Musiek, 2000; Muthuselvi and Yathiraj, 2009). In order to identify these children early, there is a need for a screening test that is not very time-consuming. The present study aimed to evaluate the independence of four subsections of the Screening Test for Auditory Processing (STAP) developed by Yathiraj and Maggu (2012). The test was designed to address auditory separation/closure, binaural integration, temporal resolution, and auditory memory in school-age children. The study also aimed to examine the number of children who are at risk for different auditory processes. A factor analysis research design was used in the current study. Four hundred school-age children consisting of 218 males and 182 females were randomly selected from 2400 children attending three schools. The children, aged 8 to 13 yr, were in grade three to eight class placements. DATA COLLECTION AND ANALYSES: The children were evaluated on the four subsections of the STAP (speech perception in noise, dichotic consonant-vowel [CV], gap detection, and auditory memory) in a quiet room within their school. The responses were analyzed using principal component analysis (PCA) and confirmatory factor analysis (CFA). In addition, the data were also analyzed to determine the number of children who were at risk for an auditory processing disorder (APD). Based on the PCA, three components with eigenvalues greater than 1 were extracted. The orthogonal rotation of the variables using the Varimax technique revealed that component 1 consisted of binaural integration, component 2 consisted of temporal resolution, and component 3 was shared by auditory separation/closure and auditory memory. These findings were confirmed using CFA, where the predicted model displayed a good fit with or without the inclusion of the auditory memory subsection. It was determined that 16…
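The component-extraction step reported above (PCA retaining components with eigenvalues greater than 1, i.e. the Kaiser criterion) can be sketched on toy subtest scores. The data below are fabricated for illustration and do not reproduce the STAP results:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated scores for four subtests (rows: children, cols: subtests),
# built so the first two subtests share a latent factor.
n = 400
latent = rng.normal(size=n)
scores = np.column_stack([
    latent + rng.normal(scale=0.5, size=n),  # subtest sharing the factor
    latent + rng.normal(scale=0.5, size=n),  # subtest sharing the factor
    rng.normal(size=n),                      # independent subtest
    rng.normal(size=n),                      # independent subtest
])

# Kaiser criterion: keep principal components of the correlation matrix
# whose eigenvalues exceed 1.
R = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(R)[::-1]  # descending order
n_components = int(np.sum(eigvals > 1.0))
```

The eigenvalues of a correlation matrix sum to the number of variables, so an eigenvalue above 1 marks a component that explains more variance than a single raw subtest, which is the rationale behind the criterion the study applies.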
Rimmele, Johanna Maria; Sussman, Elyse; Poeppel, David
Listening situations with multiple talkers or background noise are common in everyday communication and are particularly demanding for older adults. Here we review current research on auditory perception in aging individuals in order to gain insights into the challenges of listening under noisy conditions. Informationally rich temporal structure in auditory signals--over a range of time scales from milliseconds to seconds--renders temporal processing central to perception in the auditory domain. We discuss the role of temporal structure in auditory processing, in particular from a perspective relevant for hearing in background noise, and focusing on sensory memory, auditory scene analysis, and speech perception. Interestingly, these auditory processes, usually studied in an independent manner, show considerable overlap of processing time scales, even though each has its own 'privileged' temporal regimes. By integrating perspectives on temporal structure processing in these three areas of investigation, we aim to highlight similarities typically not recognized. Copyright © 2014 Elsevier B.V. All rights reserved.
Keilmann, A; Läßig, A K; Nospes, S
The definition of an auditory processing disorder (APD) is based on impairments of auditory functions. APDs are disturbances in processes central to hearing that cannot be explained by comorbidities such as attention deficit or language comprehension disorders. Symptoms include difficulties in differentiation and identification of changes in time, structure, frequency and intensity of sounds; problems with sound localization and lateralization, as well as poor speech comprehension in adverse listening environments and dichotic situations. According to the German definition of APD (as opposed to central auditory processing disorder, CAPD), peripheral hearing loss or cognitive impairment also exclude APD. The diagnostic methodology comprises auditory function tests and the required diagnosis of exclusion. APD is diagnosed if a patient's performance is two standard deviations below the normal mean in at least two areas of auditory processing. The treatment approach for an APD depends on the patient's particular deficits. Training, compensatory strategies and improvement of the listening conditions can all be effective.
An important property of sound is its variation as a function of time, which carries much relevant information about the origin of a given sound. Further, in analyzing the "meaning" of a sound perceptually, the temporal variation is of tremendous importance. In spite of its perceptual importance, much is still unknown of how temporal information is analyzed and represented in the auditory system. The PhD lecture concerns the topic of temporal processing in hearing, and the topic is approached via four different listening experiments designed to probe several aspects of temporal processing, including temporal pattern recognition, where listeners have to identify properties of the actual patterns of level changes. Typically, temporal processing is modeled by some sort of temporal summation or integration device. The results of the present experiments are to a large extent incompatible with this modeling…
…that for very short reflection delays (8 ms) the detectability of the test reflection is binaurally enhanced [Buchholz, JASA, 2005]. Considering the auditory processes underlying room reflection masking, increasing the reflection delay on the one hand changes the spectral and temporal characteristics of the stimulus and on the other hand produces an increasing forward fringe (i.e., a reflection-free direct sound interval). Throughout this study, it is investigated to what extent the auditory processes underlying simultaneous room reflection masking utilize the information provided by (a) the reflection…
Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali
Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.
Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin
The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors.
Durante, Alessandra Spada; Massa, Beatriz; Pucci, Beatriz; Gudayol, Nicolly; Gameiro, Marcella; Lopes, Cristiane
To determine the effect of passive smoking on auditory temporal resolution in primary school children, based on the hypothesis that individuals who are exposed to smoking exhibit impaired performance. Auditory temporal resolution was evaluated using the Gaps In Noise (GIN) test. Exposure to passive smoking was assessed by measuring a nicotine metabolite (cotinine) excreted in the first urine of the day. The study included 90 children with a mean age of 10.2 ± 0.1 years from a public school in São Paulo. Participants were divided into two groups: a study group, comprising 45 children exposed to passive smoking (cotinine > 5 ng/mL); and a control group, comprising 45 children who were not exposed to passive smoking. All participants had normal audiometry and immittance test results. Children exposed to passive smoking showed statistically significantly poorer performance, both in terms of thresholds and percentage of correct responses, on the auditory temporal resolution assessment. Copyright © 2017 Elsevier B.V. All rights reserved.
Yamamoto, Kosuke; Kawabata, Hideaki
We ordinarily speak fluently, even though our perceptions of our own voices are disrupted by various environmental acoustic properties. The underlying mechanism of speech is supposed to monitor the temporal relationship between speech production and the perception of auditory feedback, as suggested by a reduction in speech fluency when the speaker is exposed to delayed auditory feedback (DAF). While many studies have reported that DAF influences speech motor processing, its relationship to the temporal tuning effect on multimodal integration, or temporal recalibration, remains unclear. We investigated whether the temporal aspects of both speech perception and production change due to adaptation to the delay between the motor sensation and the auditory feedback. This is a well-used method of inducing temporal recalibration. Participants continually read texts with specific DAF times in order to adapt to the delay. Then, they judged the simultaneity between the motor sensation and the vocal feedback. We measured the rates of speech with which participants read the texts in both the exposure and re-exposure phases. We found that exposure to DAF changed both the rate of speech and the simultaneity judgment, that is, participants' speech gained fluency. Although we also found that a delay of 200 ms appeared to be most effective in decreasing the rates of speech and shifting the distribution on the simultaneity judgment, there was no correlation between these measurements. These findings suggest that both speech motor production and multimodal perception are adaptive to temporal lag but are processed in distinct ways.
Ellen de Wit
Presentation at the CPLOL congress, Florence. In this systematic review, six electronic databases were searched for peer-reviewed studies using the key words auditory processing, auditory diseases, central [Mesh], and auditory perceptual. Two reviewers independently assessed relevant studies by inclusion…
Qiu, Anqi; Schreiner, Christoph E; Escabí, Monty A
The spectro-temporal receptive field (STRF) is a model representation of the excitatory and inhibitory integration area of auditory neurons. Recently it has been used to study spectral and temporal aspects of monaural integration in auditory centers. Here we report the properties of monaural STRFs and the relationship between ipsi- and contralateral inputs to neurons of the central nucleus of the cat inferior colliculus (ICC). First, we use an optimal singular-value decomposition method to approximate auditory STRFs as a sum of time-frequency separable Gabor functions. This procedure extracts nine physiologically meaningful parameters. The STRFs of approximately 60% of collicular neurons are well described by a time-frequency separable Gabor STRF model, whereas the remaining neurons exhibited obliquely oriented or multiple excitatory/inhibitory subfields that require a nonseparable Gabor fitting procedure. Parametric analysis reveals distinct spectro-temporal tradeoffs in receptive field size and modulation filtering resolution. Comparisons with an identical model used to study spatio-temporal integration areas of visual neurons further show that auditory and visual STRFs share numerous structural properties. We then use the Gabor STRF model to compare quantitatively receptive field properties of contra- and ipsilateral inputs to the ICC. We show that most interaural STRF parameters are highly correlated bilaterally. However, the spectral and temporal phases of ipsi- and contralateral STRFs often differ significantly. This suggests that activity originating from each ear shares various spectro-temporal response properties such as temporal delay, bandwidth, and center frequency, but has shifted or interleaved patterns of excitation and inhibition.
These differences in converging monaural receptive fields expand binaural processing capacity beyond interaural time and intensity aspects and may enable colliculus neurons to detect disparities in the spectro-temporal
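The separable-approximation step described above can be illustrated with a generic rank-1 SVD decomposition of a synthetic STRF. This is only a sketch of the underlying idea: the paper's optimal Gabor-fitting procedure and its nine parameters are more elaborate, and the axes, profiles, and noise level below are invented for the example.

```python
import numpy as np

# Toy spectro-temporal receptive field (STRF): a frequency x time matrix
# built as an outer product (hence exactly time-frequency separable)
# plus a little noise. All values here are invented for illustration.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.05, 50)               # time axis, 0-50 ms
f = np.linspace(0.0, 1.0, 40)                # normalized frequency axis
temporal = np.exp(-t / 0.01) * np.sin(2 * np.pi * 100.0 * t)
spectral = np.exp(-((f - 0.5) ** 2) / 0.01)
strf = np.outer(spectral, temporal) + 0.02 * rng.standard_normal((40, 50))

# SVD: the leading singular component is the best rank-1 (separable)
# approximation in the least-squares sense; the fraction of energy it
# captures serves as a separability index in [0, 1].
u, s, vt = np.linalg.svd(strf, full_matrices=False)
separability = s[0] ** 2 / np.sum(s ** 2)
rank1 = s[0] * np.outer(u[:, 0], vt[0])
```

A neuron whose STRF is well described by a separable model yields an index near 1; obliquely oriented or multi-subfield STRFs push the energy into higher singular components.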
Alpherts, W.C.J.; Vermeulen, J.; Franken, M.L.O.; Hendriks, M.P.H.; Veelen, C.W.M. van; Rijen, P.C. van
In the visual modality, short rhythmic stimuli have been shown to be better processed (sequentially) by the left hemisphere, while longer rhythms appear to be better (holistically) processed by the right hemisphere. This study was set up to see if the same holds in the auditory modality. The rhythm…
Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten
The ability to perceptually separate acoustic sources and focus one's attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422-438 (2008)] was used as the front end of a model for auditory stream segregation. A temporal coherence analysis [M. Elhilali, C. Ling, C. Micheyl, A. J. Oxenham and S. Shamma, Neuron 61, 317-329 (2009)] was applied at the output of the preprocessing, using the coherence across tonotopic channels to group…
Behroozmand, Roozbeh; Phillip, Lorelei; Johari, Karim; Bonilha, Leonardo; Rorden, Chris; Hickok, Gregory; Fridriksson, Julius
We investigated the brain network involved in speech sensorimotor processing by studying patients with post-stroke aphasia using an altered auditory feedback (AAF) paradigm. We combined lesion-symptom-mapping analysis and behavioral testing to examine the pervasiveness of speech sensorimotor deficits and their relationship with cortical damage. Sixteen participants with aphasia and sixteen neurologically intact individuals completed a speech task under AAF. The task involved producing speech vowel sounds under real-time pitch-shifted auditory feedback alteration. This task provided an objective measure of each individual's ability to compensate for mismatch (error) in speech auditory feedback. Results indicated that compensatory speech responses to AAF were significantly diminished in participants with aphasia compared with controls. We observed that within the aphasic group, subjects with lower scores on the speech repetition task exhibited a greater degree of diminished responses. Lesion-symptom-mapping analysis revealed that the onset phase (50-150 ms) of diminished AAF responses was predicted by damage to auditory cortical regions within the superior and middle temporal gyrus, whereas the rising phase (150-250 ms) and the peak (250-350 ms) of diminished AAF responses were predicted by damage to the inferior frontal gyrus and supramarginal gyrus areas, respectively. These findings suggest that damage to the auditory, motor, and auditory-motor integration networks is associated with impaired sensorimotor function for speech error processing. We suggest that a sensorimotor integration network, as revealed by brain regions related to temporally specific components of AAF responses, is related to speech processing and to specific aspects of speech impairment, notably repetition deficits, in individuals with aphasia. Copyright © 2017 Elsevier Inc. All rights reserved.
The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with the auditory skill areas most commonly addressed in…
Parving, A; Salomon, G; Elberling, Claus
An investigation of the middle components of the auditory evoked response (10–50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi…
Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A
The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. Copyright © 2016 Elsevier Inc. All rights reserved.
Araújo, Letícia Maria Martins; Feniman, Mariza Ribeiro; Carvalho, Fernanda Ribeiro Pinto de; Lopes-Herrera, Simone Aparecida
This study addressed the interrelation of phonetics, phonology and auditory processing in English Language Teaching. Purpose: to determine whether prior contact with English phonetics favors general learning of this language (L2, i.e. second language) in Portuguese speakers, and to verify the performance of these individuals on an auditory processing test before and after being taught the L2. Methods: participants were eight college students who had studied English only in high school, divided into two groups: a control group, enrolled only in English classes, and an experimental group, enrolled in English phonetics classes prior to enrollment in English classes. Participants were given an auditory processing test and an oral test in English (Oral Test) before and after the classes, and the data from the two time points were compared statistically using Student's t-test. Results: analyses indicated no difference in performance between groups. Scores indicated better performance by the control group in answering questions in English on the Oral Test, while the experimental group performed better on the auditory processing test after the English phonetics classes and English course. Conclusion: prior basic knowledge of English phonetics did not enhance general learning (improvement in pronunciation) of the second language; however, it improved temporal processing ability on the test used.
Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher
Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism… a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017, 10: 1857–1865.
Demanez, L; Dony-Closon, B; Lhonneux-Ledoux, E; Demanez, J P
Based on the American Speech-Language-Hearing Association (ASHA) Consensus Statement on central auditory processing and models for their exploration, a battery of audiological tests (Bilan Auditif Central--BAC) has been designed in French. The BAC consists of four types of psycho-acoustic tests: a speech-in-noise test, a dichotic test, a temporal processing test and a binaural interaction test. We briefly describe the rationale of these tests. The BAC is available in digital format. Descriptive statistics were computed on data obtained from 668 subjects divided into 15 age-groups ranging from 5 to 85 years old or over. All subjects had no complaints regarding hearing loss, normal tonal audiometry, and normal intelligence. Tests scores of the speech-in-noise test, the dichotic test and the binaural interaction test showed a normal distribution. Test scores of the temporal processing test did not follow a normal distribution. Effects of maturation and involution were clearly visible for all tests. The low correlation between scores obtained from the four tests pointed to the need for a battery of several tests to assess central auditory processing. We claim that the reported scores represent standard norms for the normal French-speaking population, and believe that the tests will be useful for evaluation of central auditory processing.
Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten
A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity…
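Two of the stages listed (hair-cell transduction, commonly approximated as half-wave rectification followed by lowpass filtering, and the 150-Hz modulation lowpass) can be sketched roughly as below. The filter orders, cutoffs, and test signal are illustrative placeholders, not the published model's parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 16000  # sampling rate in Hz (arbitrary for this sketch)

def haircell_envelope(x, fs, cutoff=1000.0):
    """Crude inner hair-cell stage: half-wave rectification followed
    by a first-order lowpass, as in many auditory front ends."""
    rectified = np.maximum(x, 0.0)
    b, a = butter(1, cutoff / (fs / 2))
    return lfilter(b, a, rectified)

def modulation_lowpass(env, fs, cutoff=150.0):
    """150-Hz lowpass limiting the range of temporal envelope
    fluctuations passed on to later modulation processing."""
    b, a = butter(1, cutoff / (fs / 2))
    return lfilter(b, a, env)

# A 1-kHz carrier amplitude-modulated at 20 Hz: the two stages should
# recover the slow 20-Hz envelope while suppressing the carrier.
t = np.arange(int(0.5 * fs)) / fs
x = (1 + 0.8 * np.sin(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 1000 * t)
env = modulation_lowpass(haircell_envelope(x, fs), fs)
```

The chain rectify-then-lowpass is the standard demodulation trick in such models; the remaining stages (basilar-membrane nonlinearity, adaptation, modulation filterbank, internal noise, detector) would follow the same pattern of composable signal transformations.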
Bigelow, James; Ng, Chi-Wing; Poremba, Amy
Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility that some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special Issue…
Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural… The purpose of the present thesis is to develop a computational auditory processing model that accounts for a large variety of experimental data on CMR, in order to obtain a more thorough understanding of the basic processing principles underlying the processing of across-frequency modulations. The second… grouping can influence the results in conditions where the processing in the auditory system is dominated by across-channel comparisons. Overall, this thesis provides insights into the specific mechanisms involved in the perception of comodulated sounds. The results are important as a basis for future…
Idiazábal-Aletxa, M A; Saperas-Rodríguez, M
Specific language impairment (SLI) is diagnosed when a child has difficulty in producing or understanding spoken language for no apparent reason. The diagnosis is made when language development is out of keeping with other aspects of development and possible explanatory causes have been excluded. In recent years the neurosciences have turned to the study of SLI. The ability to process two or more rapidly presented, successive auditory stimuli is believed to underlie successful language acquisition, and it has been proposed that SLI is the consequence of low-level abnormalities in auditory perception. Children with SLI also show a specific deficit in the automatic discrimination of syllables. Electrophysiological methods may reveal underlying immaturity or other abnormality of auditory processing even when behavioural thresholds look normal. There is much controversy about the role of such deficits in causing these children's language problems, and it has been difficult to establish solid, replicable findings in this area because of the heterogeneity of the population and because insufficient attention has been paid to maturational aspects of auditory processing.
No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal auditory field as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.
Stephen, Julia M; Kodituwakku, Piyadasa W; Kodituwakku, Elizabeth L; Romero, Lucinda; Peters, Amanda M; Sharadamma, Nirupama M; Caprihan, Arvind; Coffman, Brian A
Both sensory and cognitive deficits have been associated with prenatal exposure to alcohol; however, very few studies have focused on sensory deficits in preschool-aged children. As sensory skills develop early, characterization of sensory deficits using novel imaging methods may reveal important neural markers of prenatal alcohol exposure. Participants in this study were 10 children with a fetal alcohol spectrum disorder (FASD) and 15 healthy control (HC) children aged 3 to 6 years. All participants had normal hearing as determined by clinical screens. We measured their neurophysiological responses to auditory stimuli (1,000 Hz, 72 dB tone) using magnetoencephalography (MEG). We used a multidipole spatio-temporal modeling technique to identify the location and timecourse of cortical activity in response to the auditory tones. The timing and amplitude of the left and right superior temporal gyrus sources associated with activation of left and right primary/secondary auditory cortices were compared across groups. There was a significant delay in M100 and M200 latencies for the FASD children relative to the HC children (p = 0.01), when including age as a covariate. The within-subjects effect of hemisphere was not significant. A comparable delay in M100 and M200 latencies was observed in children across the FASD subtypes. Auditory delay revealed by MEG in children with FASDs may prove to be a useful neural marker of information processing difficulties in young children with prenatal alcohol exposure. The fact that delayed auditory responses were observed across the FASD spectrum suggests that it may be a sensitive measure of alcohol-induced brain damage. Therefore, this measure in conjunction with other clinical tools may prove useful for early identification of alcohol affected children, particularly those without dysmorphia. Copyright © 2012 by the Research Society on Alcoholism.
Pierfilippo De Sanctis
That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e. rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.
Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten
The perceptual organization of two-tone sequences into auditory streams was investigated using a modeling framework consisting of an auditory pre-processing front end [Dau et al., J. Acoust. Soc. Am. 102, 2892–2905 (1997)] combined with a temporal coherence-analysis back end [Elhilali et al., Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) stream segregation as a function of tone repetition time (TRT) and frequency separation (Δf) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present study supported the hypothesis that forward masking enhances the ability to perceptually segregate spectrally close tone sequences. Furthermore, the modeling suggested that effects of neural adaptation and processing through modulation-frequency selective filters may enhance the sensitivity to onset…
Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy
Temporal pole (TP) cortex is associated with higher-order sensory perception and/or recognition memory, as human patients with damage in this region show impaired performance during some tasks requiring recognition memory (Olson et al. 2007). The underlying mechanisms of TP processing are largely based on examination of the visual nervous system in humans and monkeys, while little is known about neuronal activity patterns in the auditory portion of this region, dorsal TP (dTP; Poremba et al. 2003). The present study examines single-unit activity of dTP in rhesus monkeys performing a delayed matching-to-sample task utilizing auditory stimuli, wherein two sounds are determined to be the same or different. Neurons of dTP encode several task-relevant events during the delayed matching-to-sample task, and encoding of auditory cues in this region is associated with accurate recognition performance. Population activity in dTP shows a match suppression mechanism to identical, repeated sound stimuli similar to that observed in the visual object identification pathway located ventral to dTP (Desimone 1996; Nakamura and Kubota 1996). However, in contrast to sustained visual delay-related activity in nearby analogous regions, auditory delay-related activity in dTP is transient and limited. Neurons in dTP respond selectively to different sound stimuli and often change their sound response preferences between experimental contexts. Current findings suggest a significant role for dTP in auditory recognition memory similar in many respects to the visual nervous system, while delay memory firing patterns are not prominent, which may relate to monkeys' shorter forgetting thresholds for auditory vs. visual objects.
Katharina S. Rufener
Neural oscillations in the gamma range are the dominant rhythmic activation pattern in the human auditory cortex. These gamma oscillations are functionally relevant for the processing of rapidly changing acoustic information in both speech and non-speech sounds. Accordingly, there is a tight link between the temporal resolution ability of the auditory system and inherent neural gamma oscillations. Transcranial random noise stimulation (tRNS) has been demonstrated to specifically increase gamma oscillations in the human auditory cortex. However, neither the physiological mechanisms of tRNS nor the behavioral consequences of this intervention are completely understood. In the present study we stimulated the human auditory cortex bilaterally with tRNS while EEG was continuously measured. Modulations in the participants' temporal and spectral resolution ability were investigated by means of a gap detection task and a pitch discrimination task. Compared to sham, auditory tRNS increased the detection rate for near-threshold stimuli in the temporal domain only, while no such effect was present for the discrimination of spectral features. Behavioral findings were paralleled by reduced peak latencies of the P50 and N1 components of the auditory event-related potentials (ERPs), indicating an impact on early sensory processing. The facilitating effect of tRNS was limited to the processing of near-threshold stimuli, while stimuli clearly below and above the individual perception threshold were not affected by tRNS. This non-linear relationship between the signal-to-noise level of the presented stimuli and the effect of stimulation further qualifies stochastic resonance (SR) as the underlying mechanism of tRNS on auditory processing. Our results demonstrate a tRNS-related improvement in the perception of time-critical auditory information and thus provide further evidence that auditory tRNS can amplify the resonance frequency of the auditory system.
Diedler, Jennifer; Pietz, Joachim; Brunner, Monika; Hornberger, Cornelia; Bast, Thomas; Rupp, André
We examined basic auditory temporal processing in children with language-based learning problems (LPs) by applying magnetoencephalography. Auditory-evoked fields of 43 children (27 LP, 16 controls) were recorded while they passively listened to 100-ms white noise bursts with temporal gaps of 3, 6, 10 and 30 ms inserted after 5 or 50 ms. The P1m was evaluated by spatio-temporal source analysis. Psychophysical gap-detection thresholds were obtained for the same participants. Thirty-two percent of the LP children were not able to perform the early-gap psychoacoustic task. In addition, LP children displayed a significant delay of the P1m during the early-gap task. These findings provide evidence for a diminished neuronal representation of short auditory stimuli in the primary auditory cortex of LP children.
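The stimuli described (100-ms noise bursts with silent gaps of 3–30 ms inserted after 5 or 50 ms) can be generated along the following lines; the sampling rate and random seed are arbitrary choices for this sketch, and a real experiment would also apply onset/offset ramps.

```python
import numpy as np

fs = 44100  # sampling rate in Hz (arbitrary for this sketch)

def noise_with_gap(total_ms, gap_ms, gap_onset_ms, fs, seed=0):
    """White-noise burst with a silent gap inserted, the stimulus
    type used in gap-detection paradigms."""
    rng = np.random.default_rng(seed)
    n = int(total_ms * fs / 1000)
    x = rng.standard_normal(n)
    start = int(gap_onset_ms * fs / 1000)
    stop = start + int(gap_ms * fs / 1000)
    x[start:stop] = 0.0   # the silent gap the listener must detect
    return x

# A 100-ms burst with a 6-ms gap starting 50 ms in, matching the
# study's late-gap condition.
stim = noise_with_gap(100, 6, 50, fs)
```

Sweeping the gap duration toward zero and tracking the listener's detection rate yields the psychophysical gap-detection threshold referred to in the abstract.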
Auditory processing: comparison between auditory middle latency response and temporal pattern tests
Purpose: to verify the agreement between the results of the middle latency auditory evoked potential and temporal pattern tests. Methods: 155 subjects of both sexes, aged between seven and 16 years, with normal peripheral hearing, were evaluated. The subjects underwent the Pitch (Frequency) and Duration Pattern tests and the middle latency auditory evoked potential. Results: the subjects were classified into two groups, normal or altered auditory processing. The rate of alteration was around 30%, except for the middle latency auditory evoked potential, for which it was somewhat lower (17.4%). The frequency and duration patterns agreed up to 12 years of age; from 13 years on, alterations occurred more often in the frequency pattern than in the duration pattern. The frequency and duration patterns (right and left ears) and the middle latency auditory evoked potential did not agree. At ages 7 and 8, the combination of normal frequency/duration patterns with altered middle latency response occurred more often than the combination of altered frequency/duration patterns with normal middle latency response; at the other ages, the opposite occurred. There was no statistical difference between age groups in the distribution of normal and altered results for the frequency pattern (right and left ears) or for the middle latency auditory evoked potential, with the exception of the duration pattern in the 9- and 10-year-old group. Conclusion: there was no agreement between the results of the middle latency auditory evoked potential and the temporal pattern tests applied.
Lucker, Jay R.
Many children with problems learning in school can have educational deficits due to underlying auditory processing disorders (APD). These children can be identified as having auditory learning disabilities. Furthermore, an auditory learning disability is identified as a specific learning disability (SLD) under the IDEA. Educators and…
Christensen-Dalsgaard, Jakob; Tang, Yezhong; Carr, Catherine E
Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼ 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200-500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional.
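The interaural delays reported here (median 260 μs, best ITDs of 200–500 μs) are the kind of quantity a simple cross-correlation estimator recovers from two ear signals. The sketch below is a generic illustration of ITD estimation, not the authors' analysis; the sampling rate and click stimulus are invented for the example.

```python
import numpy as np

fs = 48000  # sampling rate in Hz (arbitrary for this sketch)

def estimate_itd(left, right, fs):
    """Interaural time difference (seconds) from the lag of the
    cross-correlation peak. Negative values mean the left-ear
    signal leads the right-ear signal."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / fs

# Simulate a click reaching the right ear 260 microseconds after the
# left ear (the median interaural delay reported for the gecko);
# the delay is quantized to the 48-kHz sample grid (~250 us).
delay = int(round(260e-6 * fs))
click = np.zeros(1024)
click[100] = 1.0
left = click
right = np.roll(click, delay)
itd = estimate_itd(left, right, fs)   # negative: left ear leads
```

For broadband clicks the cross-correlation peak is sharp, so the estimate is limited mainly by the sample period (about 21 μs at 48 kHz), comfortably finer than the 200–500 μs range measured in the nerve recordings.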
Lotfi, Yones; Moosavi, Abdollah; Abdollahi, Farzaneh Zamiri; BAKHSHI, Enayatollah; Sadjedi, Hamed
Background and Objectives: Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulties, which have been overlooked despite their significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. Aim o…
Brown, Rachel M; Chen, Joyce L; Hollinger, Avrum; Penhune, Virginia B; Palmer, Caroline; Zatorre, Robert J
Music performance requires control of two sequential structures: the ordering of pitches and the temporal intervals between successive pitches. Whether pitch and temporal structures are processed as separate or integrated features remains unclear. A repetition suppression paradigm compared neural and behavioral correlates of mapping pitch sequences and temporal sequences to motor movements in music performance. Fourteen pianists listened to and performed novel melodies on an MR-compatible piano keyboard during fMRI scanning. The pitch or temporal patterns in the melodies either changed or repeated (remained the same) across consecutive trials. We expected decreased neural response to the patterns (pitch or temporal) that repeated across trials relative to patterns that changed. Pitch and temporal accuracy were high, and pitch accuracy improved when either pitch or temporal sequences repeated over trials. Repetition of either pitch or temporal sequences was associated with linear BOLD decrease in frontal-parietal brain regions including dorsal and ventral premotor cortex, pre-SMA, and superior parietal cortex. Pitch sequence repetition (in contrast to temporal sequence repetition) was associated with linear BOLD decrease in the intraparietal sulcus (IPS) while pianists listened to melodies they were about to perform. Decreased BOLD response in IPS also predicted increase in pitch accuracy only when pitch sequences repeated. Thus, behavioral performance and neural response in sensorimotor mapping networks were sensitive to both pitch and temporal structure, suggesting that pitch and temporal structure are largely integrated in auditory-motor transformations. IPS may be involved in transforming pitch sequences into spatial coordinates for accurate piano performance.
Henry, Kenneth S; Heinz, Michael G
People with sensorineural hearing loss have substantial difficulty understanding speech under degraded listening conditions. Behavioral studies suggest that this difficulty may be caused by changes in auditory processing of the rapidly-varying temporal fine structure (TFS) of acoustic signals. In this paper, we review the presently known effects of sensorineural hearing loss on processing of TFS and slower envelope modulations in the peripheral auditory system of mammals. Cochlear damage has relatively subtle effects on phase locking by auditory-nerve fibers to the temporal structure of narrowband signals under quiet conditions. In background noise, however, sensorineural loss does substantially reduce phase locking to the TFS of pure-tone stimuli. For auditory processing of broadband stimuli, sensorineural hearing loss has been shown to severely alter the neural representation of temporal information along the tonotopic axis of the cochlea. Notably, auditory-nerve fibers innervating the high-frequency part of the cochlea grow increasingly responsive to low-frequency TFS information and less responsive to temporal information near their characteristic frequency (CF). Cochlear damage also increases the correlation of the response to TFS across fibers of varying CF, decreases the traveling-wave delay between TFS responses of fibers with different CFs, and can increase the range of temporal modulation frequencies encoded in the periphery for broadband sounds. Weaker neural coding of temporal structure in background noise and degraded coding of broadband signals along the tonotopic axis of the cochlea are expected to contribute considerably to speech perception problems in people with sensorineural hearing loss. This article is part of a Special Issue entitled "Annual Reviews 2013". Copyright © 2013 Elsevier B.V. All rights reserved.
Temporal resolution in children: comparing normal hearing, conductive hearing loss and auditory processing disorder
Sheila Andreoli Balen
Temporal resolution is essential to the acoustic perception of speech and may be altered in auditory disorders, impairing the development of spoken and written language. AIM: To compare the temporal resolution of children with normal hearing, conductive hearing loss, and auditory processing disorder. MATERIALS AND METHODS: The sample comprised 31 children aged 7 to 10 years, divided into three groups: G1, 12 children with normal hearing; G2, 7 with conductive hearing loss; and G3, 12 with auditory processing disorder. This was a clinical, experimental study. Selection procedures included a questionnaire answered by parents/guardians and audiologic and auditory processing assessments. The research procedure was a gap-detection-in-silence test performed binaurally at 50 dB SL above the average thresholds at 500, 1000, and 2000 Hz, testing 500, 1000, 2000, and 4000 Hz. Data were analyzed with the Wilcoxon test at a 1% significance level. RESULTS: Differences were observed between G1 and G2 and between G1 and G3 at all frequencies; no difference was observed between G2 and G3. CONCLUSION: Conductive hearing loss and auditory processing disorder influence the gap detection threshold.
Koefoed-Nielsen, Birger; Andersen, Svend Erik Søgaard
Over the last decade, evidence for the existence of auditory processing disorder (APD) has increased, and it is now time to address the phenomenon in daily clinical work. This article provides an overview of APD, focusing in particular on problems with its definition, diagnosis, and treatment.
Lancelot, Céline; Samson, Séverine; Ahad, Pierre; Baulac, Michel
Abstract: To investigate auditory spatial and nonspatial short-term memory, a sound location discrimination task and an auditory object discrimination task were used in patients with medial temporal lobe resection...
Engineer, Crystal T; Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P
Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. Copyright © 2017 Elsevier Inc. All rights reserved.
Tallal, Paula; And Others
Reviews research toward defining the neuropathological mechanisms responsible for developmental dysphasia. Hypothesizes that higher level auditory processing dysfunction may result from more basic temporal processing deficits which interfere with resolution of brief duration stimuli. Suggests two alternative hypotheses regarding the…
King, Wayne M; Lombardino, Linda J; Crandell, Carl C; Leonard, Christiana M
The primary objective of this study was to investigate the extent of comorbid auditory processing disorder (APD) in a group of adults with developmental dyslexia. An additional objective was to compare performance on auditory tasks with results from standardized tests of reading, in an attempt to generate a clinically useful profile of developmental dyslexics with comorbid APD. A group of 11 persons with developmental dyslexia and 14 age- and intelligence-matched controls participated in the study. Behavioral audiograms, 226-Hz tympanograms, and word recognition scores were obtained binaurally from all subjects. Both groups were administered the frequency-pattern test (FPT) and duration-pattern test (DPT) monaurally (30 items per ear) in both the left and right ear. Gap detection results were obtained in both groups (binaural presentation) using narrowband noise centered at 1 kHz in an adaptive two-alternative forced-choice (2-AFC) paradigm. The FPT, DPT, and gap detection results were analyzed for interaural (where applicable), intergroup, and intragroup differences. Correlations between performance on the auditory tasks and the standardized tests of reading were examined. Additive logistic regression models were fit to the data to determine which auditory tests proved to be the best predictors of group membership. The persons with developmental dyslexia as a group performed significantly poorer than controls on both the FPT and DPT. Furthermore, the group differences were significant in both monaural conditions. On the FPT and DPT, five of the 11 participants with dyslexia performed below the widely used clinical criterion for APD of 70% correct in either ear. All five of these participants performed below criterion on the FPT, whereas four of the five additionally performed below 70% on the DPT. The data also were analyzed by fitting a series of stepwise logistic regression models, which indicated that gap detection did not significantly predict group membership.
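The adaptive two-alternative forced-choice gap-detection procedure mentioned above is typically run as a transformed staircase. The sketch below is a minimal, illustrative 2-down/1-up implementation with a simulated listener (it converges near the 70.7%-correct point); the function name, parameters, and listener model are assumptions for illustration, not the study's actual protocol:

```python
import random

def run_staircase(true_threshold_ms, start_gap_ms=20.0, step_ms=2.0,
                  n_reversals=8, seed=0):
    """2-down/1-up adaptive staircase: the gap shrinks after two consecutive
    correct responses and grows after each error; the threshold estimate is
    the mean gap at the final reversal points."""
    rng = random.Random(seed)
    gap = start_gap_ms
    correct_streak = 0
    last_direction = None
    reversal_gaps = []
    while len(reversal_gaps) < n_reversals:
        # Simulated listener: always detects gaps above threshold,
        # otherwise guesses (50% correct in a 2-AFC trial).
        correct = gap >= true_threshold_ms or rng.random() < 0.5
        if correct:
            correct_streak += 1
            if correct_streak == 2:
                correct_streak = 0
                if last_direction == "up":      # direction change = reversal
                    reversal_gaps.append(gap)
                last_direction = "down"
                gap = max(gap - step_ms, 0.5)   # keep the gap positive
        else:
            correct_streak = 0
            if last_direction == "down":        # direction change = reversal
                reversal_gaps.append(gap)
            last_direction = "up"
            gap += step_ms
    return sum(reversal_gaps[-6:]) / len(reversal_gaps[-6:])

estimate = run_staircase(true_threshold_ms=6.0)
print(estimate)  # lands near the simulated 6 ms threshold
```

The 2-down/1-up rule is a common choice for gap-detection thresholds because it tracks a fixed point on the psychometric function without requiring the experimenter to guess the threshold in advance.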
Hurley, L.M.; Hall, I.C.
Context-dependent plasticity in auditory processing is achieved in part by physiological mechanisms that link behavioral state to neural responses to sound. The neuromodulator serotonin has many characteristics suitable for such a role. Serotonergic neurons are extrinsic to the auditory system but send projections to most auditory regions. These projections release serotonin during particular behavioral contexts. Heightened levels of behavioral arousal and specific extrinsic events, including stressful or social events, increase serotonin availability in the auditory system. Although the release of serotonin is likely to be relatively diffuse, highly specific effects of serotonin on auditory neural circuitry are achieved through the localization of serotonergic projections, and through a large array of receptor types that are expressed by specific subsets of auditory neurons. Through this array, serotonin enacts plasticity in auditory processing in multiple ways. Serotonin changes the responses of auditory neurons to input through the alteration of intrinsic and synaptic properties, and alters both short- and long-term forms of plasticity. The infrastructure of the serotonergic system itself is also plastic, responding to age and cochlear trauma. These diverse findings support a view of serotonin as a widespread mechanism for behaviorally relevant plasticity in the regulation of auditory processing. This view also accommodates models of how the same regulatory mechanism can have pathological consequences for auditory processing. PMID:21187135
Abstract: Background: Due to auditory experience, musicians have better auditory expertise than non-musicians. Several studies have observed increased neocortical activity during auditory oddball stimulation in musicians, and in non-musicians after discrimination training, suggesting that training modifies synaptic strength among simultaneously active neurons. We used amplitude-modulated (AM) tones presented in an oddball sequence and manipulated their carrier or modulation frequencies. We investigated non-musicians to see whether behavioral discrimination training could modify the neocortical activity generated by change detection of AM tone attributes (carrier or modulation frequency). Cortical evoked responses such as the N1 and the mismatch negativity (MMN) triggered by sound changes were recorded with a whole-head magnetoencephalography (MEG) system. We investigated (i) how the auditory cortex reacts to pitch differences (in carrier frequency) and changes in temporal features (modulation frequency) of AM tones, and (ii) how discrimination training modulates the neuronal activity reflecting the transient auditory responses generated in the auditory cortex. Results: In addition to an improvement in behavioral discrimination performance, training in the discrimination of carrier frequency changes significantly modulated the MMN and N1 response amplitudes after the training. This process was accompanied by an attention switch to the deviant stimulus after the training procedure, identified by the occurrence of a P3a component. In contrast, training in the discrimination of modulation frequency was not sufficient to improve behavioral discrimination performance or to alter the cortical response (MMN) to the modulation frequency change. The N1 amplitude, however, showed a significant increase after and one week after the training. Similar to the training in carrier frequency discrimination, a long-lasting…
Pediatric hearing evaluation based on pure-tone audiometry does not always reflect how a child hears in everyday life, and it is inappropriate for evaluating the difficulties that children experiencing auditory processing disorder (APD) face in school or on the playground. Despite the marked increase in research on pediatric APD, access to proper evaluation remains limited worldwide. This perspective article presents five common misconceptions about APD that contribute to inappropriate or limited management of children experiencing these deficits: (1) the disorder cannot be diagnosed due to the lack of a gold-standard diagnostic test; (2) generalizations can be made from profiles of children suspected of APD but not diagnosed with the disorder; (3) it is best to discard an APD diagnosis when another disorder is present; (4) the known link between auditory perception and higher cognitive function precludes the validity of APD as a clinical entity; and (5) APD is not a clinical entity. These five misconceptions are described and rebutted using published data as well as critical thinking on currently available knowledge on APD.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. The material consisted of 50 children aged 7–9 years. Children with articulation disorders remained under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry as well as speech therapy and psychological consultations. Additionally, a set of electrophysiological examinations was performed, including registration of the N2, P2, and P300 waves, and a psychoacoustic test of central auditory function, the frequency pattern test (FPT). The children then took part in regular auditory training and attended speech therapy. After treatment and therapy, speech was reassessed, the psychoacoustic tests were repeated, and P300 cortical potentials were recorded; statistical analyses were then performed. The analyses revealed that auditory training is highly effective in patients with dyslalia and other central auditory disorders. Auditory training may be an efficient therapy supporting speech therapy in children with dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
Boscariol M.; Andre K.D.; Feniman M.R.
Many children with auditory processing disorders have a high prevalence of otitis media, a middle-ear alteration that is highly prevalent in children with cleft palate and lip. Aim: To assess the performance of children with cleft palate alone (CP) on auditory processing tests. Prospective study. Materials and Methods: Twenty children (7 to 11 years) with CP were submitted to sound localization tests (SL), memory for verbal sounds in sequence (MSSV), memory for non-verbal sounds in sequence (MSSNV), and the Revised auditory fus...
Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.
Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals
Zamm, Anna; Pfordresher, Peter Q; Palmer, Caroline
Many behaviors require that individuals coordinate the timing of their actions with others. The current study investigated the role of two factors in temporal coordination of joint music performance: differences in partners' spontaneous (uncued) rate and auditory feedback generated by oneself and one's partner. Pianists performed melodies independently (in a Solo condition), and with a partner (in a duet condition), either at the same time as a partner (Unison), or at a temporal offset (Round), such that pianists heard their partner produce a serially shifted copy of their own sequence. Access to self-produced auditory information during duet performance was manipulated as well: Performers heard either full auditory feedback (Full), or only feedback from their partner (Other). Larger differences in partners' spontaneous rates of Solo performances were associated with larger asynchronies (less effective synchronization) during duet performance. Auditory feedback also influenced temporal coordination of duet performance: Pianists were more coordinated (smaller tone onset asynchronies and more mutual adaptation) during duet performances when self-generated auditory feedback aligned with partner-generated feedback (Unison) than when it did not (Round). Removal of self-feedback disrupted coordination (larger tone onset asynchronies) during Round performances only. Together, findings suggest that differences in partners' spontaneous rates of Solo performances, as well as differences in self- and partner-generated auditory feedback, influence temporal coordination of joint sensorimotor behaviors.
Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher; Møllegaard Jepsen, Jens Richardt; Durston, Sarah; Cantio, Cathriona; Glenthøj, Birte; Bilenberg, Niels
Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism are
Miller, Carol A.
Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…
Kwong, Tru E; Brachman, Kyle J
Relations among linguistic auditory processing, nonlinguistic auditory processing, spelling ability, and spelling strategy choice were examined. Sixty-three undergraduate students completed measures of auditory processing (one involving distinguishing similar tones, one involving distinguishing similar phonemes, and one involving selecting appropriate spellings for individual phonemes). Participants also completed a modified version of a standardized spelling test, and a secondary spelling test with retrospective strategy reports. Once testing was completed, participants were divided into phonological versus nonphonological spellers on the basis of the number of words they spelled using phonological strategies only. Results indicated a) moderate to strong positive correlations among the different auditory processing tasks in terms of reaction time, but not accuracy levels, and b) weak to moderate positive correlations between measures of linguistic auditory processing (phoneme distinction and phoneme spelling choice in the presence of foils) and spelling ability for phonological spellers, but not for nonphonological spellers. These results suggest a possible explanation for past contradictory research on auditory processing and spelling, which has been divided in terms of whether or not disabled spellers seemed to have poorer auditory processing than did typically developing spellers, and suggest implications for teaching spelling to children with good versus poor auditory processing abilities.
Bender, Stephan; Bluschke, Annet; Dippel, Gabriel; Rupp, André; Weisbrod, Matthias; Thomas, Christine
To investigate whether automatic auditory post-processing is deficient in patients with Alzheimer's disease and is related to sensory gating. Event-related potentials were recorded during a passive listening task to examine the automatic transient storage of auditory information (short click pairs). Patients with Alzheimer's disease were compared to a healthy age-matched control group. A young healthy control group was included to assess effects of physiological aging. A bilateral frontal negativity in combination with deep temporal positivity occurring 500 ms after stimulus offset was reduced in patients with Alzheimer's disease, but was unaffected by physiological aging. Its amplitude correlated with short-term memory capacity, but was independent of sensory gating in healthy elderly controls. Source analysis revealed a dipole pair in the anterior temporal lobes. Results suggest that auditory post-processing is deficient in Alzheimer's disease, but is not typically related to sensory gating. The deficit could neither be explained by physiological aging nor by problems in earlier stages of auditory perception. Correlations with short-term memory capacity and executive control tasks suggested an association with memory encoding and/or overall cognitive control deficits. An auditory late negative wave could represent a marker of auditory working memory encoding deficits in Alzheimer's disease. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Favrot, Sylvain Emmanuel
A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic, versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows full control of the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software was developed...
Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima
Introduction: The auditory system of HIV-positive children may have deficits at various levels, such as a high incidence of middle-ear problems that can cause hearing loss. Objective: The objective of this study is to characterize the performance of children infected with the Human Immunodeficiency Virus (HIV) on the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods: We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected with HIV, all using antiretroviral medication. Results: The children had abnormal auditory processing as verified by the SAPT and the Portuguese version of the SSW. On the SAPT, 60% of the children presented hearing impairment, and the memory test for verbal sounds showed the most errors (53.33%); on the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and auditory memory skills. Furthermore, there were more errors under background-noise conditions in both age groups; most errors occurred in the left ear in the 8-year-old group, with similar results in the 9-year-old group. Conclusion: The high incidence of hearing loss in children with HIV, and its comorbidity with several biological and environmental factors, indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits.
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Brink, D. van den; Brown, C.M.; Hagoort, P.
An event-related brain potential experiment was carried out to investigate the temporal relationship between lexical selection and the semantic integration in auditory sentence processing. Participants were presented with spoken sentences that ended with a word that was either semantically congruent
Boh, Bastiaan; Herholz, Sibylle C; Lappe, Claudia; Pantev, Christo
In the present study we investigated the capacity of the memory store underlying the mismatch negativity (MMN) response in musicians and nonmusicians for complex tone patterns. While previous studies have focused either on the kind of information that can be encoded or on the decay of the memory trace over time, we studied capacity in terms of the length of tone sequences, i.e., the number of individual tones that can be fully encoded and maintained. By means of magnetoencephalography (MEG) we recorded MMN responses to deviant tones that could occur at any position of standard tone patterns composed of four, six or eight tones during passive, distracted listening. Whereas there was a reliable MMN response to deviant tones in the four-tone pattern in both musicians and nonmusicians, only some individuals showed MMN responses to the longer patterns. This finding of a reliable capacity of the short-term auditory store underlying the MMN response is in line with estimates of a three- to five-item capacity of the short-term memory trace from behavioural studies, although pitch and contour complexity covaried with sequence length, which might have led to an underestimation of the reported capacity. Whereas there was a tendency for an enhancement of the pattern MMN in musicians compared to nonmusicians, a strong advantage for musicians could be shown in an accompanying behavioural task of detecting the deviants while attending to the stimuli for all pattern lengths, indicating that long-term musical training differentially affects the memory capacity of auditory short-term memory for complex tone patterns with and without attention. Also, a left-hemispheric lateralization of MMN responses in the six-tone pattern suggests that additional networks that help structure the patterns in the temporal domain might be recruited for demanding auditory processing in the pitch domain.
Bruin, N.M.W.J. de; Luijtelaar, E.L.J.M. van; Cools, A.R.; Ellenbroek, B.A.
RATIONALE: Auditory filtering disturbances, as measured in the sensory gating and prepulse inhibition (PPI) paradigms, have been linked to aberrant auditory information processing and sensory overload in schizophrenic patients. In both paradigms, the response to the second stimulus (S2) is
de Wit, Ellen; Visser-Bochane, Margot I.; Steenbergen, Bert; van Dijk, Pim; van der Schans, Cees P.; Luinge, Margreet R.
Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit specific to the auditory modality or as a…
Bailey, Frank S.; Yocum, Russell G.
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
Stevenson, Ryan A; Park, Sohee; Cochran, Channing; McIntosh, Lindsey G; Noel, Jean-Paul; Barense, Morgan D; Ferber, Susanne; Wallace, Mark T
Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation. Copyright © 2016 Elsevier B.V. All rights reserved.
Scheidt, Ryan E; Kale, Sushrut; Heinz, Michael G
Auditory-nerve fibers demonstrate dynamic response properties in that they adapt to rapid changes in sound level, both at the onset and offset of a sound. These dynamic response properties affect temporal coding of stimulus modulations that are perceptually relevant for many sounds such as speech and music. Temporal dynamics have been well characterized in auditory-nerve fibers from normal-hearing animals, but little is known about the effects of sensorineural hearing loss on these dynamics. This study examined the effects of noise-induced hearing loss on the temporal dynamics in auditory-nerve fiber responses from anesthetized chinchillas. Post-stimulus-time histograms were computed from responses to 50-ms tones presented at characteristic frequency and 30 dB above fiber threshold. Several response metrics related to temporal dynamics were computed from post-stimulus-time histograms and were compared between normal-hearing and noise-exposed animals. Results indicate that noise-exposed auditory-nerve fibers show significantly reduced response latency, increased onset response and percent adaptation, faster adaptation after onset, and slower recovery after offset. The decrease in response latency only occurred in noise-exposed fibers with significantly reduced frequency selectivity. These changes in temporal dynamics have important implications for temporal envelope coding in hearing-impaired ears, as well as for the design of dynamic compression algorithms for hearing aids.
Frissen, Ilja; Ziat, Mounia; Campion, Gianni; Hayward, Vincent; Guastavino, Catherine
In two experiments we investigated the effects of voluntary movements on temporal haptic perception. Measures of sensitivity (JND) and temporal alignment (PSS) were obtained from temporal order judgments made on intermodal auditory-haptic (Experiment 1) or intramodal haptic (Experiment 2) stimulus pairs under three movement conditions. In the baseline, static condition, the arm of the participants remained stationary. In the passive condition, the arm was displaced by a servo-controlled motorized device. In the active condition, the participants moved voluntarily. The auditory stimulus was a short, 500 Hz tone presented over headphones and the haptic stimulus was a brief suprathreshold force pulse applied to the tip of the index finger orthogonally to the finger movement. Active movement did not significantly affect discrimination sensitivity on the auditory-haptic stimulus pairs, whereas it significantly improved sensitivity in the case of the haptic stimulus pair, demonstrating a key role for motor command information in temporal sensitivity in the haptic system. Points of subjective simultaneity were by and large coincident with physical simultaneity, with one striking exception in the passive condition with the auditory-haptic stimulus pair. In the latter case, the haptic stimulus had to be presented 45 ms before the auditory stimulus in order to obtain subjective simultaneity. A model is proposed to explain the discrimination performance. Copyright © 2012 Elsevier B.V. All rights reserved.
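The JND and PSS measures described in this abstract are conventionally read off a psychometric function fitted to temporal order judgments. The sketch below (Python) illustrates that idea with hypothetical, noise-free data; the function names (`cumulative_gaussian`, `fit_toj`), the coarse grid search, and the parameter values echoing the 45 ms asymmetry are illustrative assumptions, not the authors' actual analysis.

```python
import math

def cumulative_gaussian(soa, mu, sigma):
    # Probability of responding "auditory first" at a given
    # stimulus-onset asynchrony (SOA, in ms; positive = haptic leads).
    return 0.5 * (1.0 + math.erf((soa - mu) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, p_first):
    # Coarse least-squares grid search over mu and sigma -- adequate
    # for illustration, where a real analysis would use a proper fitter.
    best = (float("inf"), 0.0, 1.0)
    for mu in range(-100, 101):          # candidate PSS values, ms
        for sigma in range(1, 101):      # candidate spread values, ms
            err = sum((cumulative_gaussian(s, mu, sigma) - p) ** 2
                      for s, p in zip(soas, p_first))
            if err < best[0]:
                best = (err, float(mu), float(sigma))
    _, mu, sigma = best
    pss = mu                  # 50% point: point of subjective simultaneity
    jnd = 0.6745 * sigma      # one common convention: 75%-minus-50% point
    return pss, jnd

# Hypothetical observer with PSS = 45 ms and sigma = 30 ms, values
# chosen only to echo the passive-condition asymmetry in the abstract.
soas = list(range(-120, 121, 30))
p_first = [cumulative_gaussian(s, 45.0, 30.0) for s in soas]
pss, jnd = fit_toj(soas, p_first)
print(round(pss), round(jnd, 1))   # 45 20.2
```

Because the synthetic data are noise-free and the true parameters lie on the grid, the fit recovers them exactly; real response proportions would scatter around the fitted curve.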
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.
Foss-Feig, Jennifer H; Schauder, Kimberly B; Key, Alexandra P; Wallace, Mark T; Stone, Wendy L
Sensory processing alterations are highly prevalent in autism spectrum disorder (ASD). Neurobiologically-based theories of ASD propose that abnormalities in the processing of temporal aspects of sensory input could underlie core symptoms of ASD. For example, rapid auditory temporal processing is critical for speech perception, and language difficulties are central to the social communication deficits defining the disorder. This study assessed visual and auditory temporal processing abilities and tested their relation to core ASD symptoms. 53 children (26 ASD, 27 TD) completed visual and auditory psychophysical gap detection tasks to measure gap detection thresholds (i.e., the minimum interval between sequential stimuli needed for individuals to perceive an interruption between the stimuli) in each domain. Children were also administered standardized language assessments such that the relation between individual differences in auditory gap detection thresholds and degree of language and communication difficulties among children with ASD could be assessed. Children with ASD had substantially higher auditory gap detection thresholds compared to children with TD, and auditory gap detection thresholds were correlated significantly with several measures of language processing in this population. No group differences were observed in visual temporal processing. Results indicate a domain-specific impairment in rapid auditory temporal processing in ASD that is associated with greater difficulties in language processing. Findings provide qualified support for temporal processing theories of ASD and highlight the need for future research testing the nature, extent, and universality of auditory temporal processing deficits in this population. Autism Res 2017, 10: 1845-1856. © 2017 International Society for Autism Research, Wiley Periodicals, Inc. Sensory symptoms are common in ASD. Temporal processing alterations are often implicated, but understudied. The ability to
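Gap detection thresholds of the kind this abstract defines are typically estimated with an adaptive staircase rather than by testing every gap length. The sketch below (Python) shows one standard option, a 2-down/1-up staircase that converges on the ~70.7%-correct point; the function name `gap_threshold`, the step size, and the idealized deterministic listener are illustrative assumptions, not the procedure this study actually used.

```python
def gap_threshold(detects, start=20.0, step=2.0, n_reversals=8, floor=0.5):
    # 2-down/1-up adaptive staircase: shorten the gap after two
    # consecutive detections, lengthen it after any miss. The threshold
    # estimate is the mean gap length at the reversal points.
    level, n_correct, last_dir, reversals = start, 0, None, []
    while len(reversals) < n_reversals:
        if detects(level):
            n_correct += 1
            if n_correct < 2:
                continue                   # need two in a row to go down
            n_correct, direction = 0, -1   # two correct: decrease gap
        else:
            n_correct, direction = 0, +1   # miss: increase gap
        if last_dir is not None and direction != last_dir:
            reversals.append(level)        # direction change = reversal
        last_dir = direction
        level = max(floor, level + direction * step)
    return sum(reversals) / len(reversals)

# Idealized listener who detects any gap of 6 ms or longer; a real
# listener would respond probabilistically near threshold.
estimate = gap_threshold(lambda gap_ms: gap_ms >= 6.0)
print(estimate)   # 5.0 (true 6 ms threshold bracketed by the 2 ms step)
```

With this deterministic listener the track simply oscillates between 4 and 6 ms once it reaches threshold, so the reversal average lands midway between the two levels.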
Kühnis, Jürg; Elmer, Stefan; Meyer, Martin; Jäncke, Lutz
Here, we applied a multi-feature mismatch negativity (MMN) paradigm in order to systematically investigate the neuronal representation of vowels and temporally manipulated CV syllables in a homogeneous sample of string players and non-musicians. Based on previous work indicating an increased sensitivity of the musicians' auditory system, we expected to find that musically trained subjects will elicit increased MMN amplitudes in response to temporal variations in CV syllables, namely voice-onset time (VOT) and duration. In addition, since different vowels are principally distinguished by means of frequency information and musicians are superior in extracting tonal (and thus frequency) information from an acoustic stream, we also expected to provide evidence for an increased auditory representation of vowels in the experts. In line with our hypothesis, we could show that musicians are not only advantaged in the pre-attentive encoding of temporal speech cues, but most notably also in processing vowels. Additional "just noticeable difference" measurements suggested that the musicians' perceptual advantage in encoding speech sounds was more likely driven by the generic constitutional properties of a highly trained auditory system, rather than by its specialisation for speech representations per se. These results shed light on the origin of the often reported advantage of musicians in processing a variety of speech sounds. Copyright © 2013 Elsevier Ltd. All rights reserved.
Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto
Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure of recognition of a global visual Gestalt, like a visual scene or complex objects, consisting of local elements. In this study we investigated to what extent this deficit should be understood as specific to the visual domain or whether it should be seen as defective Gestalt processing per se. To examine if simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both sufficiently explaining the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain obviously uses independent mechanisms for visual and for auditory Gestalt perception. Copyright © 2017 Elsevier Ltd. All rights reserved.
Several studies using visual objects defined by luminance have reported that the auditory event must be presented 30 to 40 ms after the visual stimulus to perceive audiovisual synchrony. In the present study, we used visual objects defined only by their binocular disparity. We measured the optimal latency between visual and auditory stimuli for the perception of synchrony using a method introduced by Moutoussis & Zeki (1997). Visual stimuli were defined either by luminance and disparity or by disparity only. They moved either back and forth between 6 and 12 arcmin or from left to right at a constant disparity of 9 arcmin. This visual modulation was presented together with an amplitude-modulated 500 Hz tone. Both modulations were sinusoidal (frequency: 0.7 Hz). We found no difference between 2D and 3D motion for luminance stimuli: a 40 ms auditory lag was necessary for perceived synchrony. Surprisingly, even though stereopsis is often thought to be slow, we found a similar optimal latency in the disparity 3D motion condition (55 ms). However, when participants had to judge simultaneity for disparity 2D motion stimuli, it led to larger latencies (170 ms), suggesting that stereo motion detectors are poorly suited to track 2D motion.
Kenneth Stuart Henry
While changes in cochlear frequency tuning are thought to play an important role in the perceptual difficulties of people with sensorineural hearing loss (SNHL), the possible role of temporal processing deficits remains less clear. Our knowledge of temporal envelope coding in the impaired cochlea is limited to two studies that examined auditory-nerve fiber responses to narrowband amplitude modulated stimuli. In the present study, we used Wiener-kernel analyses of auditory-nerve fiber responses to broadband Gaussian noise in anesthetized chinchillas to quantify changes in temporal envelope coding with noise-induced SNHL. Temporal modulation transfer functions (TMTFs) and temporal windows of sensitivity to acoustic stimulation were computed from 2nd-order Wiener kernels and analyzed to estimate the temporal precision, amplitude, and latency of envelope coding. Noise overexposure was associated with slower (less negative) TMTF roll-off with increasing modulation frequency and reduced temporal window duration. The results show that at equal stimulus sensation level, SNHL increases the temporal precision of envelope coding by 20-30%. Furthermore, SNHL increased the amplitude of envelope coding by 50% in fibers with CFs from 1-2 kHz and decreased mean response latency by 0.4 ms. While a previous study of envelope coding demonstrated a similar increase in response amplitude, the present study is the first to show enhanced temporal precision. This new finding may relate to the use of a more complex stimulus with broad frequency bandwidth and a dynamic temporal envelope. Exaggerated neural coding of fast envelope modulations may contribute to perceptual difficulties in people with SNHL by acting as a distraction from more relevant acoustic cues, especially in fluctuating background noise. Finally, the results underscore the value of studying sensory systems with more natural, real-world stimuli.
Koravand, Amineh; Jutras, Benoit
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed
Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high pass and low pass noise) in nine perceived positions. Working memory capacity was evaluated using the non-word repetition, and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects in lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, 133 ms) during three minutes to induce 'Lag Adaptation'. They then judged the simultaneity between motor sensation and vocal sound given feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
Adam, Ruth; Noppeney, Uta
Objects in our natural environment generate signals in multiple sensory modalities. This fMRI study investigated the influence of prior task-irrelevant auditory information on visually-evoked category-selective activations in the ventral occipito-temporal cortex. Subjects categorized pictures as landmarks or animal faces, while ignoring the preceding congruent or incongruent sound. Behaviorally, subjects responded slower to incongruent than congruent stimuli. At the neural level, the lateral and medial prefrontal cortices showed increased activations for incongruent relative to congruent stimuli consistent with their role in response selection. In contrast, the parahippocampal gyri combined visual and auditory information additively: activation was greater for visual landmarks than animal faces and landmark-related sounds than animal vocalizations resulting in increased parahippocampal selectivity for congruent audiovisual landmarks. Effective connectivity analyses showed that this amplification of visual landmark-selectivity was mediated by increased negative coupling of the parahippocampal gyrus with the superior temporal sulcus for congruent stimuli. Thus, task-irrelevant auditory information influences visual object categorization at two stages. In the ventral occipito-temporal cortex auditory and visual category information are combined additively to sharpen visual category-selective responses. In the left inferior frontal sulcus, as indexed by a significant incongruency effect, visual and auditory category information are integrated interactively for response selection. Copyright 2010 Elsevier Inc. All rights reserved.
Processing and Sound Localization: Temporal precision of neural firing is also involved in binaural processing and localization of sound in space. … to assess several important and potentially vulnerable aspects of auditory processing of complex sounds. These functions include the precise cod…
Alexandra P. Key
Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation and plasticity, as well as the effects of sensory deprivation, and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.
Poelmans, Hanne; Luts, Heleen; Vandermosten, Maaike; Boets, Bart; Ghesquière, Pol; Wouters, Jan
Speech intelligibility is strongly influenced by the ability to process temporal modulations. It is hypothesized that in dyslexia, deficient processing of rapidly changing auditory information underlies a deficient development of phonological representations, causing reading and spelling problems. Low-frequency modulations between 4 and 20 Hz correspond to the processing rate of important phonological segments (syllables and phonemes, respectively) in speech and therefore provide a bridge between low-level auditory and phonological processing. In the present study, temporal modulation processing was investigated by auditory steady state responses (ASSRs) in normal-reading and dyslexic adults. Multichannel ASSRs were recorded in normal-reading and dyslexic adults in response to speech-weighted noise stimuli amplitude modulated at 80, 20, and 4 Hz. The 80 Hz modulation is known to be primarily generated by the brainstem, whereas the 20 and 4 Hz modulations are mainly generated in the cortex. Furthermore, the 20 and 4 Hz modulations provide an objective auditory performance measure related to phonemic- and syllabic-rate processing. In addition to neurophysiological measures, psychophysical tests of speech-in-noise perception and phonological awareness were assessed. On the basis of response strength and phase coherence measures, normal-reading and dyslexic participants showed similar processing at the brainstem level. At the cortical level of the auditory system, dyslexic subjects demonstrated deviant phonemic-rate responses compared with normal readers, whereas no group differences were found for the syllabic rate. Furthermore, a relationship between phonemic-rate ASSRs and psychophysical tests of speech-in-noise perception and phonological awareness was obtained. The results suggest reduced cortical processing for phonemic-rate modulations in dyslexic adults, presumably resulting in limited integration of temporal information in the dorsal phonological pathway.
Steinbrink, Claudia; Zimmer, Karin; Lachmann, Thomas; Dirichs, Martin; Kammer, Thomas
In a longitudinal study, auditory and visual temporal order thresholds (TOTs) were investigated in primary school children (N = 236; mean age at first data point = 6;7) at the beginning of Grade 1 and the end of Grade 2 to test whether rapid temporal processing abilities predict reading and spelling at the end of Grades 1 and 2. Auditory and…
Georgiev, Dejan; Jahanshahi, Marjan; Dreo, Jurij; Čuš, Anja; Pirtošek, Zvezdan; Repovš, Grega
Parkinson's disease (PD) patients show signs of cognitive impairment, such as executive dysfunction, working memory problems and attentional disturbances, even in the early stages of the disease. Though motor symptoms of the disease are often successfully addressed by dopaminergic medication, it still remains unclear, how dopaminergic therapy affects cognitive function. The main objective of this study was to assess the effect of dopaminergic medication on visual and auditory attentional processing. 14 PD patients and 13 matched healthy controls performed a three-stimulus auditory and visual oddball task while their EEG was recorded. The patients performed the task twice, once on- and once off-medication. While the results showed no significant differences between PD patients and controls, they did reveal a significant increase in P3 amplitude on- vs. off-medication specific to processing of auditory distractors and no other stimuli. These results indicate significant effect of dopaminergic therapy on processing of distracting auditory stimuli. With a lack of between group differences the effect could reflect either 1) improved recruitment of attentional resources to auditory distractors; 2) reduced ability for cognitive inhibition of auditory distractors; 3) increased response to distractor stimuli resulting in impaired cognitive performance; or 4) hindered ability to discriminate between auditory distractors and targets. Further studies are needed to differentiate between these possibilities. Copyright © 2015 Elsevier B.V. All rights reserved.
Lucker, Jay R
Many audiologists believe that auditory processing testing must be carried out in a soundproof booth. This expectation is especially a problem in places such as elementary schools. Research comparing pure-tone thresholds obtained in sound booths with those obtained in quiet test environments outside of these booths does not support that belief. Auditory processing testing is generally carried out at above-threshold levels, and therefore may be even less likely to require a soundproof booth. The present study was carried out to compare test results in soundproof booths versus quiet rooms. The purpose of this study was to determine whether auditory processing tests can be administered in a quiet test room rather than in the soundproof test suite. The outcomes would indicate whether audiologists can provide auditory processing testing for children under various test conditions, including quiet rooms at their school. A battery of auditory processing tests was administered at a test level equivalent to 50 dB HL through headphones. The same equipment was used for testing in both locations. Twenty participants identified with normal hearing were included in this study, ten having no auditory processing concerns and ten exhibiting auditory processing problems. All participants underwent a battery of tests, both inside the test booth and outside the booth in a quiet room. Order of testing (inside versus outside) was counterbalanced. Participants were first determined to have normal hearing thresholds for tones and speech. Auditory processing tests were recorded and presented from an HP EliteBook laptop computer with noise-canceling headphones attached to a y-cord that not only presented the test stimuli to the participants but also allowed monitor headphones to be worn by the evaluator. The same equipment was used inside as well as outside the booth. No differences were found for each auditory processing measure as a function of the test setting or the order in which testing was done.
We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal sensory and voice sounds under DAF with an adaptation technique. Participants read some sentences with specific delay times of DAF (0, 30, 75, 120 ms) during three minutes to induce ‘Lag Adaptation’. After the adaptation, they then judged the simultaneity between motor sensation and vocal sound given feedback when producing a simple voice sound, but not speech. We found that speech production with lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. These findings suggest vocalization is finely tuned by the temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
Carroll, Christine A.; Boggs, Jennifer; O'Donnell, Brian F.; Shekhar, Anantha; Hetrick, William P.
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the…
Laurent, Agathe; Arzimanoglou, Alexis; Panagiotakaki, Eleni; Sfaello, Ignacio; Kahane, Philippe; Ryvlin, Philippe; Hirsch, Edouard; de Schonen, Scania
A high rate of abnormal social behavioural traits or perceptual deficits is observed in children with unilateral temporal lobe epilepsy. In the present study, perception of auditory and visual social signals, carried by faces and voices, was evaluated in children or adolescents with temporal lobe epilepsy. We prospectively investigated a sample of 62 children with focal non-idiopathic epilepsy early in the course of the disorder. The present analysis included 39 children with a confirmed diagnosis of temporal lobe epilepsy. Seventy-two control participants, distributed across 10 age groups, served as the comparison group. Our socio-perceptual evaluation protocol comprised three socio-visual tasks (face identity, facial emotion and gaze direction recognition), two socio-auditory tasks (voice identity and emotional prosody recognition), and three control tasks (lip reading, geometrical pattern and linguistic intonation recognition). All 39 patients also underwent a neuropsychological examination. As a group, children with temporal lobe epilepsy performed at a significantly lower level compared to the control group with regards to recognition of facial identity, direction of eye gaze, and emotional facial expressions. We found no relationship between the type of visual deficit and age at first seizure, duration of epilepsy, or the epilepsy-affected cerebral hemisphere. Deficits in socio-perceptual tasks could be found independently of the presence of deficits in visual or auditory episodic memory, visual non-facial pattern processing (control tasks), or speech perception. A normal FSIQ did not exempt some of the patients from an underlying deficit in some of the socio-perceptual tasks. Temporal lobe epilepsy not only impairs development of emotion recognition, but can also impair development of perception of other socio-perceptual signals in children with or without intellectual deficiency. Prospective studies need to be designed to evaluate the results of appropriate re
Soares, Aparecido José Couto
Full Text Available Introduction: It is currently accepted that individuals with reading and writing alterations may present delays in the development of listening skills, which may interfere with the learning process. Listening skills can be assessed behaviorally, through central auditory processing (CAP) tests, or electrophysiologically, notably via long-latency auditory evoked potentials (LLAEP). Using the LLAEP as a complementary assessment of individuals with reading and writing alterations can provide important data both for further characterization of the alterations and for the therapeutic guidance of this population. Objective: To characterize the CAP and the LLAEP in children with reading and writing alterations. Method: Research approved by the Institution's Ethics Commission under nº 305/10. The assessment of CAP and LLAEP was performed in 12 children aged between 8 and 12 years (average of 10.6 years), with reading and writing alterations confirmed by specific evaluation. Results: The most frequently altered CAP skills were temporal ordering and figure-ground for linguistic sounds. Altered results were found in the P300 and in the MMN. Conclusion: The individuals with reading and writing alterations performed below expectation on CAP tests. The MMN allowed a better characterization of the auditory function of this population. There was evidence of an association between the CAP results and the alterations of the LLAEP.
Mathias, Brian; Gehring, William J; Palmer, Caroline
The current study investigated the relationship between planning processes and feedback monitoring during music performance, a complex task in which performers prepare upcoming events while monitoring their sensory outcomes. Theories of action planning in auditory-motor production tasks propose that the planning of future events co-occurs with the perception of auditory feedback. This study investigated the neural correlates of planning and feedback monitoring by manipulating the contents of auditory feedback during music performance. Pianists memorized and performed melodies at a cued tempo in a synchronization-continuation task while the EEG was recorded. During performance, auditory feedback associated with single melody tones was occasionally substituted with tones corresponding to future (next), present (current), or past (previous) melody tones. Only future-oriented altered feedback disrupted behavior: Future-oriented feedback caused pianists to slow down on the subsequent tone more than past-oriented feedback, and amplitudes of the auditory N1 potential elicited by the tone immediately following the altered feedback were larger for future-oriented than for past-oriented or noncontextual (unrelated) altered feedback; larger N1 amplitudes were associated with greater slowing following altered feedback in the future condition only. Feedback-related negativities were elicited in all altered feedback conditions. In sum, behavioral and neural evidence suggests that future-oriented feedback disrupts performance more than past-oriented feedback, consistent with planning theories that posit similarity-based interference between feedback and planning contents. Neural sensory processing of auditory feedback, reflected in the N1 ERP, may serve as a marker for temporal disruption caused by altered auditory feedback in auditory-motor production tasks. © 2016 Society for Psychophysiological Research.
Georgiou, George K; Papadopoulos, Timothy C; Zarouna, Elena; Parrila, Rauno
The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological awareness, orthographic processing, short-term memory, rapid automatized naming, auditory and visual processing, and reading fluency to 21 Grade 6 children with dyslexia, 21 chronological age-matched controls and 20 Grade 3 reading age-matched controls. The results indicated that the children with dyslexia did not experience auditory processing deficits, but about half of them showed visual processing deficits. Both orthographic processing and rapid automatized naming deficits were associated with dyslexia in our sample, but it is less clear that they were associated with visual processing deficits. Copyright © 2012 John Wiley & Sons, Ltd.
Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies
Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. Copyright © 2015 Elsevier Ltd. All rights reserved.
Caroline Nunes Rocha-Muniz
Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test and Pitch Pattern Sequencing test. RESULTS: Lateralization effects were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that, on all tests, children from the APD and SLI groups had significantly poorer performance than the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study demonstrated, in children with SLI, inefficient processing of essential sound components and a lateralization effect. These findings may indicate that the neural processes required for auditory processing differ between auditory processing and speech disorders.
…reflection delays and enhances the test reflection for large delays. Employing a 200-ms-long broadband noise burst as input signal, the critical delay separating these two binaural phenomena was found to be 7–10 ms. It was suggested that the critical delay refers to a temporal window that is employed…, resulting in a critical delay of about 2–3 ms for 20-ms-long stimuli. Hence, for very short stimuli the temporal window or critical delay exhibits values similar to the auditory temporal resolution as, for instance, observed in gap-detection tasks. It is suggested that the larger critical delay observed…
Full Text Available In the premature infant, somatosensory and visual stimuli trigger an immature electroencephalographic (EEG) pattern, "delta-brushes," in the corresponding sensory cortical areas. Whether auditory stimuli evoke delta-brushes in the premature auditory cortex has not been reported. Here, responses to auditory stimuli were studied in 46 premature infants without neurologic risk, aged 31 to 38 postmenstrual weeks (PMW), during routine EEG recording. Stimuli consisted of either low-volume technogenic "clicks" near the background noise level of the neonatal care unit, or a human voice at conversational sound level. Stimuli were administered pseudo-randomly during quiet and active sleep. In another protocol, the cortical response to a composite stimulus ("click" and voice) was manually triggered during EEG hypoactive periods of quiet sleep. Cortical responses were analyzed by event detection, power frequency analysis and stimulus-locked averaging. Before 34 PMW, both voice and "click" stimuli evoked cortical responses with similar frequency-power topographic characteristics, namely a temporal negative slow-wave and rapid oscillations similar to spontaneous delta-brushes. Responses to composite stimuli also showed a maximal frequency-power increase in temporal areas before 35 PMW. From 34 PMW, the topography of responses in quiet sleep differed for "click" and voice stimuli: responses to "clicks" became diffuse but responses to voice remained limited to temporal areas. After the age of 35 PMW, auditory evoked delta-brushes progressively disappeared and were replaced by a low-amplitude response in the same location. Our data show that auditory stimuli mimicking ambient sounds efficiently evoke delta-brushes in temporal areas in the premature infant before 35 PMW. Along with findings in other sensory modalities (visual and somatosensory), these findings suggest that sensory-driven delta-brushes represent a ubiquitous feature of the human sensory cortex.
Jung, JeYoung; Kim, Sunmi; Cho, Hyesuk; Nam, Kichun
This study aims to provide a convergent understanding of the neural basis of auditory word processing efficiency using multimodal imaging. We investigated the structural and functional correlates of word processing efficiency in healthy individuals. We acquired two structural imaging modalities (T1-weighted imaging and diffusion tensor imaging) and functional magnetic resonance imaging (fMRI) during auditory word processing (phonological and semantic tasks). Our results showed that better phonological performance was predicted by greater thalamus activity. In contrast, better semantic performance was associated with less activation in the left posterior middle temporal gyrus (pMTG), supporting the neural efficiency hypothesis that better task performance requires less brain activation. Furthermore, our network analysis revealed that a semantic network including the left anterior temporal lobe (ATL), dorsolateral prefrontal cortex (DLPFC) and pMTG was correlated with semantic efficiency; notably, this network operated in a neurally efficient manner during auditory word processing. Structurally, the DLPFC and cingulum contributed to word processing efficiency, and the parietal cortex also showed a significant association with it. Our results demonstrated that the two features of word processing efficiency, phonology and semantics, are supported by different brain regions and, importantly, that the way each region serves efficiency differs according to the feature of word processing. Our findings suggest that word processing efficiency is achieved through the structural and functional collaboration of multiple brain regions involved in language and general cognitive function.
Wang, Grace I; Delgutte, Bertrand
The spatio-temporal pattern of auditory nerve (AN) activity, representing the relative timing of spikes across the tonotopic axis, contains cues to perceptual features of sounds such as pitch, loudness, timbre, and spatial location. These spatio-temporal cues may be extracted by neurons in the cochlear nucleus (CN) that are sensitive to relative timing of inputs from AN fibers innervating different cochlear regions. One possible mechanism for this extraction is "cross-frequency" coincidence detection (CD), in which a central neuron converts the degree of coincidence across the tonotopic axis into a rate code by preferentially firing when its AN inputs discharge in synchrony. We used Huffman stimuli (Carney LH. J Neurophysiol 64: 437-456, 1990), which have a flat power spectrum but differ in their phase spectra, to systematically manipulate relative timing of spikes across tonotopically neighboring AN fibers without changing overall firing rates. We compared responses of CN units to Huffman stimuli with responses of model CD cells operating on spatio-temporal patterns of AN activity derived from measured responses of AN fibers with the principle of cochlear scaling invariance. We used the maximum likelihood method to determine the CD model cell parameters most likely to produce the measured CN unit responses, and thereby could distinguish units behaving like cross-frequency CD cells from those consistent with same-frequency CD (in which all inputs would originate from the same tonotopic location). We find that certain CN unit types, especially those associated with globular bushy cells, have responses consistent with cross-frequency CD cells. A possible functional role of a cross-frequency CD mechanism in these CN units is to increase the dynamic range of binaural neurons that process cues for sound localization.
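The cross-frequency coincidence-detection mechanism described above can be sketched in a few lines: a model cell responds only when all of its auditory nerve inputs spike within a short window of each other. This is a deliberately crude illustration of the idea, not the authors' model; the spike times and window size are invented for the example.

```python
def coincidences(trains, window):
    """Times at which every input fiber spikes within `window` (ms) of a
    reference spike on the first fiber: a crude coincidence detector."""
    out = []
    for t in trains[0]:
        if all(any(abs(t - s) <= window for s in train) for train in trains[1:]):
            out.append(t)
    return out

# Three tonotopically neighboring fibers; illustrative spike times in ms.
fiber_a = [1.0, 5.0, 9.0, 14.0]
fiber_b = [1.2, 9.1, 20.0]
fiber_c = [0.9, 9.3, 14.8]
print(coincidences([fiber_a, fiber_b, fiber_c], window=0.5))  # [1.0, 9.0]
```

Shrinking the window makes the detector more selective: the same inputs with `window=0.05` yield no coincidences, which is how relative spike timing across the tonotopic axis is converted into a rate code.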
Edwards, Erik; Chang, Edward F.
Given recent interest in syllabic rates (~2-5 Hz) for speech processing, we review the perception of “fluctuation” range (~1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (~2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. PMID:24035819
Dias, Karin Ziliotto; Jutras, Benoît; Acrani, Isabela Olszanski; Pereira, Liliane Desgualdo
The aim of the present study was to assess auditory temporal resolution ability in individuals with central auditory processing disorders, to examine the maturation effect, and to investigate the relationship between performance on a temporal resolution test and performance on other central auditory tests. Participants were divided into two groups: 131 with Central Auditory Processing Disorder and 94 with normal auditory processing. They had pure-tone air-conduction thresholds no poorer than 15 dB HL bilaterally, normal admittance measures and presence of acoustic reflexes. Also, they were assessed with a central auditory test battery. Participants who failed at least one test were included in the Central Auditory Processing Disorder group, and those in the control group obtained normal performance on all tests. Following the auditory processing assessment, the Random Gap Detection Test was administered to the participants. A three-way ANOVA was performed. Correlation analyses were also done between the four Random Gap Detection Test subtests as well as between Random Gap Detection Test data and the other auditory processing test results. There was a significant difference between the age-group performances in children with and without Central Auditory Processing Disorder. Also, 48% of children with Central Auditory Processing Disorder failed the Random Gap Detection Test, and the percentage decreased as a function of age. The highest percentage (86%) was found in the 5-6 year-old children. Furthermore, results revealed a strong significant correlation between the four Random Gap Detection Test subtests. There was a modest correlation between the Random Gap Detection Test results and the dichotic listening tests. No significant correlation was observed between the Random Gap Detection Test data and the results of the other tests in the battery. The Random Gap Detection Test should not be administered to children younger than 7 years old because of the high failure rate observed below that age, which reflects still-maturing temporal resolution.
Bao, Yan; Szymaszek, Aneta; Wang, Xiaoying; Oron, Anna; Pöppel, Ernst; Szelag, Elzbieta
The close relationship between temporal perception and speech processing is well established. The present study focused on the specific question whether the speech environment could influence temporal order perception in subjects whose language backgrounds are distinctively different, i.e., Chinese (tonal language) vs. Polish (non-tonal language). Temporal order thresholds were measured for both monaurally presented clicks and binaurally presented tone pairs. Whereas the click experiment showed similar order thresholds for the two language groups, the experiment with tone pairs resulted in different observations: while Chinese demonstrated better performance in discriminating the temporal order of two "close frequency" tone pairs (600 Hz and 1200 Hz), Polish subjects showed a reversed pattern, i.e., better performance for "distant frequency" tone pairs (400 Hz and 3000 Hz). These results indicate on the one hand a common temporal mechanism for perceiving the order of two monaurally presented stimuli, and on the other hand neuronal plasticity for perceiving the order of frequency-related auditory stimuli. We conclude that the auditory brain is modified with respect to temporal processing by long-term exposure to a tonal or a non-tonal language. As a consequence of such an exposure different cognitive modes of operation (analytic vs. holistic) are selected: the analytic mode is adopted for "distant frequency" tone pairs in Chinese and for "close frequency" tone pairs in Polish subjects, whereas the holistic mode is selected for "close frequency" tone pairs in Chinese and for "distant frequency" tone pairs in Polish subjects, reflecting a double dissociation of function. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
Neves, Ivone Ferreira; Schochat, Eliane
Auditory processing maturation in school children with and without learning difficulties. To verify response improvement with increasing age of the auditory processing skills in school children aged eight to ten years, with and without learning difficulties, and to perform a comparative study. Eighty-nine children without learning complaints (Group I) and 60 children with learning difficulties (Group II) were assessed. The auditory processing tests used were: Pediatric Speech Intelligibility (PSI), Speech in Noise, Dichotic Non-Verbal (DNV) and Staggered Spondaic Word (SSW). A better performance was observed for Group I between the ages of eight and ten in all of the tests used. However, the observed differences were statistically significant only for PSI and SSW. For Group II, a better performance was also observed with increasing age, with statistically significant differences for all of the tests used. Comparing the results between Groups I and II, a better performance was verified for children with no learning difficulties, in the three age groups, in PSI, DNV and SSW. A statistically significant improvement in the auditory processing responses with increasing age was verified, between the ages of eight and ten, in children with and without learning difficulties. In the comparative study, children with learning difficulties presented a lower performance in all of the tests used in the three age groups, suggesting, for this group, a delay in the maturation of auditory processing skills.
Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko
In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether or not theta phase synchronization is able to select the to-be-manipulated modalities is uncertain. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both two manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization seems to flexibly connect the brain areas that manipulate WM.
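The theta phase synchronization reported in the Kawasaki et al. study is commonly quantified with a phase-locking value (PLV): the magnitude of the average unit vector of the phase differences between two signals. A minimal sketch on synthetic phase series; the lag and jitter values below are invented for illustration, not taken from the study.

```python
import cmath
import math
import random

def plv(phases_a, phases_b):
    """Phase-locking value: magnitude of the mean unit vector of the
    phase differences. 1.0 = constant phase relation, ~0 = none."""
    vectors = [cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b)]
    return abs(sum(vectors)) / len(vectors)

random.seed(0)
n = 1000
# "Frontal" theta phases, uniform over the circle.
frontal = [random.uniform(-math.pi, math.pi) for _ in range(n)]
# "Temporal" phases locked to frontal with a fixed lag plus small jitter.
temporal_locked = [p + 0.5 + random.gauss(0, 0.2) for p in frontal]
# Independent phases: no consistent relation.
parietal_free = [random.uniform(-math.pi, math.pi) for _ in range(n)]

print(round(plv(frontal, temporal_locked), 2))  # close to 1
print(round(plv(frontal, parietal_free), 2))    # close to 0
```

In EEG practice the phases would come from a wavelet or Hilbert transform of band-passed theta signals, and PLV would be computed across trials per time-frequency point; the core statistic is the same.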
Population-wide inter-spike interval distributions are constructed by summing together intervals from the observed responses of many single Type I auditory nerve fibers. Features in such distributions correspond closely with pitches that are heard by human listeners. The most common all-order interval present in the auditory nerve array almost invariably corresponds to the pitch frequency, whereas the relative fraction of pitch-related intervals amongst all others qualitatively corresponds to the strength of the pitch. Consequently, many diverse aspects of pitch perception are explained in terms of such temporal representations. Similar stimulus-driven temporal discharge patterns are observed in major neuronal populations of the cochlear nucleus. Population-interval distributions constitute an alternative time-domain strategy for representing sensory information that complements spatially organized sensory maps. Similar autocorrelation-like representations are possible in other sensory systems, in which neural discharges are time-locked to stimulus waveforms.
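The all-order interval distribution described above can be sketched by pooling every later-minus-earlier spike-time difference and histogramming; the most common interval then estimates the pitch period. The spike train below is a toy example, perfectly phase-locked to a 100 Hz tone, not recorded data.

```python
from collections import Counter

def all_order_intervals(spike_times_ms, max_interval_ms):
    """All-order inter-spike intervals: every later spike minus every
    earlier spike, kept if no longer than max_interval_ms."""
    return [t2 - t1
            for i, t1 in enumerate(spike_times_ms)
            for t2 in spike_times_ms[i + 1:]
            if t2 - t1 <= max_interval_ms]

# Toy spike train phase-locked to a 100 Hz tone (10 ms period).
spikes_ms = [0.0, 10.0, 20.0, 30.0, 40.0, 50.0]
histogram = Counter(all_order_intervals(spikes_ms, max_interval_ms=30.0))
period_ms, count = histogram.most_common(1)[0]
print(period_ms)  # 10.0 -> the most common interval matches the pitch period
```

In the population version, intervals from many fibers across the tonotopic array are pooled into one histogram, and the fraction of pitch-related intervals among all intervals gives a rough pitch-strength measure.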
Tun, Patricia A; Williams, Victoria A; Small, Brent J; Hafter, Ervin R
To briefly summarize existing data on effects of aging on auditory processing and cognition. A narrative review summarized previously reported data on age-related changes in auditory processing and in cognitive processes with a focus on spoken language comprehension and memory. In addition, recent data on effects of lifestyle engagement on cognitive processes are reviewed. There is substantial evidence for age-related declines in both auditory processes and cognitive abilities. Accumulating evidence supports the idea that the perceptual burden associated with hearing loss impacts the processing resources available for good comprehension and memory for spoken language, particularly in older adults with limited resources. However, many language abilities are well preserved in old age, and there is considerable variability among individuals in cognitive performance across the life span. The authors discuss how lifestyle factors and socioemotional engagement can help to offset declining abilities. It is clear that spoken language processing in adulthood and old age is affected by changes in perceptual, cognitive, and socioemotional processes as well as by interactions among these changes. Recommendations for further research include studying speech comprehension in complex conditions, including meaningful connected spoken language, and tailoring clinical interventions based on patients' auditory processing and cognitive abilities along with their individual socioemotional demands.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote reflection on the importance of speech therapy in stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis considered the auditory ability deficits identified in the first auditory processing test, held on April 30, 2002, compared with the new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the disordered auditory abilities, in a…
O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C
The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. Copyright © 2015 the authors 0270-6474/15/357256-08$15.00/0.
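The temporal-coherence computation that SFG stimuli probe can be illustrated by correlating the on/off time courses of frequency channels: figure components share a common gating pattern and are therefore mutually correlated, while background components are statistically independent. A minimal sketch; the channel counts and on-probability below are arbitrary illustration values, not the study's stimulus parameters.

```python
import random
import statistics

def pearson(x, y):
    """Pearson correlation between two equal-length numeric sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

random.seed(1)
n_frames = 200
density = 0.3  # probability a channel is "on" in a given time frame

# Background channels: each frequency gated independently -> incoherent.
background = [[int(random.random() < density) for _ in range(n_frames)]
              for _ in range(6)]
# Figure channels: all share one gating pattern -> temporally coherent.
gate = [int(random.random() < density) for _ in range(n_frames)]
figure = [list(gate) for _ in range(3)]

coherent = pearson(figure[0], figure[1])            # ~1: shared time course
incoherent = pearson(background[0], background[1])  # ~0: independent channels
```

High cross-channel correlation over time is exactly the cue that lets a listener, or a model, bind the figure's frequency components into one auditory object even though figure and background overlap in spectrotemporal space.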
Bicak, Mehmet M. A.
Detailed acoustic engineering models were developed to explore the noise propagation mechanisms associated with noise attenuation and with the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are built from volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics, as well as a prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.
Hansen, Niels Chr.; Pearce, Marcus T.
Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
Jahshan, Carol; Wynn, Jonathan K; Green, Michael F
Patients with schizophrenia have well-established deficits in their ability to identify emotion from facial expression and tone of voice. In the visual modality, there is strong evidence that basic processing deficits contribute to impaired facial affect recognition in schizophrenia. However, few studies have examined the auditory modality for mechanisms underlying affective prosody identification. In this study, we explored links between different stages of auditory processing, using event-related potentials (ERPs), and affective prosody detection in schizophrenia. Thirty-six schizophrenia patients and 18 healthy control subjects received tasks of affective prosody, facial emotion identification, and tone matching, as well as two auditory oddball paradigms, one passive for mismatch negativity (MMN) and one active for P300. Patients had significantly reduced MMN and P300 amplitudes, impaired auditory and visual emotion recognition, and poorer tone matching performance, relative to healthy controls. Correlations between ERP and behavioral measures within the patient group revealed significant associations between affective prosody recognition and both MMN and P300 amplitudes. These relationships were modality specific, as MMN and P300 did not correlate with facial emotion recognition. The two ERP waves accounted for 49% of the variance in affective prosody in a regression analysis. Our results support previous suggestions of a relationship between basic auditory processing abnormalities and affective prosody dysfunction in schizophrenia, and indicate that both relatively automatic pre-attentive processes (MMN) and later attention-dependent processes (P300) are involved with accurate auditory emotion identification. These findings provide support for bottom-up (e.g., perceptually based) cognitive remediation approaches. Published by Elsevier B.V.
Saija, Jefta D; Başkent, Deniz; Andringa, Tjeerd C; Akyürek, Elkan G
As people age, they tend to integrate successive visual stimuli over longer intervals than younger adults. It may be expected that temporal integration is affected similarly in other modalities, possibly due to general, age-related cognitive slowing of the brain. However, the previous literature does not provide convincing evidence that this is the case in audition. One hypothesis is that the primacy of time in audition attenuates the degree to which temporal integration in that modality extends over time as a function of age. We sought to settle this issue by comparing visual and auditory temporal integration in younger and older adults directly, achieved by minimizing task differences between modalities. Participants were presented with a visual or an auditory rapid serial presentation task, at 40-100 ms/item. In both tasks, two subsequent targets were to be identified. Critically, these could be perceptually integrated and reported by the participants as such, providing a direct measure of temporal integration. In both tasks, older participants integrated more than younger adults, especially when stimuli were presented across longer time intervals. This difference was more pronounced in vision and only marginally significant in audition. We conclude that temporal integration increases with age in both modalities, but that this change might be slightly less pronounced in audition.
Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency-modulated sweeps), and ecologically relevant (con-specific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.
Hickok, G.; Okada, K.; Barr, W.; Pa, J.; Rogalsky, C.; Donnelly, K.; Barde, L.; Grant, A.
Data from lesion studies suggest that the ability to perceive speech sounds, as measured by auditory comprehension tasks, is supported by temporal lobe systems in both the left and right hemisphere. For example, patients with left temporal lobe damage and auditory comprehension deficits (i.e., Wernicke's aphasics), nonetheless comprehend isolated…
Mourad, Mona; Hassan, Mona; El-Banna, Manal; Asal, Samir; Hamza, Yasmeen
A deficit in the processing of auditory information may underlie problems in understanding speech in the presence of background noise, degraded speech, and in following spoken instructions. Children with auditory processing disorders are challenged in the classroom because of ambient noise levels and may be at risk for learning disabilities. 1) Set up and execute a screening protocol for auditory processing performance (APP) in primary school children. 2) Construct a database for APP in the classroom. 3) Set critical limits for deviant performance. Our hypothesis is that screening for APP in the classroom identifies pupils at risk for auditory processing disorders. The study consisted of two phases. Phase 1: 2,015 pupils were selected from fourth-, fifth-, and sixth-graders using stratified random sampling with the proportional allocation method. Male and female students were equally represented. Otoscopic examination, screening audiometry, and screening tests for auditory processing (AP) abilities (Pitch Pattern Sequence Test [PPST], speech perception in noise [SPIN] right, SPIN left, and Dichotic Digit Test) were conducted. A questionnaire emphasizing auditory listening behaviors (ALB) was answered by the classroom teacher. Phase 2 included 69 pupils who were randomly selected based on percentile scores of phase 1. Students were examined with the corresponding full-version AP tests in addition to the Auditory Fusion Test-Revised and masking level difference. Intelligence quotient and learning disabilities were evaluated. Phase 1: Results are displayed in frequency polygons for 10th, 25th, 50th, 75th, and 90th percentile scores for each AP test. Fourth-graders scored significantly lower than fifth- and sixth-graders on all tests. Males scored lower than females on PPST. A composite score was calculated to represent a summed score performance for PPST, SPIN right ear, SPIN left ear, and Dichotic Digit Test. Scores … Auditory Fusion Test-Revised mean thresholds were statistically
Full Text Available Research on auditory verbal hallucinations (AVHs) indicates that AVH schizophrenia patients show greater abnormalities on tasks requiring recognition of affective prosody (AP) than non-AVH patients. Detecting AP requires accurate perception of manipulations in pitch, amplitude and duration. Schizophrenia patients with AVHs also experience difficulty detecting these acoustic manipulations, with a number of theorists speculating that difficulties in pitch, amplitude and duration discrimination underlie AP abnormalities. This study examined whether both AP and these aspects of auditory processing are also impaired in first-degree relatives of persons with AVHs. It also examined whether pitch, amplitude and duration discrimination were related to AP, and to hallucination proneness. Unaffected relatives of AVH schizophrenia patients (N = 19) and matched healthy controls (N = 33) were compared using tone discrimination tasks, an AP task, and clinical measures. Relatives were slower at identifying emotions on the AP task (p = .002), with secondary analysis showing this was especially so for happy (p = .014) and neutral (p = .001) sentences. There was a significant interaction effect for pitch between tone deviation level and group (p = .019), and relatives performed worse than controls on amplitude discrimination and duration discrimination. AP performance for happy and neutral sentences was significantly correlated with amplitude perception. Lastly, AVH proneness in the entire sample was significantly correlated with pitch discrimination (r = .44), and pitch perception was shown to predict AVH proneness in the sample (p = .005). These results suggest basic impairments in auditory processing are present in relatives of AVH patients; they potentially underlie processing speed in AP tasks, and predict AVH proneness. This indicates auditory processing deficits may be a core feature of AVHs in schizophrenia, and are worthy of further study as a potential endophenotype for
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved the perception of speech-in-noise test performance, and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediate postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the listening in specialized noise test that assesses sentence perception in
Grose, John H; Mamo, Sara K
The purpose of this study was to determine whether the processing of temporal fine structure diminishes with age, even in the presence of relatively normal audiometric hearing. Temporal fine structure processing was assessed by measuring the discrimination of interaural phase differences (IPDs). The hypothesis was that IPD discrimination is more acute in middle-aged observers than in older observers but that acuity in middle-aged observers is nevertheless poorer than in young adults. Two experiments were undertaken. The first measured discrimination of 0- and π-radian interaural phases as a function of carrier frequency. The stimulus was a 5-Hz sinusoidally amplitude-modulated tone in which, in the signal waveform, the interaural phase of the carrier was inverted during alternate modulation periods. The second experiment measured IPD discrimination at fixed frequencies. The stimulus was a pair of tone pulses in which, in the signal, the trailing pulse contained an IPD. A total of 39 adults with normal audiograms ≤2000 Hz participated in this study, of which 15 were younger, 12 middle aged, and 12 older. Experiment 1 showed that the highest carrier frequency at which a π-radian IPD could be discriminated from the diotic, 0-radian standard was significantly lower in middle-aged listeners than young adults, and still lower in older listeners. Experiment 2 indicated that middle-aged listeners were less sensitive to IPDs than young adults at all but the lowest frequencies tested. Older listeners, as a group, had the poorest thresholds. These results suggest that deficits in temporal fine structure processing are evident in the presenescent auditory system. This adds to the accumulating evidence that deficiencies in some aspects of auditory temporal processing emerge relatively early in the aging process. It is possible that early-emerging temporal processing deficits manifest themselves in challenging speech in noise environments.
Wit, E. de; Visser-Bochane, M.I.; Steenbergen, B.; Dijk, P. van; Schans, C.P. van der; Luinge, M.R.
Purpose: The purpose of this review article is to describe characteristics of auditory processing disorders (APD) by evaluating the literature in which children with suspected or diagnosed APD were compared with typically developing children and to determine whether APD must be regarded as a deficit
Elliott, Emily M.; Bhagat, Shaum P.; Lynn, Sharon D.
This study investigated the effects of irrelevant sounds on the serial recall performance of visually presented digits in a sample of children diagnosed with (central) auditory processing disorders [(C)APD] and age- and span-matched control groups. The irrelevant sounds used were samples of tones and speech. Memory performance was significantly…
Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten
Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering … in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated … subtraction. The results suggested that a decision metric based on the signal-to-noise ratio in the envelope domain (SNRenv) may provide a more general basis for predicting speech intelligibility than a metric based on the modulation transfer function (MTF). Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined …
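The SNRenv-style decision metric discussed in this record can be caricatured as follows (a simplified, hedged sketch added for illustration: published implementations such as the sEPSM first filter the envelope into modulation bands, which is omitted here, and the toy envelopes are invented):

```python
import math

def envelope_power(envelope):
    """AC power of a temporal envelope: variance of the envelope
    normalized by its squared mean (dimensionless, DC removed)."""
    mean = sum(envelope) / len(envelope)
    var = sum((e - mean) ** 2 for e in envelope) / len(envelope)
    return var / mean ** 2

def snr_env_db(speech_env, noise_env, floor=1e-4):
    """Envelope-domain SNR in dB, floored to avoid log of zero."""
    ratio = envelope_power(speech_env) / max(envelope_power(noise_env), floor)
    return 10 * math.log10(max(ratio, floor))

# Toy envelopes sampled over 4 full modulation cycles (period 25 samples):
# a strongly modulated "speech" envelope vs. a nearly flat "noise" envelope.
speech = [1 + 0.8 * math.sin(2 * math.pi * t / 25) for t in range(100)]
noise = [1 + 0.05 * math.sin(2 * math.pi * t / 25) for t in range(100)]
print(snr_env_db(speech, noise))
```

The intuition matches the abstract: intact speech carries strong envelope modulations, noise flattens them, so the envelope-domain power ratio tracks how much speech modulation survives the noise.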
van der Steen, M C Marieke; Jacoby, Nori; Fairhurst, Merle T; Keller, Peter E
The current study investigated the human ability to synchronize movements with event sequences containing continuous tempo changes. This capacity is evident, for example, in ensemble musicians who maintain precise interpersonal coordination while modulating the performance tempo for expressive purposes. Here we tested an ADaptation and Anticipation Model (ADAM) that was developed to account for such behavior by combining error correction processes (adaptation) with a predictive temporal extrapolation process (anticipation). While previous computational models of synchronization incorporate error correction, they do not account for prediction during tempo-changing behavior. The fit between behavioral data and computer simulations based on four versions of ADAM was assessed. These versions included a model with adaptation only, one in which adaptation and anticipation act in combination (error correction is applied on the basis of predicted tempo changes), and two models in which adaptation and anticipation were linked in a joint module that corrects for predicted discrepancies between the outcomes of adaptive and anticipatory processes. The behavioral experiment required participants to tap their finger in time with three auditory pacing sequences containing tempo changes that differed in the rate of change and the number of turning points. Behavioral results indicated that sensorimotor synchronization accuracy and precision, while generally high, decreased with increases in the rate of tempo change and number of turning points. Simulations and model-based parameter estimates showed that adaptation mechanisms alone could not fully explain the observed precision of sensorimotor synchronization. Including anticipation in the model increased the precision of simulated sensorimotor synchronization and improved the fit of model to behavioral data, especially when adaptation and anticipation mechanisms were linked via a joint module based on the notion of joint internal
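The combination of adaptation (asynchrony-based error correction) and anticipation (extrapolation of the ongoing tempo change) described in this record can be sketched in a few lines (an illustrative simulation with made-up parameters, not the authors' ADAM implementation):

```python
def simulate_sync(iois, alpha=0.5, beta=1.0):
    """Simulate tapping along with a tempo-changing pacing sequence.

    alpha: phase-correction gain (adaptation).
    beta:  weight on linear extrapolation of the tempo change
           (anticipation); beta=0 reduces to adaptation only.
    Returns the tap-onset asynchronies (ms), positive = tapping late.
    """
    tap = onset = 0.0              # first tap and first beat coincide
    e = 0.0                        # current asynchrony
    history = [iois[0], iois[0]]   # assume steady tempo before the start
    asynchronies = []
    for ioi in iois:
        # Anticipation: predict the upcoming interval from the last two.
        predicted = history[-1] + beta * (history[-1] - history[-2])
        # Adaptation: produce the interval, correcting part of the error.
        tap += predicted - alpha * e
        onset += ioi
        e = tap - onset
        asynchronies.append(e)
        history.append(ioi)
    return asynchronies

# Accelerating sequence: inter-onset intervals shrink by 10 ms per event.
pacing = [600.0 - 10 * k for k in range(20)]
with_anticipation = simulate_sync(pacing, beta=1.0)
adaptation_only = simulate_sync(pacing, beta=0.0)
```

With beta = 1 the simulated asynchrony decays toward zero during a linear tempo change, whereas with beta = 0 a steady-state lag of roughly (tempo change)/alpha remains, loosely mirroring the paper's finding that error correction alone cannot fully explain synchronization precision.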
Vandewalle, Ellen; Boets, Bart; Ghesquière, Pol; Zink, Inge
This longitudinal study investigated temporal auditory processing (frequency modulation and between-channel gap detection) and speech perception (speech-in-noise and categorical perception) in three groups of 6 years 3 months to 6 years 8 months-old children attending grade 1: (1) children with specific language impairment (SLI) and literacy delay (n = 8), (2) children with SLI and normal literacy (n = 10) and (3) typically developing children (n = 14). Moreover, the relations between these auditory processing and speech perception skills and oral language and literacy skills in grade 1 and grade 3 were analyzed. The SLI group with literacy delay scored significantly lower than both other groups on speech perception, but not on temporal auditory processing. Both normal reading groups did not differ in terms of speech perception or auditory processing. Speech perception was significantly related to reading and spelling in grades 1 and 3 and had a unique predictive contribution to reading growth in grade 3, even after controlling reading level, phonological ability, auditory processing and oral language skills in grade 1. These findings indicated that speech perception also had a unique direct impact upon reading development and not only through its relation with phonological awareness. Moreover, speech perception seemed to be more associated with the development of literacy skills and less with oral language ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Engineer, C T; Centanni, T M; Im, K W; Borland, M S; Moreno, N A; Carraway, R S; Wilson, L G; Kilgard, M P
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits in communication observed in individuals with autism. In this study, we document auditory cortex responses in rats prenatally exposed to VPA. We recorded local field potentials and multiunit responses to speech sounds in primary auditory cortex, anterior auditory field, ventral auditory field, and posterior auditory field in VPA exposed and control rats. Prenatal VPA exposure severely degrades the precise spatiotemporal patterns evoked by speech sounds in secondary, but not primary auditory cortex. This result parallels findings in humans and suggests that secondary auditory fields may be more sensitive to environmental disturbances and may provide insight into possible mechanisms related to auditory deficits in individuals with autism. © 2014 Wiley Periodicals, Inc.
Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether...
Fitch, Roslyn Holly; Alexander, Michelle L.; Threlkeld, Steven W.
Full Text Available Most researchers in the field of neural plasticity are familiar with the "Kennard Principle," which purports a positive relationship between age at brain injury and severity of subsequent deficits (plateauing in adulthood). As an example, a child with left hemispherectomy can recover seemingly normal language, while an adult with focal injury to sub-regions of left temporal and/or frontal cortex can suffer dramatic and permanent language loss. Here we present data regarding the impact of early brain injury in rat models as a function of type and timing, measuring long-term behavioral outcomes via auditory discrimination tasks varying in temporal demand. These tasks were created to model (in rodents) aspects of human sensory processing that may correlate – both developmentally and functionally – with typical and atypical language. We found that bilateral focal lesions to the cortical plate in rats during active neuronal migration led to worse auditory outcomes than comparable lesions induced after cortical migration was complete. Conversely, unilateral hypoxic-ischemic injuries (similar to those seen in premature infants and term infants with birth complications) led to permanent auditory processing deficits when induced at a neurodevelopmental point comparable to human "term," but only transient deficits (undetectable in adulthood) when induced in a "preterm" window. Convergent evidence suggests that regardless of when or how disruption of early neural development occurs, the consequences may be particularly deleterious to rapid auditory processing outcomes when they trigger developmental alterations that extend into subcortical structures (i.e., lower sensory processing stations). Collective findings hold implications for the study of behavioral outcomes following early brain injury as well as genetic/environmental disruption, and are relevant to our understanding of the neurologic risk factors underlying developmental language disability in
Wilson, Wayne J; Arnott, Wendy; Henning, Caroline
To systematically review the peer-reviewed literature on electrophysiological outcomes following auditory training (AT) in school-age children with (central) auditory processing disorder ([C]APD). A systematic review. Searches of 16 electronic databases yielded four studies involving school-aged children whose auditory processing deficits had been confirmed in a manner consistent with ASHA (2005) and AAA (2010) and compared to a treated and/or an untreated control group before and after AT. A further three studies were identified with one lacking a control group and two measuring auditory processing in a manner not consistent with ASHA (2005) and AAA (2010). There is limited evidence that AT leads to measurable electrophysiological changes in children with auditory processing deficits. The evidence base is too small and weak to provide clear guidance on the use of electrophysiological outcomes as a measure of AT outcomes in children with auditory processing problems. The currently limited data can only be used to suggest that click-evoked AMLR and tone-burst evoked auditory P300 might be more likely to detect such outcomes in children diagnosed with (C)APD, and that speech-evoked ALLR might be more likely to detect phonological processing changes in children without a specific diagnosis of (C)APD.
Full Text Available When we actively interact with the environment, it is crucial that we perceive a precise temporal relationship between our own actions and sensory effects to guide our body movements. Thus, we hypothesized that voluntary movements improve perceptual sensitivity to the temporal disparity between auditory and movement-related somatosensory events compared to when they are delivered passively to sensory receptors. In the voluntary condition, participants voluntarily tapped a button, and a noise burst was presented at various onset asynchronies relative to the button press. The participants made either 'sound-first' or 'touch-first' responses. We found that the performance of temporal order judgment (TOJ) in the voluntary condition (as indexed by the just noticeable difference) was significantly better (M = 42.5 ms ± 3.8 s.e.m.) than when their finger was passively stimulated (passive condition: M = 66.8 ms ± 6.3 s.e.m.). We further examined whether the performance improvement with voluntary action can be attributed to the prediction of the timing of the stimulation from sensory cues (sensory-based prediction), to kinesthetic cues contained in voluntary action, and/or to the prediction of stimulation timing from the efference copy of the motor command (motor-based prediction). When the participant's finger was moved passively to press the button (involuntary condition) and when three noise bursts were presented before the target burst at regular intervals (predictable condition), the TOJ performance was not improved relative to the passive condition. These results suggest that the improvement in sensitivity to temporal disparity between somatosensory and auditory events caused by voluntary action cannot be attributed to sensory-based prediction or kinesthetic cues. Rather, prediction from the efference copy of the motor command would be crucial for improving temporal sensitivity.
Tallal, Paula; Gaab, Nadine
Children with language-learning impairments (LLI) form a heterogeneous population, with the majority having both spoken and written language deficits as well as sensorimotor deficits, specifically those related to dynamic processing. Research has focused on whether sensorimotor deficits, specifically auditory spectrotemporal processing deficits, cause phonological deficits, leading to language and reading impairments. New trends aimed at resolving this question include prospective longitudinal studies of genetically at-risk infants, electrophysiological and neuroimaging studies, and studies aimed at evaluating the effects of auditory training (including musical training) on brain organization for language. Better understanding of the origins of developmental LLI will advance our understanding of the neurobiological mechanisms underlying individual differences in language development and lead to more effective educational and intervention strategies. This review is part of the INMED/TINS special issue "Nature and nurture in brain development and neurological disorders", based on presentations at the annual INMED/TINS symposium (http://inmednet.com/).
Alijani, Babak; Bagheri, Hamid Reza; Chabok, Shahrokh Yousefzadeh; Behzadnia, Hamid; Dehghani, Siavash
Temporal bone meningoencephalic herniation may occur in head trauma. It is a rare condition with potentially dangerous complications. Several different routes for temporal bone meningoencephalocele have been proposed. An 11-year-old boy with a history of head trauma initially presented with a 9-month history of progressive right-sided hearing loss and facial weakness. The other complaint was the formation of a cystic mass in the right external auditory canal. The patient underwent surgery via a mini middle cranial fossa craniotomy combined with a transmastoid approach. Although presenting symptoms can be subtle, early suspicion and confirmatory imaging aid in establishing the diagnosis. The combination of computed tomography and magnetic resonance imaging helps in proper preoperative diagnosis. The operation includes a transmastoid approach, middle cranial fossa repair, or a combination of both. Multilayer closure of the bony defect is very important to avoid cerebrospinal fluid leak. Clinical manifestations, diagnosis, and surgical approaches for posttraumatic meningoencephaloceles arising in the head and neck region are briefly discussed.
Nishihara, Makoto; Inui, Koji; Morita, Tomoyo; Kodaira, Minori; Mochizuki, Hideki; Otsuru, Naofumi; Motomura, Eishi; Ushida, Takahiro; Kakigi, Ryusuke
Previous studies showed that the amplitude and latency of the auditory offset cortical response depended on the history of the sound, which implicated the involvement of echoic memory in shaping the response. When a brief sound was repeated, the latency of the offset response depended precisely on the frequency of the repeat, indicating that the brain recognized the timing of the offset by using information on the repeat frequency stored in memory. In the present study, we investigated the temporal resolution of sensory storage by measuring auditory offset responses with magnetoencephalography (MEG). The offset of a train of clicks lasting 1 s elicited a clear magnetic response at approximately 60 ms (Off-P50m). The latency of Off-P50m depended on the inter-stimulus interval (ISI) of the click train: it was longest at 40 ms (25 Hz) and became shorter with shorter ISIs (2.5∼20 ms). The correlation coefficient r² for the peak latency and ISI was as high as 0.99, suggesting that sensory storage for the stimulation frequency accurately determined the Off-P50m latency. Statistical analysis revealed that the latencies of all pairs, except that between 200 and 400 Hz, were significantly different, indicating a very high temporal resolution of sensory storage of approximately 5 ms.
Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.
Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.
Moser, Dana; Baker, Julie M; Sanchez, Carmen E; Rorden, Chris; Fridriksson, Julius
Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits. Although the right posterior inferior parietal lobe has been implicated in temporal order judgments (TOJs) of visual information, there is limited evidence to support the role of the left inferior parietal lobe (IPL) in processing syllable order. The purpose of this study was to examine whether the left inferior parietal lobe is recruited during temporal order judgments of speech stimuli. Functional magnetic resonance imaging data were collected on 14 normal participants while they completed the following forced-choice tasks: (1) syllable order of multisyllabic pseudowords, (2) syllable identification of single syllables, and (3) gender identification of both multisyllabic and monosyllabic speech stimuli. Results revealed increased neural recruitment in the left inferior parietal lobe when participants made judgments about syllable order compared with both syllable identification and gender identification. These findings suggest that the left inferior parietal lobe plays an important role in processing syllable order and support the hypothesized role of this region as an interface between auditory speech and the articulatory code. Furthermore, a breakdown in this interface may explain some components of the speech deficits observed after posterior damage to the left hemisphere.
Leslie D Kwakye
Full Text Available Autism spectrum disorders (ASD) form a continuum of neurodevelopmental disorders characterized by deficits in communication and reciprocal social interaction, repetitive behaviors, and restricted interests. Sensory disturbances are also frequently reported in clinical and autobiographical accounts. However, few empirical studies have characterized the fundamental features of sensory and multisensory processing in ASD. Recently published studies have shown that children with ASD are able to integrate low-level multisensory stimuli, but do so over an enlarged temporal window when compared with typically developing (TD) children. The current study sought to expand upon these previous findings by examining differences in the temporal processing of low-level multisensory stimuli in high-functioning (HFA) and low-functioning (LFA) children with ASD in the context of a simple reaction time task. Contrary to these previous findings, children with both HFA and LFA showed smaller gains in performance under multisensory (i.e., combined visual-auditory) conditions when compared with their TD peers. Additionally, the pattern of performance gains as a function of stimulus onset asynchrony (SOA) was similar across groups, suggesting similarities in the temporal processing of these cues that run counter to previous studies that have shown an enlarged "temporal window." These findings add complexity to our understanding of the multisensory processing of low-level stimuli in ASD and may hold promise for the development of more sensitive diagnostic measures and improved remediation strategies in autism.
Kell, Christian A; Darquea, Maritza; Behrens, Marion; Cordani, Lorenzo; Keller, Christian; Fuchs, Susanne
Phonetic detail and lateralization of inner speech were investigated during covert sentence reading as well as overt reading in 32 right-handed healthy participants undergoing 3T fMRI. The number of voiceless and voiced consonants in the processed sentences was systematically varied. Participants listened to sentences, read them covertly, silently mouthed them while reading, and read them overtly. Condition comparisons allowed for the study of effects of externally versus self-generated auditory input and of somatosensory feedback related to or independent of voicing. In every condition, increased voicing modulated bilateral voice-selective regions in the superior temporal sulcus without any lateralization. The enhanced temporal modulation and/or higher spectral frequencies of sentences rich in voiceless consonants induced left-lateralized activation of phonological regions in the posterior temporal lobe, regardless of condition. These results provide evidence that inner speech during reading codes detail as fine as consonant voicing. Our findings suggest that the fronto-temporal internal loops underlying inner speech target different temporal regions. These regions differ in their sensitivity to inner or overt acoustic speech features. More slowly varying acoustic parameters are represented more anteriorly and bilaterally in the temporal lobe, while quickly changing acoustic features are processed in more posterior left temporal cortices. Furthermore, processing of external auditory feedback during overt sentence reading was sensitive to consonant voicing only in the left superior temporal cortex. Voicing did not modulate left-lateralized processing of somatosensory feedback during articulation or bilateral motor processing. This suggests voicing is primarily monitored in the auditory rather than in the somatosensory feedback channel. Hum Brain Mapp 38:493-508, 2017. © 2016 Wiley Periodicals, Inc.
Peña, José Luis
The owl's auditory system computes interaural time (ITD) and interaural level (ILD) differences to create a two-dimensional map of auditory space. Space-specific neurons are selective for combinations of ITD and ILD, which define, respectively, the horizontal and vertical dimensions of their receptive fields. ITD curves for postsynaptic potentials indicate that ICx neurons integrate the results of binaural cross correlation in different frequency bands. However, the difference between the main and side peaks is slight. ICx neurons further enhance this difference in the process of converting membrane potentials to impulse rates. Comparison of subthreshold postsynaptic potentials (PSPs) and spike output for the same neurons showed that receptive fields measured in PSPs were much larger than those measured in spikes in both ITD and ILD dimensions. A multiplication of separate postsynaptic potentials tuned to ITD and ILD can account for the combination sensitivity of these neurons to ITD-ILD pairs.
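The multiplicative combination of separate ITD and ILD tuning described above can be sketched as the outer product of two one-dimensional tuning curves. A minimal sketch; the Gaussian tuning shapes and all numerical parameters are illustrative assumptions, not fitted owl data:

```python
import numpy as np

# Hypothetical 1-D tuning curves for ITD (horizontal) and ILD (vertical).
itd = np.linspace(-200.0, 200.0, 81)  # interaural time difference, microseconds
ild = np.linspace(-20.0, 20.0, 41)    # interaural level difference, dB

def gaussian(x, mu, sigma):
    """Unit-height Gaussian tuning curve centered on the preferred value."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

itd_tuning = gaussian(itd, mu=50.0, sigma=40.0)
ild_tuning = gaussian(ild, mu=-5.0, sigma=6.0)

# Multiplication of the two postsynaptic-potential tuning curves yields a
# 2-D receptive field selective for one ITD-ILD pair (the outer product).
rf = np.outer(itd_tuning, ild_tuning)

# The peak of the product sits at the preferred ITD-ILD combination.
i, j = np.unravel_index(np.argmax(rf), rf.shape)
print(itd[i], ild[j])  # -> 50.0 -5.0
```

The outer product captures why such a neuron responds strongly only when both cues match its preference: either tuning curve near zero suppresses the product, sharpening combination sensitivity relative to an additive scheme.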
Full Text Available Complex higher-order cognitive functions and their possible changes with aging are mandatory objectives of cognitive neuroscience. Event-related potentials (ERPs) allow investigators to probe the earliest stages of information processing. N100, mismatch negativity (MMN) and P3a are auditory ERP components that reflect automatic sensory discrimination. The aim of the present study was to determine whether N100, MMN and P3a parameters are stable in healthy aged subjects compared to those of normal young adults. Normal young adults and older participants were assessed using standardized cognitive functional instruments, and their ERPs were obtained with auditory stimulation at two different interstimulus intervals during a passive paradigm. All individuals were within the normal range on cognitive tests. No significant differences were found for any ERP parameters obtained from the two age groups. This study shows that aging is characterized by stability of auditory discrimination and novelty processing. This is important for the establishment of normative data for the detection of subtle preclinical changes due to abnormal brain aging.
Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
Multisensory integration is one of the essential features of perception. Though the processing of spatial information is an important clue to understanding its mechanisms, complete knowledge cannot be achieved without taking into account the processing of temporal information. Simultaneity judgments (SJs) and temporal order judgments (TOJs) are the two most widely used procedures for explicit estimation of temporal relations between sensory stimuli. Behavioral studies suggest that the two tasks recruit different sets of cognitive operations. On the other hand, empirical evidence related to their neuronal underpinnings is still scarce, especially with regard to multisensory stimulation. The aim of the current fMRI study was to explore the neural correlates of both tasks using a paradigm with audiovisual stimuli. Fifteen subjects performed TOJ and SJ tasks grouped in 18-second blocks. Subjects were asked to estimate the onset synchrony or the temporal order of onsets of non-semantic auditory and visual stimuli. Common areas of activation elicited by both tasks were found in the bilateral fronto-parietal network, including regions whose activity can also be observed in tasks involving spatial selective attention. This can be regarded as evidence for the hypothesis that tasks involving selection based on temporal information engage similar regions to attentional tasks based on spatial information. The direct contrast between the SJ and TOJ tasks did not reveal any regions showing stronger activity for the SJ task than for the TOJ task. The reverse contrast revealed a number of left-hemisphere regions that were more active during the TOJ task than the SJ task. They were found in the prefrontal cortex, the parietal lobules (superior and inferior) and in the occipito-temporal regions. These results suggest that the TOJ task requires the recruitment of additional cognitive operations in comparison to the SJ task, probably associated with forming representations of stimuli as…
Robert J Ellis
Full Text Available "Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file's temporal structure (e.g., "average tempo", "time signature"), none has sought to quantify the temporal stability of a series of detected beats. Such a method--a "Balanced Evaluation of Auditory Temporal Stability" (BEATS)--is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications.
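A beat-stability statistic in the spirit of BEATS can be sketched from a beat tracker's onset times. This is our own illustrative construction (the function name and the coefficient-of-variation metric are assumptions), not the published algorithm:

```python
import numpy as np

def beat_stability(beat_times):
    """Summarize a sequence of detected beat onsets (in seconds).

    Returns (tempo_bpm, cv), where cv is the coefficient of variation of
    the inter-beat intervals: 0 means a perfectly steady pulse, and larger
    values mean less temporally stable beats.
    """
    ibis = np.diff(np.asarray(beat_times, dtype=float))  # inter-beat intervals
    tempo_bpm = 60.0 / np.median(ibis)                   # median is robust to outliers
    cv = np.std(ibis) / np.mean(ibis)
    return tempo_bpm, cv

# A perfectly steady 120 BPM pulse: one beat every 0.5 s for 10 s.
steady = np.arange(0.0, 10.0, 0.5)
tempo, cv = beat_stability(steady)
print(round(tempo), round(cv, 3))  # -> 120 0.0
```

A playlist generator could then threshold on both values, e.g. keeping only tracks near a target tempo whose cv falls below some stability cutoff, which is the kind of queryable filtering the abstract describes.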
Brown, Stephen B.R.E.
This dissertation explores the involvement of the locus-coeruleus-noradrenaline (LC-NE) system in both temporal attention and uncertainty processing. To this end, a number of cognitive tasks are used (Stroop, passive viewing, attentional blink, accessory stimulus, auditory oddball) and a number of…
Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition-rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme, indicative of a place code. Diversity in local processing emphasis and the distribution of different repetition-rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
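The mutual-information comparison described above can be illustrated with a simple plug-in estimator over paired discrete observations. The toy stimulus/response data below are invented for illustration and the estimator is generic, not the authors' analysis code:

```python
import numpy as np
from collections import Counter

def mutual_information(stimuli, responses):
    """Plug-in mutual information estimate in bits from paired discrete samples."""
    n = len(stimuli)
    p_s = Counter(stimuli)                  # marginal counts of stimuli
    p_r = Counter(responses)                # marginal counts of responses
    p_sr = Counter(zip(stimuli, responses)) # joint counts
    mi = 0.0
    for (s, r), c in p_sr.items():
        p_joint = c / n
        mi += p_joint * np.log2(p_joint / ((p_s[s] / n) * (p_r[r] / n)))
    return mi

# Perfectly informative code: each repetition rate maps to a unique ISI bin,
# so the response carries log2(3) bits about the three-way stimulus.
stim = [5, 10, 20, 5, 10, 20]               # hypothetical repetition rates (Hz)
resp = ['long', 'mid', 'short', 'long', 'mid', 'short']  # discretized ISIs
print(round(mutual_information(stim, resp), 3))  # -> 1.585
```

Comparing such estimates across candidate codes (spike timing, rate, ISI), as the study did, asks which response variable best disambiguates the stimulus; in practice, plug-in estimates need bias correction for small sample sizes.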
Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R
The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started at 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses at 20 ms before and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally-predictable stimuli are learned and reinforced by the internal feedforward system, and as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control. Copyright © 2016 Elsevier B.V. All rights reserved.
Pena, Jose L; DeBello, William M
The human brain has accumulated many useful building blocks over its evolutionary history, and the best knowledge of these has often derived from experiments performed in animal species that display finely honed abilities. In this article we review a model system at the forefront of investigation into the neural bases of information processing, plasticity, and learning: the barn owl auditory localization pathway. In addition to the broadly applicable principles gleaned from three decades of work in this system, there are good reasons to believe that continued exploration of the owl brain will be invaluable for further advances in understanding of how neuronal networks give rise to behavior.
Barniv, Dana; Nelken, Israel
When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
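The evidence-accumulation account can be caricatured in a few lines. This toy simulation is our own construction under stated assumptions (evidence for the opposite percept grows in proportion to the current phase duration and seeds the next phase), not the authors' probabilistic model; it reproduces the key qualitative prediction of positively correlated successive phase durations:

```python
import random

def simulate(n_phases, seed=1):
    """Generate successive percept-phase durations from a toy accumulation rule."""
    rng = random.Random(seed)
    durations = []
    evidence = 1.0  # evidence supporting the upcoming percept
    for _ in range(n_phases):
        # a phase lasts longer the more evidence backs its percept (noisy scaling)
        d = evidence * (0.5 + rng.random())
        durations.append(d)
        # during this phase, evidence for the *opposite* percept accumulates
        # roughly in proportion to the phase duration, carrying over the switch
        evidence = 0.5 + 0.5 * d
    return durations

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

d = simulate(2000)
r = pearson(d[:-1], d[1:])  # positive: long phases tend to follow long phases
```

Under statistically independent phase durations, as assumed by earlier bistability models, `r` would hover near zero; the carry-over term is what produces the positive correlation the study reports.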
Liégeois-Chauvel, C; de Graaf, J B; Laguitton, V; Chauvel, P
… Natural voiced (/ba/, /da/, /ga/) and voiceless (/pa/, /ta/, /ka/) syllables, spoken by a native French speaker, were used to study the processing of a specific temporally based acoustico-phonetic feature, the voice onset time (VOT) …
Kowalewski, Borys; MacDonald, Ewen; Strelcyk, Olaf
Most state-of-the-art hearing aids apply multi-channel dynamic-range compression (DRC). Such designs have the potential to emulate, at least to some degree, the processing that takes place in the healthy auditory system. One way to assess hearing-aid performance is to measure speech intelligibility. … However, due to the complexity of speech and its robustness to spectral and temporal alterations, the effects of DRC on speech perception have been mixed and controversial. The goal of the present study was to obtain a clearer understanding of the interplay between hearing loss and DRC by means … Outcomes were simulated using the auditory processing model of Jepsen et al. (2008), with the front end modified to include effects of hearing impairment and DRC. The results were compared to experimental data from normal-hearing and hearing-impaired listeners.
Purpose: This article outlines the author's conceptualization of the key mechanisms that are engaged in the processing of spoken language, referred to as the spoken language processing model. The act of processing what is heard is very complex and involves the successful intertwining of auditory, cognitive, and language mechanisms. Spoken language…
Meredith, M. Alex; Allman, Brian L.
The recent findings in several species that primary auditory cortex processes non-auditory information have largely overlooked the possibility of somatosensory effects. Therefore, the present investigation examined the core auditory cortices (the anterior auditory field, AAF, and the primary auditory field, A1) for tactile responsivity. Multiple single-unit recordings from anesthetized ferret cortex yielded histologically verified neurons (n = 311) tested with electronically controlled auditory, visual and tactile stimuli and their combinations. Of the auditory neurons tested, a small proportion (17%) was influenced by visual cues, but a somewhat larger number (23%) was affected by tactile stimulation. Tactile effects rarely occurred alone, and spiking responses were observed in bimodal auditory-tactile neurons. However, the broadest tactile effect observed, which occurred in all neuron types, was suppression of the response to a concurrent auditory cue. The presence of tactile effects in core auditory cortices was supported by a substantial anatomical projection from the rostral suprasylvian sulcal somatosensory area. Collectively, these results demonstrate that crossmodal effects in auditory cortex are not exclusively visual and that somatosensation plays a significant role in the modulation of acoustic processing, and they indicate that crossmodal plasticity following deafness may unmask these existing non-auditory functions. PMID:25728185
Araneda, Rodrigo; De Volder, Anne G; Deggouj, Naïma; Philippot, Pierre; Heeren, Alexandre; Lacroix, Emilie; Decat, Monique; Rombaux, Philippe; Renier, Laurent
Tinnitus is the perception of a sound in the absence of external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. Here, we hypothesize that these brain alterations affect top-down cognitive control mechanisms that play a role in the regulation of sensations, emotions and attention resources. The efficiency of the executive control as well as simple reaction speed and processing speed were evaluated in tinnitus participants (TP) and matched control subjects (CS) in both the auditory and the visual modalities using a spatial Stroop paradigm. TP were slower and less accurate than CS during both the auditory and the visual spatial Stroop tasks, while simple reaction speed and stimulus processing speed were affected in TP in the auditory modality only. Tinnitus is associated both with modality-specific deficits along the auditory processing system and with an impairment of cognitive control mechanisms that are involved both in vision and audition (i.e. that are supra-modal). We postulate that this deficit in the top-down cognitive control is a key factor in the development and maintenance of tinnitus and may also explain some of the cognitive difficulties reported by tinnitus sufferers.
Green, David B; Mattingly, Michelle M; Ye, Yi; Gay, Jennifer D; Rosen, Merri J
In childhood, partial hearing loss can produce prolonged deficits in speech perception and temporal processing. However, early therapeutic interventions targeting temporal processing may improve later speech-related outcomes. Gap detection is a measure of auditory temporal resolution that relies on the auditory cortex (ACx), and early auditory deprivation alters intrinsic and synaptic properties in the ACx. Thus, early deprivation should induce deficits in gap detection, which should be reflected in ACx gap sensitivity. We tested whether earplugging-induced, early transient auditory deprivation in male and female Mongolian gerbils caused correlated deficits in behavioral and cortical gap detection, and whether these could be rescued by a novel therapeutic approach: brief exposure to gaps in background noise. Two weeks after earplug removal, animals that had been earplugged from hearing onset throughout auditory critical periods displayed impaired behavioral gap detection thresholds (GDTs), but this deficit was fully reversed by three 1 h sessions of exposure to gaps in noise. In parallel, after earplugging, cortical GDTs increased because fewer cells were sensitive to short gaps, and gap exposure normalized this pattern. Furthermore, in deprived animals, both first-spike latency and first-spike latency jitter increased, while spontaneous and evoked firing rates decreased, suggesting that deprivation causes a wider range of perceptual problems than measured here. These cortical changes all returned to control levels after gap exposure. Thus, brief stimulus exposure, perhaps in a salient context such as the unfamiliar placement into a testing apparatus, rescued impaired gap detection and may have potential as a remediation tool for general auditory processing deficits. SIGNIFICANCE STATEMENT: Hearing loss in early childhood leads to impairments in auditory perception and language processing that can last well beyond the restoration of hearing sensitivity. Perceptual…
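Behavioral gap detection thresholds (GDTs) like those reported above are commonly estimated with adaptive tracking. The sketch below is a generic 2-down/1-up staircase (which converges on the ~70.7%-correct point), not the procedure used in this study; the listener model and all parameters are hypothetical:

```python
def two_down_one_up(respond, start=50.0, step=2.0, floor=0.0, n_trials=80):
    """2-down/1-up adaptive staircase for a gap detection threshold.

    `respond(gap_ms)` returns True if the listener detected the gap.
    The gap shrinks after two consecutive detections and grows after a
    miss; the gap values at each direction reversal are returned.
    """
    gap, streak, direction = start, 0, 0
    reversals = []
    for _ in range(n_trials):
        if respond(gap):
            streak += 1
            if streak == 2:          # two correct in a row -> make it harder
                streak = 0
                if direction == +1:  # track was going up: record a reversal
                    reversals.append(gap)
                direction = -1
                gap = max(floor, gap - step)
        else:                        # one miss -> make it easier
            streak = 0
            if direction == -1:      # track was going down: record a reversal
                reversals.append(gap)
            direction = +1
            gap += step
    return reversals

# hypothetical listener that detects any gap of 6 ms or longer
reversals = two_down_one_up(lambda gap_ms: gap_ms >= 6.0, start=20.0)
threshold_estimate = sum(reversals[-6:]) / 6  # mean of the last six reversals
```

With the deterministic listener above, the track settles into bouncing between 4 and 6 ms, so the estimate brackets the true 6 ms threshold to within one step size.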
Computational maps are of central importance to the brain's representation of the outside world. The question of how maps are formed during ontogenetic development is a subject of intense research (Hubel & Wiesel, Proc R Soc B 198:1, 1977; Buonomano & Merzenich, Annu Rev Neurosci 21:149, 1998). Development in the primary visual cortex is, in principle, well explained; development in the auditory system is not, partly because the mechanisms underlying the formation of temporal-feature maps are hardly understood (Carr, Annu Rev Neurosci 16:223, 1993). Through a modelling study based on computer simulations in a system of spiking neurons, a solution is offered to the problem of how a map of interaural time differences is set up in the nucleus laminaris of the barn owl, as a typical example. An array of neurons is able to represent interaural time differences in an orderly manner, viz., as a map, if homosynaptic spike-based Hebbian learning (Gerstner et al, Nature 383:76, 1996; Kempter et al, Phys Rev E 59:4498, 1999) is combined with a presynaptic propagation of synaptic modifications (Fitzsimonds & Poo, Physiol Rev 78:143, 1998). The latter may be orders of magnitude weaker than the former. The algorithm is a key mechanism in the formation of temporal-feature maps on a submillisecond time scale.
Anthony J. Rissling
Full Text Available Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant − Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning, and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics.
Przybylski, Lauranne; Bedoin, Nathalie; Krifi-Papoz, Sonia; Herbillon, Vania; Roch, Didier; Léculier, Laure; Kotz, Sonja A; Tillmann, Barbara
Children with developmental language disorders have been shown to be impaired not only in language processing (including syntax), but also in rhythm and meter perception. Our study tested the influence of external rhythmic auditory stimulation (i.e., musical rhythm) on syntax processing in children with specific language impairment (SLI; Experiment 1A) and dyslexia (Experiment 1B). Children listened to either regular or irregular musical prime sequences followed by blocks of grammatically correct and incorrect sentences. They were required to perform grammaticality judgments for each auditorily presented sentence. Performance of all children (SLI, dyslexia, and controls) in the grammaticality judgments was better after regular prime sequences than after irregular prime sequences, as shown by d' data. The benefit of the regular prime was stronger for SLI children (partial η2 = .34) than for dyslexic children (partial η2 = .14), who reached higher performance levels. Together with previous findings on deficits in temporal processing and sequencing, as well as with the recent proposition of a temporal sampling (oscillatory) framework for developmental language disorders (U. A. Goswami, 2011, Temporal sampling framework for developmental dyslexia, Trends in Cognitive Sciences, Vol. 15, pp. 3-10), our results point to potential avenues in using rhythmic structures (even in nonverbal materials) to boost linguistic structure processing.
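The d' values underlying these grammaticality judgments are the standard signal-detection sensitivity index, z(hit rate) − z(false-alarm rate). A minimal sketch follows; the trial counts, the log-linear correction, and the helper name are illustrative assumptions, not data or methods from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index for a yes/no grammaticality judgment.

    A 'hit' is accepting a grammatical sentence; a 'false alarm' is
    accepting an ungrammatical one. The +0.5 log-linear correction keeps
    both rates strictly between 0 and 1 so the z-transform stays finite.
    """
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# hypothetical counts: discrimination after a regular vs. an irregular prime
after_regular = d_prime(22, 8, 10, 20)
after_irregular = d_prime(16, 14, 12, 18)  # lower d' = poorer discrimination
```

Because d' separates sensitivity from response bias, it is the appropriate measure when children might simply answer "correct" more often after one prime type.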
Full Text Available Visual inputs can distort auditory perception, and accurate auditory processing requires the ability to detect and ignore visual input that is simultaneous with and incongruent with auditory information. However, the neural basis of this auditory selection from audiovisual information is unknown, whereas the integration of audiovisual inputs has been intensively researched. Here, we tested the hypothesis that the inferior frontal gyrus (IFG) and superior temporal sulcus (STS) are involved in top-down and bottom-up processing, respectively, of target auditory information from audiovisual inputs. We recorded high gamma activity (HGA), which is associated with neuronal firing in local brain regions, using electrocorticography while patients with epilepsy judged the syllable spoken by a voice while looking at a voice-congruent or -incongruent lip movement from the speaker. The STS exhibited stronger HGA if the patient was presented with information of large audiovisual incongruence than of small incongruence, especially if the auditory information was correctly identified. On the other hand, the IFG exhibited stronger HGA in trials with small audiovisual incongruence when patients correctly perceived the auditory information than when patients incorrectly perceived the auditory information due to the mismatched visual information. These results indicate that the IFG and STS have dissociated roles in selective auditory processing, and suggest that the neural basis of selective auditory processing changes dynamically in accordance with the degree of incongruity between auditory and visual information.
Fatima T Husain
Full Text Available We investigated the impact of hearing loss on emotional processing using task- and rest-based functional magnetic resonance imaging. Two age-matched groups of middle-aged participants were recruited: one with bilateral high-frequency hearing loss (HL) and a control group with normal hearing (NH). During the task-based portion of the experiment, participants were instructed to rate affective stimuli from the International Affective Digital Sounds database as pleasant, unpleasant, or neutral. In the resting state experiment, participants were told to fixate on a '+' sign on a screen for five minutes. The results of both the task-based and resting state studies suggest that NH and HL participants differ in their emotional response. Specifically, in the task-based study, we found slower response to affective but not neutral sounds by the HL group compared to the NH group. This was reflected in the brain activation patterns, with the NH group employing the expected limbic and auditory regions including the left amygdala, left parahippocampus, right middle temporal gyrus and left superior temporal gyrus to a greater extent in processing affective stimuli when compared to the HL group. In the resting state study, we observed no significant differences in connectivity of the auditory network between the groups. In the dorsal attention network, HL participants exhibited decreased connectivity between seed regions and left insula and left postcentral gyrus compared to controls. The default mode network was also altered, showing increased connectivity between seeds and left middle frontal gyrus in the HL group. Further targeted analysis revealed increased intrinsic connectivity between the right middle temporal gyrus and the right precentral gyrus. The results from both studies suggest neuronal reorganization as a consequence of hearing loss, most notably in networks responding to emotional sounds.
Schwartze, Michael; Kotz, Sonja A
The role of the cerebellum in the anatomical and functional architecture of the brain is a matter of ongoing debate. We propose that cerebellar temporal processing contributes to speech perception on several counts: temporally precise cerebellar encoding and rapid transmission of an event-based representation of the temporal structure of the speech signal serve to prepare areas in the cerebral cortex for the subsequent perceptual integration of sensory information. As speech dynamically evolves in time, this fundamental preparatory function may extend its scope to the predictive allocation of attention in time and support the fine-tuning of temporally specific models of the environment. In this framework, an oscillatory account considering a range of frequencies may best serve the linking of the temporal and speech processing systems. Lastly, the concerted action of these processes may not only advance predictive adaptation to basic auditory dynamics but optimize the perceptual integration of speech. Copyright © 2015 Elsevier Inc. All rights reserved.
Alvarez, Waleska; Fuente, Adrian; Coloma, Carmen Julia; Quezada, Camilo
Many authors have suggested that a perceptual auditory disorder involving temporal processing is the primary cause of Specific Language Impairment (SLI). The aim of this study was to compare the performance of children with and without SLI on a temporal processing task while controlling for the confounding effects of verbal short-term memory and working memory. Thirty participants with SLI aged 6 years were selected, along with 30 age- and gender-matched participants with typical language development. The Adaptive Test of Temporal Resolution (ATTR) was used to evaluate temporal resolution ability (an aspect of temporal processing), and the digit span subtest of the Wechsler Intelligence Scale for Children was used to evaluate auditory short-term memory and working memory. The analysis of covariance showed that children with SLI performed significantly worse than children with typical language development on the temporal resolution task (ATTR), even when controlling for short-term memory and working memory. Statistically significant correlations between ATTR and digit span were found for the group of children with SLI but not for the children with typical language development. Children with SLI showed significantly worse temporal resolution ability than their peers with typical language development. Such differences cannot be attributed solely to the immediate memory deficit associated with SLI. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Azouz, Hanan Galal; Kozou, Hesham; Khalil, Mona; Abdou, Rania M; Sakr, Mohamed
To study the auditory profile at different levels of the auditory system in children with ASD, to verify whether (central) auditory processing disorder is an essential pathology of the autistic disorder or an associated co-morbidity, and to establish the correlation between CAP findings and language delay in these cases. The study included 30 children with a definite autistic disorder according to DSM-IV-TR criteria and the ADI-R among those attending the outpatient neuropsychiatry clinic of Alexandria University Children's Hospital at El Shatby. Informed consent was obtained from all patients in this part of the study. Confidentiality of the records was maintained. All cases underwent complete history taking and examination; special assessment of language skills and evoked potentials was performed. The results indicate that (central) auditory processing disorder is an essential pathology of the autistic disorder. Autistic children possess a dysfunctioning or immature central auditory nervous system at both the brainstem and cortical levels. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Fosi, Tangunu; Werner, Klaus; Boyd, Stewart G; De Haan, Michelle; Scott, Rod C; Neville, Brian G
To investigate acoustic auditory processing in patients with recent infantile spasms (IS). Patients (n = 22; 12 female; median age 8 months; range 5-11 months) had normal preceding development, brain magnetic resonance imaging (MRI), and neurometabolic testing (West syndrome of unknown cause, uWS). Controls were healthy babies (n = 22; 11 female; median age 6 months; range 3-12 months). Event-related potentials (ERPs) and psychometry (Bayley Scales of Infant Development, Second Edition, BSID-II) were carried out one month after IS remission. Following a repeated pure tone, uWS patients showed less suppression of the N100 at the mid-temporal electrodes (p = 0.006), and a prolonged response latency (p = 0.019). Their novelty P300 amplitude over the mid-temporal electrodes was halved (p = 0.001). The peak of the novelty P300 to environmental broadband sounds emerged later over the left temporal lobe in patients (p = 0.015), the lag correlating with duration of spasms (r = 0.547, p = 0.015). BSID-II scores were lower in patients (p < 0.001), with no correlation to ERP. Complex acoustic information is processed poorly following IS. This would impair language. Treatment did not reverse this phenomenon, but may have limited its severity. The data are most consistent with altered connectivity of the cortical acoustic processing areas induced by IS. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
Simpson, M.I.G.; Barnes, G.R.; Johnson, S.R.; Hillebrand, A.; Singh, K.D.; Green, G.G.R.
Speech contains complex amplitude modulations that have envelopes with multiple temporal cues. The processing of these complex envelopes is not well explained by the classical models of amplitude modulation processing. This may be because the evidence for the models typically comes from the use of…
Buchholz, Jörg; Kerketsos, P
… filterbank was designed to approximate the auditory filter shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward-masking data. The results of the present study demonstrate that a “purely” spectrum-based model approach can successfully describe auditory coloration detection even at high … detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA …]
Iliadou, V; Iakovides, S
Background: Psychoacoustics is a fascinating, developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology, with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers, as a result of external auditory stimuli. Central Auditory Processing Disorders may co-exist with mental disorders and complicate diagnosis and outcome. Design: A MEDLINE search was conducted for papers concerning the association between Central Auditory Processing Disorders and mental disorders. The research focused on the diagnostic methods providing the inter-connection of various mental disorders and central auditory deficits. Measurements and Main Results: The MEDLINE search revealed 564 papers when using the keywords 'auditory deficits' and 'mental disorders'. 79 papers referred specifically to Central Auditory Processing Disorders in connection with mental disorders. 175 papers were related to schizophrenia, 126 to learning disabilities, 29 to Parkinson's disease, 88 to dyslexia and 39 to Alzheimer's disease. Assessment of the central auditory system is carried out through a great variety of tests that fall into two main categories: psychoacoustic and electrophysiologic testing. Different specialties are involved in the diagnosis and management of Central Auditory Processing Disorders as well as the mental disorders that may co-exist with them. As a result, it is essential that they are all aware of the possibilities in diagnostic procedures. Conclusions: Considerable evidence exists that mental disorders may correlate with CAPD, and this correlation could be revealed through psychoacoustics and neuroaudiology. Mental disorders that relate to Central Auditory Processing Disorders are: schizophrenia, attention deficit disorders, Alzheimer's disease, learning disabilities, dyslexia, depression, auditory…
Keller, Warren D; Tillery, Kim L; McFadden, Sandra L
To determine whether children with a nonverbal learning disability (NVLD) have a higher incidence of auditory processing disorder (APD), especially in the tolerance-fading memory type of APD, and what associations could be found between performance on neuropsychological, intellectual, memory, and academic measures and APD. Eighteen children with NVLD ranging in age from 6 to 18 years received a central auditory processing test battery to determine incidence and subtype of APD. Psychological measures for assessment of NVLD included the Wechsler Scales, Wide Range Assessment of Memory and Learning, and Wechsler Individual Achievement Test. Neuropsychological measures included the Category Test, Trails A and B, the Tactual Performance Test, Grooved Pegs, and the Speech Sounds Perception Test. Neuropsychological test scores of the NVLD+APD and NVLD groups were compared using analysis of covariance procedures, with Verbal IQ and Performance IQ as covariates. Sixty-one percent of the children were diagnosed with APD, primarily in the tolerance-fading memory subtype. The group of children with APD and NVLD had significantly lower scores on Verbal IQ, Digit Span, Sentence Memory, Block Design, and Speech Sounds Perception than children without APD. An ancillary finding was that the incidence of attention deficit/hyperactivity disorder was significantly higher in children with NVLD (with and without APD) than in the general population. The results indicate that children with NVLD are at risk for APD and that there are several indicators on neuropsychological assessment suggestive of APD. Collaborative, interdisciplinary evaluation of children with learning disorders is needed in order to provide effective therapeutic interventions.
Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher; Møllegaard Jepsen, Jens Richardt; Durston, Sarah; Cantio, Cathriona; Glenthøj, Birte; Bilenberg, Niels
Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism are highly inconsistent, partly due to small sample sizes in the studies and differences in MMN paradigms. Therefore, in the current study, MMN and P3a amplitude were assessed in a relatively large sample of children with ASD, using a more extensive MMN paradigm and compared with that of typically developing children (TDC). Thirty-five children (aged 8-12 years) with ASD and 38 age and gender matched TDC were assessed with a MMN paradigm with three types of deviants, i.e., frequency, duration and a combination of these two. MMN elicited by duration and frequency-duration deviants was significantly reduced in the ASD group. P3a-amplitude elicited by duration deviants was significantly increased in the ASD group. Reduced MMN in children with ASD suggests that children with ASD may be less responsive to environmentally deviant stimuli at an early (sensory) level. P3a-amplitude was increased in ASD, implying a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Demopoulos, Carly; Brandes-Aitken, Annie N; Desai, Shivani S; Hill, Susanna S; Antovich, Ashley D; Harris, Julia; Marco, Elysa J
The aim of this study was to compare sensory processing in typically developing children (TDC), children with Autism Spectrum Disorder (ASD), and those with sensory processing dysfunction (SPD) in the absence of an ASD. Performance-based measures of auditory and tactile processing were compared between male children ages 8-12 years assigned to an ASD (N=20), SPD (N=15), or TDC group (N=19). Both the SPD and ASD groups were impaired relative to the TDC group on a performance-based measure of tactile processing (right-handed graphesthesia). In contrast, only the ASD group showed significant impairment on an auditory processing index assessing dichotic listening, temporal patterning, and auditory discrimination. Furthermore, this impaired auditory processing was associated with parent-rated communication skills for both the ASD group and the combined study sample. No significant group differences were detected on measures of left-handed graphesthesia, tactile sensitivity, or form discrimination; however, more participants in the SPD group demonstrated a higher tactile detection threshold (60%) compared to the TDC (26.7%) and ASD groups (35%). This study provides support for use of performance-based measures in the assessment of children with ASD and SPD and highlights the need to better understand how sensory processing affects the higher order cognitive abilities associated with ASD, such as verbal and non-verbal communication, regardless of diagnostic classification.
The 'rapid temporal processing' and the 'temporal sampling framework' hypotheses have been proposed to account for the deficits in language and literacy development seen in specific language impairment and dyslexia. This paper reviews these hypotheses and concludes that the proposed causal chains between the presumed auditory processing deficits and the observed behavioural manifestation of the disorders are vague and not well established empirically. Several problems and limitations are identified. Most data concern correlations between distantly related tasks, and there is considerable heterogeneity and variability in performance as well as concerns about reliability and validity. Little attention is paid to the distinction between ostensibly perceptual and metalinguistic tasks or between implicit and explicit modes of performance, yet measures are assumed to be pure indicators of underlying processes or representations. The possibility that diagnostic categories do not refer to causally and behaviourally homogeneous groups needs to be taken seriously, taking into account genetic and neurodevelopmental studies to construct multiple-risk models. To make progress in the field, cognitive models of each task must be specified, including performance domains that are predicted to be deficient versus intact, testing multiple indicators of latent constructs and demonstrating construct reliability and validity.
Moser, Dana; Baker, Julie M; Sanchez, Carmen E; Rorden, Chris; Fridriksson, Julius
Speech processing requires the temporal parsing of syllable order. Individuals suffering from posterior left hemisphere brain injury often exhibit temporal processing deficits as well as language deficits...
Neijenhuis, C.A.M.; Beynon, A.J.; Snik, A.F.M.; Engelen, B.G.M. van; Broek, P. van den
HYPOTHESIS: It is unclear whether Charcot-Marie-Tooth (CMT) disease, type 1A, causes auditory processing disorders. Therefore, auditory processing abilities were investigated in five CMT1A patients with normal hearing. BACKGROUND: Previous studies have failed to separate peripheral from central…
Kamhi, Alan G.
Purpose: To consider whether auditory processing disorder (APD) is truly a distinct clinical entity or whether auditory problems are more appropriately viewed as a processing deficit that may occur with various developmental disorders. Method: Theoretical and clinical factors associated with APD are critically evaluated. Results: There are…
Megino-Elvira, Laura; Martín-Lobo, Pilar; Vergara-Moragues, Esperanza
The authors' aim was to analyze the relationship of eye movements, auditory perception, and phonemic awareness with the reading process. The instruments used were the King-Devick Test (saccade eye movements), the PAF test (auditory perception), the PFC (phonemic awareness), the PROLEC-R (lexical process), the Canals reading speed test, and the…
Cristina F.B. Murphy
Full Text Available Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor years of schooling was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than the auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Caroline Nunes Rocha-Muniz
INTRODUCTION: Mismatch negativity, an electrophysiological measure, evaluates the brain's capacity to discriminate sounds, regardless of attentional and behavioral capacity. Thus, this auditory event-related potential is promising in the study of the neurophysiological basis underlying auditory processing. OBJECTIVE: To investigate complex acoustic signals (speech) encoded in the auditory nervous system of children with specific language impairment and compare them with children with auditory processing disorders and typical development through the mismatch negativity paradigm. METHODS: It was a prospective study. 75 children (6-12 years) participated in this study: 25 children with specific language impairment, 25 with auditory processing disorders, and 25 with typical development. Mismatch negativity was obtained by subtracting the waveforms evoked by the stimuli /ga/ (frequent) and /da/ (rare). Measures of mismatch negativity latency and two amplitude measures were analyzed. RESULTS: It was possible to verify an absence of mismatch negativity in 16% of children with specific language impairment and 24% of children with auditory processing disorders. In the comparative analysis, auditory processing disorders and specific language impairment showed higher latency values and lower amplitude values compared to typical development. CONCLUSION: These data demonstrate changes in the automatic discrimination of crucial acoustic components of speech sounds in children with specific language impairment and auditory processing disorders. This could indicate problems in the physiological processes responsible for ensuring the discrimination of acoustic contrasts at pre-attentional and pre-conscious levels, contributing to poor perception.
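The deviant-minus-standard computation described in this abstract can be sketched in a few lines. This is a minimal illustration, not the authors' analysis pipeline; the function name, the latency window, and the array shapes are assumptions for the sketch.

```python
import numpy as np

def mismatch_negativity(standard_epochs, deviant_epochs, fs, window=(0.1, 0.25)):
    """MMN difference wave from single-trial ERP epochs.

    standard_epochs, deviant_epochs: (n_trials, n_samples) arrays;
    fs: sampling rate in Hz; window: latency range (s) searched for the peak.
    """
    erp_standard = standard_epochs.mean(axis=0)  # averaged response to the frequent stimulus
    erp_deviant = deviant_epochs.mean(axis=0)    # averaged response to the rare stimulus
    diff_wave = erp_deviant - erp_standard       # deviant minus standard

    # MMN peak: most negative point of the difference wave within the window
    i0, i1 = int(window[0] * fs), int(window[1] * fs)
    peak_idx = i0 + int(np.argmin(diff_wave[i0:i1]))
    return diff_wave, peak_idx / fs, diff_wave[peak_idx]
```

Peak latency and amplitude of the returned difference wave correspond to the latency and amplitude measures the study analyzed.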
Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)
Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which did or did not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)
Kale, Sushrut; Heinz, Michael G
The ability of auditory-nerve (AN) fibers to encode modulation frequencies, as characterized by temporal modulation transfer functions (TMTFs), generally shows a low-pass shape with a cut-off frequency that increases with fiber characteristic frequency (CF). Because AN-fiber bandwidth increases with CF, this result has been interpreted to suggest that peripheral filtering has a significant effect on limiting the encoding of higher modulation frequencies. Sensorineural hearing loss (SNHL), which is typically associated with broadened tuning, is thus predicted to increase the range of modulation frequencies encoded; however, perceptual studies have generally not supported this prediction. The present study sought to determine whether the range of modulation frequencies encoded by AN fibers is affected by SNHL, and whether the effects of SNHL on envelope coding are similar at all modulation frequencies within the TMTF passband. Modulation response gain for sinusoidally amplitude modulated (SAM) tones was measured as a function of modulation frequency, with the carrier frequency placed at fiber CF. TMTFs were compared between normal-hearing chinchillas and chinchillas with a noise-induced hearing loss for which AN fibers had significantly broadened tuning. Synchrony and phase responses for individual SAM tone components were quantified to explore a variety of factors that can influence modulation coding. Modulation gain was found to be higher than normal in noise-exposed fibers across the entire range of modulation frequencies encoded by AN fibers. The range of modulation frequencies encoded by noise-exposed AN fibers was not affected by SNHL, as quantified by TMTF 3- and 10-dB cut-off frequencies. These results suggest that physiological factors other than peripheral filtering may have a significant role in determining the range of modulation frequencies encoded in AN fibers. Furthermore, these neural data may help to explain the lack of a consistent association
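The 3- and 10-dB cut-off frequencies used here to quantify TMTFs can be estimated from a sampled gain curve. The sketch below is an illustrative assumption, not the authors' analysis code; it assumes a low-pass shape and interpolates on a log-frequency axis.

```python
import numpy as np

def tmtf_cutoff(mod_freqs, gain_db, drop_db=3.0):
    """Cut-off frequency of a low-pass TMTF.

    mod_freqs: modulation frequencies (Hz), ascending; gain_db: modulation
    gain (dB) at each frequency; drop_db: 3.0 or 10.0 for the 3- and 10-dB
    cut-offs. Returns the frequency where gain first falls drop_db below its
    maximum, interpolating linearly on a log-frequency axis.
    """
    target = np.max(gain_db) - drop_db
    below = np.nonzero(gain_db < target)[0]
    if below.size == 0:
        return None                    # gain never falls below the criterion
    j = int(below[0])                  # first point below the criterion
    if j == 0:
        return float(mod_freqs[0])     # already below criterion at the lowest frequency
    i = j - 1                          # last point at or above the criterion
    frac = (gain_db[i] - target) / (gain_db[i] - gain_db[j])
    logf = np.log10(mod_freqs[i]) + frac * (np.log10(mod_freqs[j]) - np.log10(mod_freqs[i]))
    return float(10.0 ** logf)
```

Comparing such cut-offs between normal-hearing and noise-exposed fibers is the comparison the abstract reports.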
Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex during visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between the responses evoked by combined auditory and visual stimulation and the sum of the responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.
Biagianti, Bruno; Fisher, Melissa; Neilands, Torsten B; Loewy, Rachel; Vinogradov, Sophia
Individuals with schizophrenia who engage in targeted cognitive training (TCT) of the auditory system show generalized cognitive improvements. The high degree of variability in cognitive gains may be due to individual differences in the level of engagement of the underlying neural system target. 131 individuals with schizophrenia underwent 40 hours of TCT. We identified target engagement of auditory system processing efficiency by modeling subject-specific trajectories of auditory processing speed (APS) over time. Lowess analysis, mixed models repeated measures analysis, and latent growth curve modeling were used to examine whether APS trajectories were moderated by age and illness duration, and mediated improvements in cognitive outcome measures. We observed significant improvements in APS from baseline to 20 hours of training (initial change), followed by a flat APS trajectory (plateau) at subsequent time-points. Participants showed interindividual variability in the steepness of the initial APS change and in the APS plateau achieved and sustained between 20 and 40 hours. We found that participants who achieved the fastest APS plateau showed the greatest transfer effects to untrained cognitive domains. There is a significant association between an individual's ability to generate and sustain auditory processing efficiency and their degree of cognitive improvement after TCT, independent of baseline neurocognition. APS plateau may therefore represent a behavioral measure of target engagement mediating treatment response. Future studies should examine the optimal plateau of auditory processing efficiency required to induce significant cognitive improvements, in the context of interindividual differences in neural plasticity and sensory system efficiency that characterize schizophrenia. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.
Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.
Murphy, Cristina Ferraz Borges; Pontes, Fernanda; Stivanin, Luciene; Picoli, Erica; Schochat, Eliane
Children and adolescents who live in situations of social vulnerability present a series of health problems. Nonetheless, affirmations that sensory and cognitive abnormalities are present are a matter of controversy. The aim of this study was to investigate aspects of auditory processing, by applying the brainstem auditory evoked potential (BAEP) and behavioral auditory processing tests to children living on the streets, and comparing them with a control group. Cross-sectional study in the Laboratory of Auditory Processing, School of Medicine, Universidade de São Paulo. The auditory processing tests were applied to a group of 27 individuals, subdivided into 11 children (7 to 10 years old) and 16 adolescents (11 to 16 years old), of both sexes, in situations of social vulnerability, compared with an age-matched control group of 10 children and 11 adolescents without complaints. The BAEP test was also applied to investigate the integrity of the auditory pathway. For both children and adolescents, there were significant differences between the study and control groups in most of the tests applied, with significantly worse performance in the study group, except in the pediatric speech intelligibility test. Only one child had an abnormal result in the BAEP test. The results showed that the study group (children and adolescents) presented poor performance in the behavioral auditory processing tests, despite their unaltered auditory brainstem pathways, as shown by their normal results in the BAEP test.
Rutkowska, Joanna; Łobaczuk-Sitnik, Anna; Kosztyła-Hojna, Bożena
Auditory processing disorders are an increasingly recognized hearing pathology. Auditory Processing Disorder (APD) is defined as difficulty in using auditory information to communicate and learn in the presence of normal peripheral hearing. It may manifest as a problem with understanding speech in noise and with the perception of distorted speech. APD may accompany articulation disorders, language problems and difficulties in reading and writing. The diagnosis of auditory processing disorders is difficult, primarily owing to the lack of common testing procedures and of precise criteria for distinguishing normal from pathological performance. The Brain-Boy Universal Professional (BUP) is one diagnostic tool; it enables assessment of the higher auditory functions. The aim of the study was a preliminary assessment of hearing difficulties that may suggest the occurrence of auditory processing disorders in children. A questionnaire on hearing difficulties and the BUP were used. The study included 20 participants, second-grade students of an elementary school. The examination of the basic central auditory functions was carried out with the BUP. The parents and teacher completed the questionnaire to evaluate hearing problems. The studies carried out indicate that 40% of the schoolchildren have hearing difficulties. The high percentage of deficits in auditory functions was confirmed both by the results from the medical device and by the teacher questionnaire. On the basis of the studies conducted, it may be established that the Warnke Method can serve as a preliminary assessment of hearing difficulties that may suggest the occurrence of auditory processing disorders in children.
Lanzetta-Valdo, Bianca Pinheiro; Oliveira, Giselle Alves de; Ferreira, Jane Tagarro Correa; Palacios, Ester Miyuki Nakamura
Introduction Children with Attention Deficit Hyperactivity Disorder (ADHD) can present Auditory Processing (AP) Disorder. Objective The study examined AP in ADHD children compared with non-ADHD children, and before and after 3 and 6 months of methylphenidate (MPH) treatment in the ADHD children. Methods Drug-naive children diagnosed with ADHD combined subtype, aged between 7 and 11 years, coming from a public or private outpatient service or a public or private school, and age- and gender-matched non-ADHD children, participated in an open, non-randomized study from February 2013 to December 2013. They were submitted to a behavioral battery of AP tests comprising Speech with White Noise (SN), Dichotic Digits (DD), and Pitch Pattern Sequence (PPS) and were compared with non-ADHD children. They were followed for 3 and 6 months of MPH treatment (0.5 mg/kg/day). Results ADHD children presented a larger number of errors in DD (p < 0.01), and fewer correct responses in the PPS (p < 0.0001) and SN (p < 0.05) tests when compared with non-ADHD children. The treatment with MPH, especially over 6 months, significantly decreased the mean errors in the DD (p < 0.01) and increased the correct responses in the PPS (p < 0.001) and SN (p < 0.01) tests when compared with the performance before MPH treatment. Conclusions ADHD children show inefficient AP in the selected behavioral auditory battery, suggesting impairment in auditory closure, binaural integration, and temporal ordering. Treatment with MPH gradually improved these deficiencies and completely reversed them, reaching a performance similar to non-ADHD children at 6 months of treatment.
Favrot, Sylvain Emmanuel; Buchholz, Jörg
the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid “artifacts” such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29 loudspeaker setup using both objective and subjective measurement techniques....
Chung, Wei-Lun; Jarmulowicz, Linda; Bidelman, Gavin M.
This study examined language-specific links among auditory processing, linguistic prosody awareness, and Mandarin (L1) and English (L2) word reading in 61 Mandarin-speaking, English-learning children. Three auditory discrimination abilities were measured: pitch contour, pitch interval, and rise time (rate of intensity change at tone onset).…
Varnhagen, Connie K.; And Others
Auditory and visual memory span were examined with 13 Down Syndrome and 15 other trainable mentally retarded young adults. Although all subjects demonstrated relatively poor auditory memory span, Down Syndrome subjects were especially poor at long-term memory access for visual stimulus identification and short-term storage and processing of…
Stollman, M.H.P.; Velzen, E.C. van; Simkens, H.M.F.; Snik, A.F.M.; Broek, P. van den
The development of auditory processing in children was investigated in a longitudinal study. A group of 20 children with normal cognitive and language development underwent several auditory tests at the ages of 6, 7, 8, 10 and 12 years. At the age of 10 years, three subjects were lost to follow-up,
Lamy, Dominique; Mudrik, Liad; Deouell, Leon Y
Whether information perceived without awareness can affect overt performance, and whether such effects can cross sensory modalities, remains a matter of debate. Whereas influence of unconscious visual information on auditory perception has been documented, the reverse influence has not been reported. In addition, previous reports of unconscious cross-modal priming relied on procedures in which contamination of conscious processes could not be ruled out. We present the first report of unconscious cross-modal priming when the unaware prime is auditory and the test stimulus is visual. We used the process-dissociation procedure [Debner, J. A., & Jacoby, L. L. (1994). Unconscious perception: Attention, awareness and control. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 304-317] which allowed us to assess the separate contributions of conscious and unconscious perception of a degraded prime (either seen or heard) to performance on a visual fragment-completion task. Unconscious cross-modal priming (auditory prime, visual fragment) was significant and of a magnitude similar to that of unconscious within-modality priming (visual prime, visual fragment). We conclude that cross-modal integration, at least between visual and auditory information, is more symmetrical than previously shown, and does not require conscious mediation.
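The process-dissociation procedure cited here (Debner & Jacoby, 1994) separates conscious and unconscious contributions with two standard equations. A minimal sketch of those estimates follows; the function name and the guard for the degenerate case are illustrative assumptions, but the equations are the procedure's standard form.

```python
def process_dissociation(p_inclusion, p_exclusion):
    """Jacoby's process-dissociation estimates of conscious (C) and
    unconscious (U) contributions, from completion rates in the inclusion
    and exclusion conditions:

        inclusion = C + (1 - C) * U
        exclusion = (1 - C) * U

    hence C = inclusion - exclusion and U = exclusion / (1 - C).
    """
    c = p_inclusion - p_exclusion
    u = p_exclusion / (1.0 - c) if c < 1.0 else 0.0  # guard the degenerate C = 1 case
    return c, u
```

For example, completing fragments with the prime 70% of the time under inclusion instructions but 20% under exclusion instructions yields C = 0.5 and U = 0.4.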
Barker, Matthew D; Kuruvilla-Mathew, Abin; Purdy, Suzanne C
The relationship between auditory processing (AP) and reading is thought to be significant; however, our understanding of this relationship is somewhat limited. Previous studies have investigated the relation between certain electrophysiological and behavioral measures of AP and reading abilities in children. This study attempts to further understand that relation. Differences in AP between good and poor readers were investigated using electrophysiological and behavioral measures. Thirty-two children (15 female) aged 9-11 yr were placed in either a good reader group or a poor reader group, based on the scores of a nationally normed reading test in New Zealand. Children were initially tested using an automated behavioral measuring system that runs on a tablet computer, known as "Feather Squadron." Following the administration of Feather Squadron, cortical auditory-evoked potentials (CAEPs) were recorded using a speech stimulus (/m/) with the HEARLab® Cortical Auditory Evoked Potential Analyzer. The children were evaluated on eight subsections of Feather Squadron, and CAEP waveform peaks were visually identified and averaged. Separate Kruskal-Wallis analyses were performed for the behavioral and electrophysiological variables, with group (good versus poor readers) serving as the between-group independent variable and scores from the Feather Squadron AP tasks as well as CAEP latencies and amplitudes as dependent variables. After the children's AP status was determined, the entire group was further divided into three groups: typically developing, auditory processing disorder plus reading difficulty (APD + RD), and RD only. Statistical analyses were repeated for these subgroups. Poorer readers showed significantly worse scores than the good readers for the Tonal Pattern 1, Tonal Pattern 2, and Word Double Dichotic Right tasks. CAEP differences observed across groups indicated comorbid effects of RD and AP difficulties. N2 amplitude was significantly smaller for
Kamhi, Alan G.; Beasley, Daniel S.
The article demonstrates how professional and theoretical perspectives (including psycholinguistics, behaviorist, and information processing perspectives) significantly influence the manner in which central auditory processing is viewed, assessed, and remediated. (Author/CL)
The full-fledged processing of temporal information presents specific challenges. These difficulties largely stem from the fact that the temporal meaning conveyed by grammatical means interacts with many extra-linguistic factors (world knowledge, causality, calendar systems, reasoning). This article proposes a novel approach to this problem, based on a hybrid strategy that explores the complementarity of symbolic and probabilistic methods. A specialized temporal extraction system is combined with a deep linguistic processing grammar. The temporal extraction system extracts eventualities, times and dates mentioned in text, and also temporal relations between them, in line with the tasks of the recent TempEval challenges; it uses machine learning techniques to draw from different sources of information (grammatical and extra-grammatical), even if it is not explicitly known how these combine to produce the final temporal meaning being expressed. In turn, the deep computational grammar delivers richer truth-conditional meaning representations of input sentences, which include a principled representation of temporal information, on which higher level tasks, including reasoning, can be based. These deep semantic representations are extended and improved according to the output of the aforementioned temporal extraction module. The prototype implemented shows performance results that increase the quality of the temporal meaning representations and are better than the performance of each of the two components in isolation.
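As a toy illustration of the symbolic half of such a hybrid strategy, the sketch below extracts explicit ISO-format date mentions by rule and assigns TempEval-style relation labels between them. All names and the restriction to ISO dates are assumptions for the sketch; a real system would add the probabilistic component and the grammar integration the abstract describes.

```python
import re
from datetime import date

# Symbolic rule: explicit ISO-format dates (YYYY-MM-DD)
DATE_RE = re.compile(r"\b(\d{4})-(\d{2})-(\d{2})\b")

def extract_times(text):
    """Symbolic component: extract explicit date mentions from text."""
    return [date(int(y), int(m), int(d)) for y, m, d in DATE_RE.findall(text)]

def temporal_relation(t1, t2):
    """Label a TempEval-style relation between two extracted times."""
    if t1 < t2:
        return "BEFORE"
    if t1 > t2:
        return "AFTER"
    return "SIMULTANEOUS"
```

Relations between times not anchored by explicit dates are exactly where the machine-learned component takes over in the hybrid design.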
François, Clément; Schön, Daniele
There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.
Josue G. Yague
The basal forebrain (BF) has long been implicated in attention, learning and memory, and recent studies have established a causal relationship between artificial BF activation and arousal. However, neural ensemble dynamics in the BF still remain unclear. Here, recording neural population activity in the BF and comparing it with simultaneously recorded cortical population activity under both anesthetized and unanesthetized conditions, we investigate the difference in the structure of spontaneous population activity between the BF and the auditory cortex (AC) in mice. The AC neuronal population shows a skewed spike rate distribution, a higher proportion of short (≤80 ms) inter-spike intervals (ISIs) and a rich repertoire of rhythmic firing across frequencies. Although the distribution of spontaneous firing rates in the BF is also skewed, a proportion of short ISIs can be explained by a Poisson model at short time scales (≤20 ms), and spike count correlations are lower compared to AC cells, with optogenetically identified cholinergic cell pairs showing exceptionally higher correlations. Furthermore, a smaller fraction of BF neurons shows spike-field entrainment across frequencies: a subset of BF neurons fire rhythmically at slow (≤6 Hz) frequencies, with varied phase preferences to ongoing field potentials, in contrast to a consistent phase preference of AC populations. Firing of these slow rhythmic BF cells is correlated to a greater degree than that of other rhythmic BF cell pairs. Overall, the fundamental difference in the structure of population activity between the AC and BF is their temporal coordination, in particular their operational timescales. These results suggest that BF neurons slowly modulate downstream populations whereas cortical circuits transmit signals on multiple timescales. Thus, the characterization of the neural ensemble dynamics in the BF provides further insight into the neural mechanisms by which brain states are regulated.
Ahveninen, Jyrki; Huang, Samantha; Belliveau, John W.; Chang, Wei-Tang; Hämäläinen, Matti
In everyday listening situations, we need to constantly switch between alternative sound sources and engage attention according to cues that match our goals and expectations. The exact neuronal bases of these processes are poorly understood. We investigated oscillatory brain networks controlling auditory attention using cortically constrained fMRI-weighted magnetoencephalography/electroencephalography (MEG/EEG) source estimates. During consecutive trials, subjects were instructed to shift attention based on a cue, presented in the ear where a target was likely to follow. To promote audiospatial attention effects, the targets were embedded in streams of dichotically presented standard tones. Occasionally, an unexpected novel sound occurred opposite to the cued ear, to trigger involuntary orienting. According to our cortical power correlation analyses, increased frontoparietal/temporal 30–100 Hz gamma activity at 200–1400 ms after cued orienting predicted fast and accurate discrimination of subsequent targets. This sustained correlation effect, possibly reflecting voluntary engagement of attention after the initial cue-driven orienting, spread from the temporoparietal junction, anterior insula, and inferior frontal (IFC) cortices to the right frontal eye fields. Engagement of attention to one ear resulted in a significantly stronger increase of 7.5–15 Hz alpha in the ipsilateral than contralateral parieto-occipital cortices 200–600 ms after the cue onset, possibly reflecting crossmodal modulation of the dorsal visual pathway during audiospatial attention. Comparisons of cortical power patterns also revealed significant increases of sustained right medial frontal cortex theta power, right dorsolateral prefrontal cortex and anterior insula/IFC beta power, and medial parietal cortex and posterior cingulate cortex gamma activity after cued vs. novelty-triggered orienting (600–1400 ms). Our results reveal sustained oscillatory patterns associated with voluntary
Behroozmand, Roozbeh; Ibrahim, Nadine; Korzyukov, Oleg; Robin, Donald A; Larson, Charles R
The answer to the question of how the brain incorporates sensory feedback and links it with motor function to achieve goal-directed movement during vocalization remains unclear. We investigated the mechanisms of voice pitch motor control by examining the spectro-temporal dynamics of EEG signals when non-musicians (NM), relative pitch (RP), and absolute pitch (AP) musicians maintained vocalizations of a vowel sound and received randomized ± 100 cents pitch-shift stimuli in their auditory feedback. We identified a phase-synchronized (evoked) fronto-central activation within the theta band (5-8 Hz) that temporally overlapped with compensatory vocal responses to pitch-shifted auditory feedback and was significantly stronger in RP and AP musicians compared with non-musicians. A second component involved a non-phase-synchronized (induced) frontal activation within the delta band (1-4 Hz) that emerged at approximately 1 s after the stimulus onset. The delta activation was significantly stronger in the NM compared with RP and AP groups and correlated with the pitch rebound error (PRE), indicating the degree to which subjects failed to re-adjust their voice pitch to baseline after the stimulus offset. We propose that the evoked theta is a neurophysiological marker of enhanced pitch processing in musicians and reflects mechanisms by which humans incorporate auditory feedback to control their voice pitch. We also suggest that the delta activation reflects adaptive neural processes by which vocal production errors are monitored and used to update the state of sensory-motor networks for driving subsequent vocal behaviors. This notion is corroborated by our findings showing that larger PREs were associated with greater delta band activity in the NM compared with RP and AP groups. These findings provide new insights into the neural mechanisms of auditory feedback processing for vocal pitch motor control.
Hutsler, Jeffrey J
Functional lateralization of language within the cerebral cortex has long driven the search for structural asymmetries that might underlie language asymmetries. Most examinations of structural asymmetry have focused upon the gross size and shape of cortical regions in and around language areas. In the last 20 years several labs have begun to document microanatomical asymmetries in the structure of language-associated cortical regions. Such microanatomic results provide useful constraints and clues to our understanding of the biological bases of language specialization in the cortex. In a previous study we documented asymmetries in the size of a specific class of pyramidal cells in the superficial cortical layers. The present work uses a nonspecific stain for cell bodies to demonstrate the presence of an asymmetry in layer III pyramidal cell sizes within auditory, secondary auditory and language-associated regions of the temporal lobes. Specifically, the left hemisphere contains a greater number of the largest pyramidal cells, those that are thought to be the origin of long-range cortico-cortical connections. These results are discussed in the context of cortical columns and how such an asymmetry might alter cortical processing. These findings, in conjunction with other asymmetries in cortical organization that have been documented within several labs, clearly demonstrate that the columnar and connective structure of auditory and language cortex in the left hemisphere is distinct from homotopic regions in the contralateral hemisphere.
Auditory processing disorder (APD) affects about 2 to 5% of children. However, the nature of this disorder is poorly understood. Children with APD typically have difficulties in complex listening situations. One mechanism thought to aid in listening-in-noise is medial olivocochlear (MOC) inhibition. The purpose of this review was to critically analyze the published data on MOC inhibition in children with APD to determine whether the MOC efferents are involved in these individuals. The otoacoustic emission (OAE) methods used to assay the MOC reflex were examined in the context of the current understanding of OAE generation mechanisms. Relevant literature suggests critical differences in the study population and OAE methods. Variables currently known to influence MOC reflex measurements, for example, middle-ear muscle reflexes or OAE signal-to-noise ratio, were not controlled by most studies. The use of potentially weaker OAE methods and the remarkable heterogeneity across studies do not allow for a definite conclusion as to whether or not the MOC reflex is altered in children with APD. Further carefully designed studies are needed. Knowledge of efferent functioning in children with APD would be mechanistically and clinically beneficial.
P300 auditory event-related potentials (P3AERPs) were recorded in nine school-age children with auditory processing disorders and nine age- and gender-matched controls in response to tone-burst stimuli presented at varying rates (1/second or 3/second) under varying levels of competing noise (0 dB, 40 dB, or 60 dB SPL). Neural network modeling results indicated that speed of information processing and task-related demands significantly influenced P3AERP latency in children with auditory processing disorders. Competing noise and rapid stimulus rates influenced P3AERP amplitude in both groups.
Costa-Faidella, Jordi; Baldeweg, Torsten; Grimm, Sabine; Escera, Carles
Neural activity in the auditory system decreases with repeated stimulation, matching stimulus probability in multiple timescales. This phenomenon, known as stimulus-specific adaptation, is interpreted as a neural mechanism of regularity encoding aiding auditory object formation. However, despite the overwhelming literature covering recordings from single-cell to scalp auditory-evoked potential (AEP), stimulation timing has received little interest. Here we investigated whether timing predictability enhances the experience-dependent modulation of neural activity associated with stimulus probability encoding. We used human electrophysiological recordings in healthy participants who were exposed to passive listening of sound sequences. Pure tones of different frequencies were delivered in successive trains of a variable number of repetitions, enabling the study of sequential repetition effects in the AEP. In the predictable timing condition, tones were delivered with isochronous interstimulus intervals; in the unpredictable timing condition, interstimulus intervals varied randomly. Our results show that unpredictable stimulus timing abolishes the early part of the repetition positivity, an AEP indexing auditory sensory memory trace formation, while leaving the later part (beyond ≈200 ms) unaffected. This suggests that timing predictability aids the propagation of repetition effects upstream along the auditory pathway, most likely from association auditory cortex (including the planum temporale) toward primary auditory cortex (Heschl's gyrus) and beyond, as judged by the timing of AEP latencies. This outcome calls for attention to stimulation timing in future experiments regarding sensory memory trace formation in AEP measures and stimulus probability encoding in animal models.
Engineer, C. T.; Centanni, T. M.; Im, K.W.; Borland, M.S.; Moreno, N.A.; Carraway, R. S.; Wilson, L. G.; Kilgard, M. P.
Although individuals with autism are known to have significant communication problems, the cellular mechanisms responsible for impaired communication are poorly understood. Valproic acid (VPA) is an anticonvulsant that is a known risk factor for autism in prenatally exposed children. Prenatal VPA exposure in rats causes numerous neural and behavioral abnormalities that mimic autism. We predicted that VPA exposure may lead to auditory processing impairments which may contribute to the deficits...
Wit, E. de; Dijk, P. van; Hanekamp, S.; Visser-Bochane, M.I.; Steenbergen, B.; Schans, C.P. van der; Luinge, M.R.
Objectives: Children diagnosed with auditory processing disorders (APD) experience difficulties in auditory functioning and with memory, attention, language, and reading tasks. However, it is not clear whether the behavioral characteristics of these children are distinctive from the behavioral …
James W Lewis
Whether viewed or heard, an object in action can be segmented from a background scene based on a number of different sensory cues. In the visual system, salient low-level attributes of an image are processed along parallel hierarchies, and involve intermediate stages, such as the lateral occipital cortices, wherein gross-level object form features are extracted prior to stages that show object specificity (e.g. for faces, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, a distinct acoustic event or auditory object can also be readily extracted from a background acoustic scene. However, it remains unclear whether cortical processing strategies used by the auditory system similarly extract gross-level aspects of acoustic object form that may be inherent to many real-world sounds. Examining mechanical and environmental action sounds, representing two distinct categories of non-biological and non-vocalization sounds, we had participants assess the degree to which each sound was perceived as a distinct object versus an acoustic scene. Using two functional magnetic resonance imaging (fMRI) task paradigms, we revealed bilateral foci along the superior temporal gyri (STG) showing sensitivity to the object-ness ratings of action sounds, independent of the category of sound and independent of task demands. Moreover, for both categories of sounds these regions also showed parametric sensitivity to spectral structure variations—a measure of change in entropy in the acoustic signals over time (acoustic form)—while only the environmental sounds showed parametric sensitivity to mean entropy measures. Thus, similar to the visual system, the auditory system appears to include intermediate feature extraction stages that are sensitive to the acoustic form of action sounds, and may serve as a stage that begins to dissociate different categories of real-world auditory objects.
Liu, Xiaolin; Lauer, Kathryn K; Ward, Barney D; Rao, Stephen M; Li, Shi-Jiang; Hudetz, Anthony G
Current theories suggest that disrupting cortical information integration may account for the mechanism of general anesthesia in suppressing consciousness. Human cognitive operations take place in hierarchically structured neural organizations in the brain. The process of low-order neural representation of sensory stimuli becoming integrated in high-order cortices is also known as cognitive binding. Combining neuroimaging, cognitive neuroscience, and anesthetic manipulation, we examined how cognitive networks involved in auditory verbal memory are maintained in wakefulness, disrupted in propofol-induced deep sedation, and re-established in recovery. Inspired by the notion of cognitive binding, a functional magnetic resonance imaging-guided connectivity analysis was utilized to assess the integrity of functional interactions within and between different levels of the task-defined brain regions. Task-related responses persisted in the primary auditory cortex (PAC), but vanished in the inferior frontal gyrus (IFG) and premotor areas in deep sedation. For connectivity analysis, seed regions representing sensory and high-order processing of the memory task were identified in the PAC and IFG. Propofol disrupted connections from the PAC seed to the frontal regions and thalamus, but not the connections from the IFG seed to a set of widely distributed brain regions in the temporal, frontal, and parietal lobes (with the exception of the PAC). These latter regions have been implicated in mediating verbal comprehension and memory. These results suggest that propofol disrupts cognition by blocking the projection of sensory information to high-order processing networks and thus preventing information integration. Such findings contribute to our understanding of anesthetic mechanisms as related to information and integration in the brain. Copyright © 2011 Wiley Periodicals, Inc.
Kunchulia, Marina; Pilz, Karin S; Herzog, Michael H
Alcohol affects vision. However, the influence of alcohol on visual processing is largely unknown. Here, we investigated the effects of alcohol on visual spatiotemporal processing. We employed a visual paradigm, the "shine-through" backward masking paradigm, in which a vernier is either presented alone or followed by a variety of masks. We investigated performance for women at blood alcohol levels of 0 mg/kg, 400 mg/kg, and 600 mg/kg and for men at 0 mg/kg, 400 mg/kg, and 800 mg/kg. When the vernier was presented alone, vernier offset discrimination was not affected by alcohol. When the vernier was followed by a mask, stimulus onset asynchronies (SOAs) between target and mask were significantly longer after alcohol intake. However, as a second experiment showed, spatial and temporal processing per se were not impaired by alcohol. In addition, spatial processing was not affected by moderate alcohol consumption. Hence, moderate consumption of alcohol does not affect visual processing per se. We propose that the longer SOAs after alcohol intake are related to changes in mechanisms of target stabilization rather than changes in spatial and temporal sensitivity as has been previously suggested. Copyright © 2012 Elsevier Ltd. All rights reserved.
Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.
The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while … presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple …
Ress, Norma S.
Reviewed is research which has investigated failure in auditory processing as a cause of language and learning disorders (including defective articulation, aphasia, dyslexia, and specific learning disability) in children and adults. (Author/LS)
Kihara, Michael; Hogan, Alexandra M.; Newton, Charles R.; Garrashi, Harrun H.; Neville, Brian R.; de Haan, Michelle
Objective: The aim of this study was to describe the normative development of the electrophysiological response to auditory and visual novelty in children living in rural Kenya. Methods: We examined event-related potentials (ERPs) elicited by novel auditory and visual stimuli in 178 normally-developing children aged 4–12 years (86 boys, mean 6.7 years, SD 1.8 years, and 92 girls, mean 6.6 years, SD 1.5 years) who were living in rural Kenya. Results: The latency of early components (auditory P1 and visual N170) decreased with age, and their amplitudes also tended to decrease with age. The changes in longer-latency components (auditory N2, P3a and visual Nc, P3a) were more modality-specific; the N2 amplitude to novel stimuli decreased with age, and the auditory P3a increased in both latency and amplitude with age. The Nc amplitude decreased with age while visual P3a amplitude tended to increase, though not linearly. Conclusions: The changes in the timing and magnitude of early-latency ERPs likely reflect brain maturational processes. The age-related changes to auditory stimuli generally occurred later than those to visual stimuli, suggesting that visual processing matures faster than auditory processing. Significance: ERPs may be used to assess children’s cognitive development in rural areas of Africa. PMID:20080442
Stollman, Martin Hubertus Petrus
In this thesis we tested the hypotheses that the auditory system of children continues to mature until at least the age of 12 years and that the development of auditory processing in hearing-impaired and language-impaired children is often delayed or even genuinely disturbed. Data from a …
Porges, Stephen W; Macellaio, Matthew; Stanfill, Shannon D; McCue, Kimberly; Lewis, Gregory F; Harden, Emily R; Handelman, Mika; Denver, John; Bazhenova, Olga V; Heilman, Keri J
The current study evaluated processes underlying two common symptoms (i.e., state regulation problems and deficits in auditory processing) associated with a diagnosis of autism spectrum disorders. Although these symptoms have been treated in the literature as unrelated, when informed by the Polyvagal Theory, these symptoms may be viewed as the predictable consequences of depressed neural regulation of an integrated social engagement system, in which there is down regulation of neural influences to the heart (i.e., via the vagus) and to the middle ear muscles (i.e., via the facial and trigeminal cranial nerves). Respiratory sinus arrhythmia (RSA) and heart period were monitored to evaluate state regulation during a baseline and two auditory processing tasks (i.e., the SCAN tests for Filtered Words and Competing Words), which were used to evaluate auditory processing performance. Children with a diagnosis of autism spectrum disorders (ASD) were contrasted with age-matched typically developing children. The current study identified three features that distinguished the ASD group from a group of typically developing children: 1) baseline RSA, 2) direction of RSA reactivity, and 3) auditory processing performance. In the ASD group, the pattern of change in RSA during the attention-demanding SCAN tests moderated the relation between performance on the Competing Words test and IQ. In addition, in a subset of ASD participants, auditory processing performance improved and RSA increased following an intervention designed to improve auditory processing. Copyright © 2012 Elsevier B.V. All rights reserved.
Tatiane Faria Barrozo
INTRODUCTION: Considering the importance of auditory information for the acquisition and organization of phonological rules, the assessment of (central) auditory processing contributes to both the diagnosis and targeting of speech therapy in children with speech sound disorders. OBJECTIVE: To study phonological measures and (central) auditory processing of children with speech sound disorder. METHODS: Clinical and experimental study, with 21 subjects with speech sound disorder aged between 7.0 and 9.11 years, divided into two groups according to the presence of (central) auditory processing disorder. The assessment comprised tests of phonology, speech inconsistency, and metalinguistic abilities. RESULTS: The group with (central) auditory processing disorder demonstrated greater severity of speech sound disorder. The cutoff value obtained for the process density index was the one that best characterized the occurrence of phonological processes for children above 7 years of age. CONCLUSION: The comparison of the tests evaluated between the two groups showed differences in some phonological and metalinguistic abilities. Children with an index value above 0.54 demonstrated strong tendencies towards presenting a (central) auditory processing disorder, and this measure was effective in indicating the need for evaluation in children with speech sound disorder.
Liberalesso, Paulo Breno Noronha; D'Andrea, Karlin Fabianne Klagenberg; Cordeiro, Mara L; Zeigelboim, Bianca Simone; Marques, Jair Mendes; Jurkiewicz, Ari Leon
Sleep deprivation is extremely common in contemporary society, and is considered to be a frequent cause of disturbances in behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse information. In the present study, thirty healthy adult volunteers (17 females and 13 males, aged 30.75±7.14 years) were subjected to a pure tone audiometry test, a speech recognition threshold test, a speech recognition task, the Staggered Spondaic Word Test (SSWT), and the Random Gap Detection Test (RGDT). Baseline (BSL) performance was compared to performance after 24 hours of being sleep deprived (24hSD) using the Student's t test. Mean RGDT score was elevated in the 24hSD condition (8.0±2.9 ms) relative to the BSL condition for the whole cohort (6.4±2.8 ms; p=0.0005), for males (p=0.0066), and for females (p=0.0208). Sleep deprivation reduced SSWT scores for the whole cohort in both ears (right: BSL, 98.4%±1.8% vs. SD, 94.2%±6.3%, p=0.0005; left: BSL, 96.7%±3.1% vs. SD, 92.1%±6.1%, p…). Effects were evident within both gender subgroups (right: males, p=0.0080; females, p=0.0143; left: males, p=0.0076; females, p=0.0010). Sleep deprivation impairs RGDT and SSWT performance. These findings confirm that sleep deprivation has central effects that may impair performance in other areas of life.
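The baseline vs. sleep-deprived comparisons above are within-subject Student's t tests. As a minimal sketch of that computation (the listener thresholds below are invented for illustration, not the study's data), a paired t statistic reduces to a one-sample t on the difference scores:

```python
import math

def paired_t(before, after):
    """Paired Student's t statistic: a one-sample t on the per-subject differences."""
    diffs = [a - b for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)  # standard error of the mean difference

# Hypothetical gap-detection thresholds (ms) for 5 listeners, baseline vs. sleep-deprived
baseline = [6.0, 5.5, 7.0, 6.5, 7.5]
deprived = [8.0, 7.0, 8.5, 8.0, 9.0]
t = paired_t(baseline, deprived)  # positive t: thresholds rose after deprivation
```

In practice one would use a library routine (e.g. SciPy's paired-sample t test) to obtain the p-value as well; the sketch only shows where the statistic comes from.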
One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS) at the output of the inner-ear (cochlear) filters. The purpose of this work was to investigate these aspects in detail. One chapter studies relations between frequency selectivity, TFS processing, and speech reception in listeners with normal and impaired hearing, using behavioral listening experiments. While …
Vercammen, Ans; Knegtering, Henderikus; Bruggeman, Richard; Aleman, André
One of the most influential cognitive models of auditory verbal hallucinations (AVH) suggests that a failure to adequately monitor the production of one's own inner speech leads to verbal thought being misidentified as an alien voice. However, it is unclear whether this theory can explain the phenomenological complexity of AVH. We aimed to assess whether subjective perceptual and experiential characteristics may be linked to neural activation in the inner speech processing network. Twenty-two patients with schizophrenia and AVH underwent a 3-T functional magnetic resonance imaging scan, while performing a metrical stress evaluation task, which has been shown to activate both inner speech production and perception regions. Regions of interest (ROIs) comprising the putative inner speech network were defined using the Anatomical Automatic Labeling system. Correlations were calculated between scores on the "loudness" and "reality" subscales of the Auditory Hallucination Rating Scale (AHRS) and activation in these ROIs. Second, the AHRS subscales, and general AVH severity, indexed by the Positive and Negative Syndrome Scale, were correlated with a language lateralization index. Louder AVH were associated with reduced task-related activity in bilateral angular gyrus, anterior cingulate gyrus, left inferior frontal gyrus, left insula, and left temporal cortex. This could potentially be due to a competition for shared neural resources. Reality on the other hand was found to be associated with reduced language lateralization. Strong activation of the inner speech processing network may contribute to the subjective loudness of AVH. However, a relatively increased contribution from right hemisphere language areas may be responsible for the more complex experiential characteristics, such as the nonself source or how real AVH are.
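The abstract does not spell out how its language lateralization index is computed, but such indices are conventionally defined as (L − R)/(L + R) over left- and right-hemisphere activation measures, giving +1 for fully left-lateralized and −1 for fully right-lateralized language. A minimal sketch under that assumption (the ROI values are invented):

```python
def laterality_index(left_activation, right_activation):
    """Conventional laterality index: (L - R) / (L + R).

    The inputs could be suprathreshold voxel counts or summed activation in
    homologous left/right language ROIs; the paper's exact definition is not
    given in the abstract, so this is the generic form.
    """
    total = left_activation + right_activation
    if total == 0:
        return 0.0  # no activation on either side: undefined, report neutral
    return (left_activation - right_activation) / total

li = laterality_index(320, 180)  # e.g. voxel counts in left vs. right ROIs
```

Under this convention, "reduced lateralization" in the abstract corresponds to index values moving from +1 toward 0, i.e. a relatively larger right-hemisphere contribution.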
Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E
The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex contributes also to speech processing. For example, stimulation of the motor lip representation influences specifically discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation modulated specifically the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.
Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte
The present study tested whether or not functional adaptations following congenital blindness are maintained in humans after sight restoration and whether they interfere with visual recovery. In permanently congenitally blind individuals both intramodal plasticity (e.g. changes in auditory cortex) and crossmodal plasticity (e.g. an activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to which degree visual motion processing is reinstalled. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting for several months to several years. They, as well as two control groups, one with visual impairments, one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal-to-noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing, suggesting that at least some compensatory behavioral adaptation as a consequence of a complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract-reversal individuals and they did not show ERPs modulated by visual motion coherence as observed in both control groups. …
The modulation of brain activity as a function of auditory location was investigated using electro-encephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by analysis of object-feature information and spectral localization cues or even the integration of spatial and non-spatial sound features.
Erika J C Laing
Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker’s speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker’s speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences’ LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by nonspeech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
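The LTAS referent discussed in this abstract is straightforward to compute: average the magnitude spectra of overlapping windowed frames of the talker's speech. A minimal NumPy sketch (the function name and parameter choices are illustrative, not taken from the study):

```python
import numpy as np

def ltas(signal, sr, frame_len=1024, hop=512):
    """Long-term average spectrum: mean magnitude spectrum over
    overlapping Hann-windowed frames (a simplified Welch-style average)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    spectrum = np.mean(frames, axis=0)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    return freqs, spectrum

# Illustration: a signal with energy concentrated near 500 Hz
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 500 * t)
freqs, spec = ltas(sig, sr)
peak_hz = freqs[np.argmax(spec)]  # spectral peak lands near 500 Hz
```

Comparing the LTAS of a context sentence against the frequency range of a target contrast's cues, as the study describes, would then amount to comparing energy in the relevant spectral bands.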
Marsh, John E.; Hughes, Robert W.; Jones, Dylan M.
Five experiments demonstrate auditory-semantic distraction in tests of memory for semantic category-exemplars. The effects of irrelevant sound on category-exemplar recall are shown to be functionally distinct from those found in the context of serial short-term memory by showing sensitivity to the lexical-semantic, rather than acoustic,…
Marsh, John E.; Hughes, Robert W.; Jones, Dylan M.
Distraction by irrelevant background sound of visually-based cognitive tasks illustrates the vulnerability of attentional selectivity across modalities. Four experiments centred on auditory distraction during tests of memory for visually-presented semantic information. Meaningful irrelevant speech disrupted the free recall of semantic…
Kaipust, Jeffrey P; McGrath, Denise; Mukherjee, Mukul; Stergiou, Nicholas
Gait variability in the context of a deterministic dynamical system may be quantified using nonlinear time series analyses that characterize the complexity of the system. Pathological gait exhibits altered gait variability. It can be either too periodic and predictable, or too random and disordered, as is the case with aging. While gait therapies often focus on restoration of linear measures such as gait speed or stride length, we propose that the goal of gait therapy should be to restore optimal gait variability, which exhibits chaotic fluctuations and is the balance between predictability and complexity. In this context, our purpose was to investigate how listening to different auditory stimuli affects gait variability. Twenty-seven young and 27 elderly subjects walked on a treadmill for 5 min while listening to white noise, a chaotic rhythm, a metronome, and with no auditory stimulus. Stride length, step width, and stride intervals were calculated for all conditions. Detrended Fluctuation Analysis was then performed on these time series. A quadratic trend analysis determined that an idealized inverted-U shape described the relationship between gait variability and the structure of the auditory stimuli for the elderly group, but not for the young group. This proof-of-concept study shows that the gait of older adults may be manipulated using auditory stimuli. Future work will investigate which structures of auditory stimuli lead to improvements in functional status in older adults.
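The Detrended Fluctuation Analysis used above to characterize stride-interval structure can be sketched compactly: integrate the demeaned series, detrend it within windows of each size n, and read the scaling exponent alpha from the log-log slope of fluctuation versus window size. This simplified NumPy version (function name and scale choices are illustrative) yields alpha near 0.5 for white noise and near 1.0 for the pink-noise-like fluctuations associated with healthy gait:

```python
import numpy as np

def dfa_alpha(series, scales=(4, 8, 16, 32, 64)):
    """Detrended Fluctuation Analysis: estimate the scaling exponent
    alpha from the slope of log F(n) versus log n."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())               # integrated profile
    flucts = []
    for n in scales:
        n_seg = len(y) // n
        f2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)      # linear detrend per window
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))   # RMS fluctuation at scale n
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(0)
alpha_white = dfa_alpha(rng.standard_normal(4096))  # expected near 0.5
```

In the study's framing, stride-interval series from different auditory-stimulus conditions would each be summarized by such an alpha, with intermediate values reflecting the "optimal" balance between predictability and complexity.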
Updike, C; Thornburg, J D
The effect of recurrent middle ear disease during the first 2 years of life on auditory perceptual skills and reading ability was examined in two groups of 6- and 7-year-old children who were pair-matched by age, gender, socioeconomic status, and receptive vocabulary. Group 1 consisted of children with documented chronic otitis media at an early age, and group 2 had no history of middle ear problems. Tests of auditory perceptual skills and reading ability were administered. Significant differences in performance on all tests of auditory processing ability and reading ability were noted.
Rufener, Katharina Simone; Liem, Franziskus; Meyer, Martin
Healthy aging is typically associated with impairment in various cognitive abilities such as memory, selective attention, or executive functions. Less well recognized is the fact that language functions in general, and speech processing in particular, also seem to be affected by age. This impairment is caused partly by pathologies of the peripheral auditory nervous system and central auditory decline, and in some part also by cognitive decline. This cross-sectional electroencephalography (EEG) study investigates temporally early electrophysiological correlates of auditory selective attention in young (20-32 years) and older (60-74 years) healthy adults. In two independent tasks, we systematically modulated the subjects' focus of attention by presenting words and pseudowords as targets and white-noise stimuli as distractors. Behavioral data showed no difference in task accuracy between the two age samples, irrespective of the modulation of attention. However, our work is the first to show that the N1 and P2 components evoked by speech and nonspeech stimuli are specifically modulated in older adults and young adults depending on the subjects' focus of attention. This finding is particularly interesting in that the age-related differences in AEPs may reflect levels of processing that are not mirrored by the behavioral measurements.
Gilley, Phillip M; Uhler, Kristin; Watson, Kaylee; Yoshinaga-Itano, Christine
Oddball paradigms are frequently used to study auditory discrimination by comparing event-related potential (ERP) responses to a standard, high-probability sound and to a deviant, low-probability sound. Previous research has established that such paradigms, such as the mismatch response or mismatch negativity, are useful for examining auditory processes in young children and infants across various sleep and attention states. The extent to which oddball ERP responses may reflect subtle discrimination effects, such as speech discrimination, is largely unknown, especially in infants who have not yet acquired speech and language. Mismatch responses for three contrasts (non-speech, vowel, and consonant) were computed as a spectral-temporal probability function in 24 infants, and analyzed at the group level by a modified multidimensional scaling. Immediately following an onset gamma response (30-50 Hz), the emergence of a beta oscillation (12-30 Hz) was temporally coupled with a lower-frequency theta oscillation (2-8 Hz). The spectral-temporal probability of this coupling effect relative to a subsequent theta modulation corresponds with discrimination difficulty for non-speech, vowel, and consonant contrast features. The theta modulation effect suggests that unexpected sounds are encoded as a probabilistic measure of surprise. These results support the notion that auditory discrimination is driven by the development of brain networks for predictive processing, and can be measured in infants during sleep. The results presented here have implications for the interpretation of discrimination as a probabilistic process, and may provide a basis for the development of single-subject and single-trial classification in a clinically useful context. An infant's brain is processing information about the environment and performing computations, even during sleep. These computations reflect subtle differences in acoustic feature processing that are necessary for language
Jakob, Till F; Döring, Ulrike; Illing, Robert-Benjamin
The immediate-early-gene c-fos, with its protein product Fos, has been used as a powerful tool to investigate neuronal activity and plasticity following sensory stimulation. Fos combines with Jun, another IEG product, to form the dimeric transcription factor activator protein 1 (AP-1), which has been implicated in a variety of cellular functions such as neuronal plasticity, apoptosis, and regeneration. The intracellular emergence of Fos indicates a functional state of nerve cells directed towards molecular and morphological changes. The central auditory system is built to detect stimulus intensity, spectral composition, and binaural balance through neurons organized in a complex network of ascending, descending, and commissural pathways. Here we compare monaural and binaural electrical intracochlear stimulation (EIS) in normal-hearing and early postnatally deafened rats. Binaural stimulation was done either synchronously or asynchronously. The auditory brainstem of hearing and deaf rats responds differently, with dramatically increased Fos expression in the deaf group, as if the network had no pre-orientation for how to organize sensory activity. Binaural EIS does not result in a trivial sum of two independent monaural EIS, as asynchronous stimulation invokes stronger Fos activation than synchronous stimulation almost everywhere in the auditory brainstem. The differential response to synchronicity of stimulation emphasizes the importance of the temporal structure of EIS with respect to its potential for changing brain structure and brain function in stimulus-specific ways. Copyright © 2015 Elsevier Inc. All rights reserved.
Ma, Xiaoran; McPherson, Bradley; Ma, Lian
Peripheral hearing disorders have been frequently described in children with non-syndromic cleft lip and/or palate (NSCL/P). However, auditory processing problems are rarely considered for children with NSCL/P despite their generally poor academic performance compared to their craniofacially normal peers. This study aimed to compare auditory processing skills, using behavioral assessment techniques, in school-age children with and without NSCL/P. One hundred and forty-one Mandarin-speaking children with NSCL/P aged from 6.00 to 15.67 years, and 60 age-matched, craniofacially normal children, were recruited. Standard hearing health tests were conducted to evaluate peripheral hearing. Behavioral auditory processing assessment included adaptive tests of temporal resolution (ATTR), and the Mandarin pediatric lexical tone and disyllabic-word picture identification test in noise (MAPPID-N). Age effects were found in children with cleft disorder but not in the control group for gap detection thresholds with ATTR narrow-band noise in the across-channel stimuli condition, with a significant difference in test performance between the 6- to 8-year group and the 12- to 15-year group of children with NSCL/P. For MAPPID-N, the bilateral cleft lip and palate subgroup showed significantly poorer SNR-50% scores than the control group in the condition where speech was spatially separated from noise. Also, the cleft palate participants showed a significantly smaller spatial separation advantage for speech recognition in noise compared to the control group children. ATTR gap detection test results indicated that maturation of temporal resolution abilities was not achieved in children with NSCL/P until approximately 8 years of age, compared to approximately 6 years for craniofacially normal children. For speech recognition in noisy environments, poorer abilities to use timing and intensity cues were found in children with cleft palate and children with bilateral cleft lip and palate.
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focussing on how auditory attention to positive and negative words impacts their cerebral processing. A Functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Elizabeth C Hames
Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when participants were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2VV2). We conclude that in ASD, combined audiovisual processing resembles that of NT participants more closely than unimodal processing does.
Nishimura, Masataka; Takemoto, Makoto; Song, Wen-Jie
The prevailing model of the primate auditory cortex proposes a core-belt-parabelt structure. The model proposes three auditory areas in the lateral belt region; however, it may contain more, as this region has been mapped only at a limited spatial resolution. To explore this possibility, we examined the auditory areas in the lateral belt region of the marmoset using a high-resolution optical imaging technique. Based on responses to pure tones, we identified multiple areas in the superior temporal gyrus. The three areas in the core region, the primary area (A1), the rostral area (R), and the rostrotemporal area, were readily identified from their frequency gradients and positions immediately ventral to the lateral sulcus. Three belt areas were identified with frequency gradients and relative positions to A1 and R that were in agreement with previous studies: the caudolateral area, the middle lateral area, and the anterolateral area (AL). Situated between R and AL, however, we identified two additional areas. The first was located caudoventral to R with a frequency gradient in the ventrocaudal direction, which we named the medial anterolateral (MAL) area. The second was a small area with no obvious tonotopy (NT), positioned between the MAL and AL areas. Both the MAL and NT areas responded to a wide range of frequencies (at least 2-24 kHz). Our results suggest that the belt region caudoventral to R is more complex than previously proposed, and we thus call for a refinement of the current primate auditory cortex model.
Kale, Sushrut; Micheyl, Christophe; Heinz, Michael G
Listeners with sensorineural hearing loss (SNHL) often show poorer thresholds for fundamental-frequency (F0) discrimination and poorer discrimination between harmonic and frequency-shifted (inharmonic) complex tones than normal-hearing (NH) listeners, especially when these tones contain resolved or partially resolved components. It has been suggested that these perceptual deficits reflect reduced access to temporal-fine-structure (TFS) information and could be due to degraded phase locking in the auditory nerve (AN) with SNHL. In the present study, TFS and temporal-envelope (ENV) cues in single AN-fiber responses to band-pass-filtered harmonic and inharmonic complex tones were measured in chinchillas with either normal hearing or noise-induced SNHL. The stimuli were comparable to those used in recent psychophysical studies of F0 and harmonic/inharmonic discrimination. As in those studies, the rank of the center component was manipulated to produce different resolvability conditions, different phase relationships (cosine and random phase) were tested, and background noise was present. Neural TFS and ENV cues were quantified using cross-correlation coefficients computed from shuffled cross correlograms between neural responses to REF (harmonic) and TEST (F0- or frequency-shifted) stimuli. In animals with SNHL, AN-fiber tuning curves showed elevated thresholds, broadened tuning, best-frequency shifts, and downward shifts in the dominant TFS response component; however, no significant degradation in the ability of AN fibers to encode TFS or ENV cues was found. Consistent with optimal-observer analyses, the results indicate that TFS and ENV cues depended only on the relevant frequency shift in Hz and thus were not degraded because phase locking remained intact. These results suggest that perceptual "TFS-processing" deficits do not simply reflect degraded phase locking at the level of the AN. To the extent that performance in F0- and harmonic/inharmonic discrimination
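The cross-correlation quantification described above can be illustrated with a deliberately simplified stand-in: a normalized correlation coefficient between REF and TEST response histograms. The actual study computed coefficients from shuffled cross correlograms over spike-train pairs; everything below, including the function name and toy responses, is an illustrative sketch of the general idea that a frequency shift lowers the similarity measure:

```python
import numpy as np

def response_similarity(psth_ref, psth_test):
    """Normalized cross-correlation coefficient (Pearson-style, zero lag)
    between two peri-stimulus time histograms. A simplified proxy for the
    shuffled-correlogram-based neural cross-correlation coefficients."""
    a = psth_ref - psth_ref.mean()
    b = psth_test - psth_test.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy phase-locked responses: identical responses correlate perfectly,
# while a 10% frequency shift (mimicking an inharmonic TEST tone) does not.
t = np.linspace(0, 0.05, 500)
ref = 1 + np.cos(2 * np.pi * 100 * t)
test_same = ref.copy()
test_shift = 1 + np.cos(2 * np.pi * 110 * t)
r_same = response_similarity(ref, test_same)    # equals 1.0
r_shift = response_similarity(ref, test_shift)  # substantially lower
```

The study's finding that such coefficients depended only on the frequency shift in Hz, in both NH and SNHL animals, is what argues against a phase-locking account of the perceptual deficit.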
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple sing...
Chermak, G D; Hall, J W; Musiek, F E
Children diagnosed with attention deficit hyperactivity disorder (ADHD) frequently present difficulties performing tasks that challenge the central auditory nervous system. The relationship between ADHD and central auditory processing disorder (CAPD) is examined from the perspectives of cognitive neuroscience, audiology, and neuropsychology. The accumulating evidence provides a basis for the overlapping clinical profiles yet differentiates CAPD and ADHD as clinically distinct entities. Common and distinctive management strategies are outlined.
R. F. Lyon
This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review of the gammatone filter and then presents two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlear transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, a linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems, which renders them excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and that can also serve as "design curves" for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a "missing link" between physiological, electrical, and mechanical models for auditory filtering.
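The gammatone family reviewed in this paper starts from the classic impulse response g(t) = t^(N-1) e^(-2*pi*b*t) cos(2*pi*fc*t). A brief NumPy sketch of the base gammatone only (not the DAPGF/OZGF variants the paper develops); the ERB-based bandwidth formula follows the common Glasberg-and-Moore convention, and the helper name is ours:

```python
import numpy as np

def gammatone_ir(fc, sr, order=4, duration=0.05):
    """Impulse response of a classic order-N gammatone filter:
    g(t) = t^(N-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t),
    with bandwidth b tied to the ERB at center frequency fc."""
    t = np.arange(int(duration * sr)) / sr
    erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # equivalent rectangular bandwidth, Hz
    b = 1.019 * erb                           # conventional gammatone bandwidth factor
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))              # normalize peak amplitude to 1

# The filter is band-pass: its magnitude spectrum peaks near fc.
sr = 16000
ir = gammatone_ir(1000.0, sr)
spec = np.abs(np.fft.rfft(ir, 4 * len(ir)))   # zero-pad for finer resolution
freqs = np.fft.rfftfreq(4 * len(ir), 1.0 / sr)
peak = freqs[np.argmax(spec)]                 # close to 1000 Hz
```

A filterbank of such responses at cochlea-like center frequencies is the starting point from which the paper's DAPGF and OZGF refinements add level-dependent gain and the proper low-frequency tail.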
Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A
The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates...
Brenneman, Lauren; Cash, Elizabeth; Chermak, Gail D; Guenette, Linda; Masters, Gay; Musiek, Frank E; Brown, Mallory; Ceruti, Julianne; Fitzegerald, Krista; Geissler, Kristin; Gonzalez, Jennifer; Weihing, Jeffrey
Pediatric central auditory processing disorder (CAPD) is frequently comorbid with other childhood disorders. However, few studies have examined the relationship between commonly used CAPD, language, and cognition tests within the same sample. The present study examined the relationship between diagnostic CAPD tests and "gold standard" measures of language and cognitive ability, the Clinical Evaluation of Language Fundamentals (CELF) and the Wechsler Intelligence Scale for Children (WISC). A retrospective study. Twenty-seven patients referred for CAPD testing who scored average or better on the CELF and low average or better on the WISC were initially included. Seven children who scored below the CELF and/or WISC inclusion criteria were then added to the dataset for a second analysis, yielding a sample size of 34. Participants were administered a CAPD battery that included at least the following three CAPD tests: Frequency Patterns (FP), Dichotic Digits (DD), and Competing Sentences (CS). In addition, they were administered the CELF and WISC. Relationships between scores on CAPD, language (CELF), and cognition (WISC) tests were examined using correlation analysis. DD and FP showed significant correlations with Full Scale Intelligence Quotient, and the DD left ear and the DD interaural difference measures both showed significant correlations with working memory. However, ∼80% or more of the variance in these CAPD tests was unexplained by language and cognition measures. Language and cognition measures were more strongly correlated with each other than were the CAPD tests with any CELF or WISC scale. Additional correlations with the CAPD tests were revealed when patients who scored in the mild-moderate deficit range on the CELF and/or in the borderline low intellectual functioning range on the WISC were included in the analysis. While both the DD and FP tests showed significant correlations with one or more cognition measures, the majority of the variance in these
Liu, Ying; Hu, Huijing; Jones, Jeffery A; Guo, Zhiqiang; Li, Weifeng; Chen, Xi; Liu, Peng; Liu, Hanjun
Speakers rapidly adjust their ongoing vocal productions to compensate for errors they hear in their auditory feedback. It is currently unclear what role attention plays in these vocal compensations. This event-related potential (ERP) study examined the influence of selective and divided attention on the vocal and cortical responses to pitch errors heard in auditory feedback regarding ongoing vocalisations. During the production of a sustained vowel, participants briefly heard their vocal pitch shifted up two semitones while they actively attended to auditory or visual events (selective attention), or both auditory and visual events (divided attention), or were not told to attend to either modality (control condition). The behavioral results showed that attending to the pitch perturbations elicited larger vocal compensations than attending to the visual stimuli. Moreover, ERPs were likewise sensitive to the attentional manipulations: P2 responses to pitch perturbations were larger when participants attended to the auditory stimuli compared to when they attended to the visual stimuli, and compared to when they were not explicitly told to attend to either the visual or auditory stimuli. By contrast, dividing attention between the auditory and visual modalities caused suppressed P2 responses relative to all the other conditions and caused enhanced N1 responses relative to the control condition. These findings provide strong evidence for the influence of attention on the mechanisms underlying the auditory-vocal integration in the processing of pitch feedback errors. In addition, selective attention and divided attention appear to modulate the neurobehavioral processing of pitch feedback errors in different ways. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Cognitive task demands in one sensory modality (T1) can have beneficial effects on a secondary task (T2) in a different modality, due to reduced top-down control needed to inhibit the secondary task, as well as crossmodal spread of attention. This contrasts with findings of cognitive load compromising processing in a secondary modality. We manipulated cognitive load within one modality (visual) and studied the consequences of cognitive demands on secondary (auditory) processing. Fifteen healthy participants underwent a simultaneous EEG-fMRI experiment. Data from 8 participants were obtained outside the scanner for validation purposes. The primary task (T1) was to respond to a visual working memory (WM) task with four conditions, while the secondary task (T2) consisted of an auditory oddball stream, which participants were asked to ignore. The fMRI results revealed fronto-parietal WM network activations in response to the T1 task manipulation. This was accompanied by significantly higher reaction times and lower hit rates with increasing task difficulty, which confirmed successful manipulation of WM load. Amplitudes of auditory evoked potentials, representing fundamental auditory processing, showed a continuous augmentation that demonstrated a systematic relation to cross-modal cognitive load. With increasing WM load, primary auditory cortices were increasingly deactivated, while psychophysiological interaction results suggested the emergence of auditory cortex connectivity with visual WM regions. These results suggest differential effects of crossmodal attention on fundamental auditory processing. We suggest a continuous allocation of resources to brain regions processing primary tasks when challenging the central executive under high cognitive load.
This study compared magnetoencephalographic (MEG) imaging-derived indices of auditory and somatosensory cortical processing in children aged 8–12 years with autism spectrum disorder (ASD; N = 18), those with sensory processing dysfunction (SPD; N = 13) who do not meet ASD criteria, and typically developing control (TDC; N = 19) participants. The magnitude of responses to both auditory and tactile stimulation was comparable across all three groups; however, the M200 latency response from the left auditory cortex was significantly delayed in the ASD group relative to both the TDC and SPD groups, whereas the somatosensory response of the ASD group was only delayed relative to TDC participants. The SPD group did not significantly differ from either group in terms of somatosensory latency, suggesting that participants with SPD may have an intermediate phenotype between ASD and TDC with regard to somatosensory processing. For the ASD group, correlation analyses indicated that the left M200 latency delay was significantly associated with performance on the WISC-IV Verbal Comprehension Index as well as the DSTP Acoustic-Linguistic index. Further, these cortical auditory response delays were not associated with somatosensory cortical response delays or cognitive processing speed in the ASD group, suggesting that auditory delays in ASD are domain specific rather than associated with generalized processing delays. The specificity of these auditory delays to the ASD group, in addition to their correlation with verbal abilities, suggests that auditory sensory dysfunction may be implicated in communication symptoms in ASD, motivating further research aimed at understanding the impact of sensory dysfunction on the developing brain.
Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are not only found within a single modality, but also with regard to multisensory integration. In this study we have combined psychophysical with neurophysiological measurements investigating the processing of non-musical audiovisual events that were synchronous or asynchronous at various levels. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations including the ACC and the SFG, and two bilaterally located activations in IFG and STG, in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula, and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.
Background and Aim: Specific language impairment (SLI), one variety of developmental language disorder, has attracted much interest in recent decades. Much research has been conducted to discover why some children have a specific language impairment. So far, research has failed to identify a reason for this linguistic deficiency. Some researchers believe the language disorder causes defects in phonological working memory and affects auditory processing speed. Therefore, this study reviews the results of research investigating these two factors in children with specific language impairment. Recent Findings: Studies have shown that children with specific language impairment face constraints in phonological working memory capacity. Memory deficit is one possible cause of linguistic disorder in children with specific language impairment. However, in these children, disordered information processing speed is also observed, especially in the auditory domain. Conclusion: Much more research is required to adequately explain the relationship between phonological working memory, auditory processing speed, and language. However, given the role of phonological working memory and auditory processing speed in language acquisition, a focus should be placed on phonological working memory capacity and auditory processing speed in the assessment and treatment of children with a specific language impairment.
Bruneau, Nicole; Bidet-Caulet, Aurélie; Roux, Sylvie; Bonnet-Brilhault, Frédérique; Gomot, Marie
To investigate brain asymmetry of the temporal auditory evoked potentials (T-complex) in response to monaural stimulation in children compared to adults. Ten children (7 to 9 years) and ten young adults participated in the study. All were right-handed. The auditory stimuli were tones (1100 Hz, 70 dB SPL, 50 ms duration) delivered monaurally (right or left ear) at four levels of stimulus onset asynchrony (SOA: 700, 1100, 1500, and 3000 ms). Latency and amplitude of responses were measured at left and right temporal sites according to the ear stimulated. Peaks of the three successive deflections (Na-Ta-Tb) of the T-complex were greater in amplitude and better defined in children than in adults. Amplitude measurements in children indicated that Na culminates over the left hemisphere whatever the ear stimulated, whereas Ta and Tb culminate over the right hemisphere, but for left-ear stimuli only. Peak latency displayed different patterns of asymmetry: Na and Ta displayed shorter latencies for contralateral stimulation. The original finding was that Tb peak latency was shortest at the left temporal site for right-ear stimulation in children. Amplitude increased and/or peak latency decreased with increasing SOA; however, no interaction effect was found with recording site or with ear stimulated. Our main original result indicates a right ear-left hemisphere timing advantage for the Tb peak in children. The Tb peak would therefore be a good candidate as an electrophysiological marker of ear advantage effects during dichotic stimulation and of functional inter-hemispheric interactions and connectivity in children. Copyright © 2014. Published by Elsevier B.V.
Grant, Ken W.; van Wassenhove, Virginie
Auditory-visual speech perception has been shown repeatedly to be both more accurate and more robust than auditory speech perception. Attempts to explain these phenomena usually treat acoustic and visual speech information (i.e., accessed via speechreading) as though they were derived from independent processes. Recent electrophysiological (EEG) studies, however, suggest that visual speech processes may play a fundamental role in modulating the way we hear. For example, both the timing and amplitude of auditory-specific event-related potentials as recorded by EEG are systematically altered when speech stimuli are presented audiovisually as opposed to auditorily. In addition, the detection of a speech signal in noise is more readily accomplished when accompanied by video images of the speaker's production, suggesting that the influence of vision on audition occurs quite early in the perception process. But the impact of visual cues on what we ultimately hear is not limited to speech. Our perceptions of loudness, timbre, and sound source location can also be influenced by visual cues. Thus, for speech and nonspeech stimuli alike, predicting a listener's response to sound based on acoustic engineering principles alone may be misleading. Examples of acoustic-visual interactions will be presented which highlight the multisensory nature of our hearing experience.
He, Shuman; Abbas, Paul J; Doyle, Danielle V; McFayden, Tyler C; Mulherin, Stephen
This study aimed to (1) characterize temporal response properties of the auditory nerve in implanted children with auditory neuropathy spectrum disorder (ANSD), and (2) compare results recorded in implanted children with ANSD with those measured in implanted children with sensorineural hearing loss (SNHL). Participants included 28 children with ANSD and 29 children with SNHL. All subjects used Cochlear Nucleus devices in their test ears. Both ears were tested in 6 children with ANSD and 3 children with SNHL; for all other subjects, only one ear was tested. The electrically evoked compound action potential (ECAP) was measured in response to each of the 33 pulses in a pulse train (excluding the second pulse) for one apical, one middle-array, and one basal electrode. The pulse train was presented in a monopolar-coupled stimulation mode at 4 pulse rates: 500, 900, 1800, and 2400 pulses per second. Response metrics included the averaged amplitude, latencies of response components, response width, alternating depth, and the amount of neural adaptation. These dependent variables were quantified based on the last six ECAPs or the six ECAPs occurring within a time window centered around 11 to 12 msec. A generalized linear mixed model was used to compare these dependent variables between the 2 subject groups. The slope of the linear fit of the normalized ECAP amplitudes (re. amplitude of the first ECAP response) over the duration of the pulse train was used to quantify the amount of ECAP increment over time for a subgroup of 9 subjects. Pulse train-evoked ECAPs were measured in all but 8 subjects (5 with ANSD and 3 with SNHL). ECAPs measured in children with ANSD had smaller amplitude, longer averaged P2 latency, and greater response width than those of children with SNHL. However, differences between the two groups were observed only for some electrodes. No differences in averaged N1 latency or in alternating depth were observed between children with ANSD and children with…
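The "slope of the linear fit of the normalized ECAP amplitudes" used above to quantify ECAP change over the pulse train is an ordinary least-squares fit; a minimal sketch (the amplitude values below are illustrative, not data from the study):

```python
import numpy as np

def ecap_increment_slope(amplitudes, times_ms):
    """Normalize ECAP amplitudes to the first response and fit a line.

    Returns the slope (normalized amplitude per ms) of the least-squares
    fit, quantifying ECAP increment (positive) or adaptation (negative)
    over the pulse train.
    """
    amps = np.asarray(amplitudes, dtype=float)
    norm = amps / amps[0]            # re. amplitude of the first ECAP
    slope, _intercept = np.polyfit(times_ms, norm, 1)
    return slope

# Illustrative values only: a gently adapting response over 12 ms
t = np.linspace(0, 12, 13)
a = 1.0 - 0.02 * t                   # amplitudes decline 2% per ms
print(round(float(ecap_increment_slope(a, t)), 3))  # → -0.02
```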
Ciriaco, Antonella; Russo, Angelo; Monzani, Daniele; Genovese, Elisabetta; Benincasa, Paola; Caffo, Ernesto; Pini, Luigi
Background: Recently, an increasing number of articles have appeared on central auditory processing disorders, but in the literature there is only one study that evaluated the possible correlation between migraine in the critical phase and central auditory processing. The aim of our study was to assess the correlation between auditory information processing and childhood primary headaches in the intercritical phase. Methods: This is an observational study. We enrolled 54 patients, 30 with prima…
Gurtubay, I G; Alegre, M; Valencia, M; Artieda, J
Perception is an active process in which our brains use top-down influences to modulate afferent information. To determine whether this modulation might be based on oscillatory activity, we asked seven subjects to detect a silence that appeared randomly in a rhythmic auditory sequence, counting the number of omissions ("count" task), or responding to each omission with a right index finger extension ("move" task). Despite the absence of physical stimuli, these tasks induced a 'non-phase-locked' gamma oscillation in temporal-parietal areas, providing evidence of intrinsically generated oscillatory activity during top-down processing. This oscillation is probably related to the local neural activation that takes place during the process of stimulus detection, involving the functional comparison between the tones and the absence of stimuli as well as the auditory echoic memory processes. The amplitude of the gamma oscillations was reduced with the repetition of the tasks. Moreover, it correlated positively with the number of correctly detected omissions and negatively with the reaction time. These findings indicate that these oscillations, like others described, may be modulated by attentional processes. In summary, our findings support the active and adaptive concept of brain function that has emerged over recent years, suggesting that the match of sensory information with memory contents generates gamma oscillations.
This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N = 120) discriminated Norwegian tonal and vowel contrasts, as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds.
Yanaga, Ryuichiro; Kawahara, Hideki
A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information to recent findings reported using the frequency-shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)] in terms of the dynamic aspects of F0 control. In a series of experiments, dependencies of system parameters in F0 control on subject, F0, and style (musical expression versus speaking) were tested using six participants: three male and three female students specialized in musical education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech that was modulated with an M-sequence. The results qualitatively replicated a previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to the design of an artificial singer will also be discussed. [Work partly supported by Grants-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
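The M-sequence used above to modulate the feedback F0 is a maximum-length binary sequence, generated in practice with a linear-feedback shift register; a sketch with an illustrative 3-bit register (the register length and taps are assumptions, not the study's settings):

```python
def m_sequence(nbits, taps, seed=1):
    """One period (2**nbits - 1 samples) of a ±1 maximum-length sequence
    from a Fibonacci LFSR; `taps` are feedback tap positions (1..nbits)."""
    state = seed
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(1 if state & 1 else -1)
        fb = 0
        for p in taps:
            fb ^= (state >> (nbits - p)) & 1   # XOR the tapped bits
        state = (state >> 1) | (fb << (nbits - 1))
    return out

seq = m_sequence(nbits=3, taps=(3, 2))   # x^3 + x^2 + 1, period 7
print(len(seq), sum(seq))                # → 7 1 (±1 values nearly balance)
```

One full period of a maximal sequence visits every nonzero register state exactly once, which gives the flat, noise-like spectrum that makes M-sequences useful probes for system identification.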
Purpose: to compare the frequency of disfluencies and speech rate in spontaneous speech and reading in adults with and without stuttering under non-altered and delayed auditory feedback (NAF, DAF). Methods: participants were 30 adults: 15 with stuttering (Research Group, RG) and 15 without stuttering (Control Group, CG). The procedures were audiological assessment and speech fluency evaluation in two listening conditions, normal and delayed auditory feedback (a 100-millisecond delay imposed by the Fono Tools software). Results: DAF caused a significant improvement in the fluency of spontaneous speech in the RG when compared with speech under NAF. The effect of DAF was different in the CG: it increased the common disfluencies and the total number of disfluencies in spontaneous speech and reading, and also increased the frequency of stuttering-like disfluencies in reading. The intergroup analysis showed significant differences in the two speech tasks for the two listening conditions in the frequency of stuttering-like disfluencies and in the total number of disfluencies, and in syllable- and word-per-minute rates under NAF. Conclusion: the results demonstrated that delayed auditory feedback promoted fluency in the spontaneous speech of adults who stutter without interfering with their speech rate. In non-stuttering adults it increased the number of common disfluencies and the total number of disfluencies, and reduced speech rate in spontaneous speech and reading.
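The 100-millisecond delayed-feedback condition described above amounts to playing the speaker's signal back through a delay line; a minimal sketch (sample rate and delay are generic parameters, not details of the Fono Tools implementation):

```python
from collections import deque

def delay_line(samples, delay_ms, fs=44100):
    """Return the input delayed by delay_ms, zero-padded at the start,
    as a simple model of delayed auditory feedback (DAF)."""
    n_delay = int(fs * delay_ms / 1000)
    buf = deque([0.0] * n_delay)      # silence until the delay elapses
    out = []
    for x in samples:
        buf.append(x)                 # newest sample enters the buffer
        out.append(buf.popleft())     # oldest sample is played back
    return out

# A 100 ms delay at 10 samples/s shifts the signal by 1 sample
print(delay_line([1.0, 2.0, 3.0], delay_ms=100, fs=10))  # → [0.0, 1.0, 2.0]
```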
Introduction: It has been demonstrated that long-term conductive hearing loss (CHL) may influence the precise detection of the temporal features of acoustic signals, i.e., auditory temporal processing (ATP). It can be argued that ATP is an underlying component of many central auditory processing capabilities, such as speech comprehension and sound localization, yet little is known about the consequences of CHL on temporal aspects of central auditory processing. Objective: This study was designed to assess auditory temporal processing ability in individuals with chronic CHL. Methods: In this analytical cross-sectional study, 52 patients with mild to moderate chronic CHL and 52 normal-hearing listeners (control), aged between 18 and 45 years, were recruited. To evaluate auditory temporal processing, the Gaps-in-Noise (GIN) test was used. The results obtained for each ear were analyzed based on the gap perception threshold and the percentage of correct responses. Results: The average GIN threshold was significantly smaller for the control group than for the CHL group in both ears (right: p = 0.004; left: p < 0.05). Conclusion: The results suggest reduced auditory temporal processing ability in adults with CHL compared with normal-hearing subjects. Therefore, developing a clinical protocol to evaluate auditory temporal processing in this population is recommended.
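The GIN gap-perception threshold mentioned above is commonly scored as the shortest gap duration identified on at least 4 of its 6 presentations; a sketch of that scoring rule (both the convention and the scores below are assumptions for illustration, not details from this study):

```python
def gin_threshold(trials):
    """Approximate GIN gap-detection threshold: the shortest gap duration
    (ms) detected on at least 4 of its 6 presentations (an assumed
    scoring convention, not taken from the abstract above)."""
    for gap in sorted(trials):
        hits, total = trials[gap]
        if total >= 6 and hits >= 4:
            return gap
    return None  # criterion never reached at the tested gap durations

# Illustrative scores: {gap_ms: (hits, presentations)}
scores = {2: (1, 6), 3: (2, 6), 4: (4, 6), 5: (5, 6), 6: (6, 6)}
print(gin_threshold(scores))  # → 4
```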
Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, it has not yet been investigated whether the addition of a choice reaction time task influences temporal-order judgment, nor is it known at what point during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does influence temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.
Iliadou, Vasiliki (Vivian); Ptok, Martin; Grech, Helen; Pedersen, Ellen Raben; Brechmann, André; Deggouj, Naïma; Kiese-Himmel, Christiane; Śliwińska-Kowalska, Mariola; Nickisch, Andreas; Demanez, Laurent; Veuillet, Evelyne; Thai-Van, Hung; Sirimanna, Tony; Callimachou, Marina; Santarelli, Rosamaria; Kuske, Sandra; Barajas, Jose; Hedjever, Mladen; Konukseven, Ozlem; Veraguth, Dorothy; Stokkereit Mattsson, Tone; Martins, Jorge Humberto; Bamiou, Doris-Eva
Current notions of “hearing impairment,” as reflected in clinical audiological practice, do not acknowledge the needs of individuals who have normal hearing pure tone sensitivity but who experience auditory processing difficulties in everyday life that are indexed by reduced performance in other more sophisticated audiometric tests such as speech audiometry in noise or complex non-speech sound perception. This disorder, defined as “Auditory Processing Disorder” (APD) or “Central Auditory Processing Disorder” is classified in the current tenth version of the International Classification of diseases as H93.25 and in the forthcoming beta eleventh version. APDs may have detrimental effects on the affected individual, with low esteem, anxiety, and depression, and symptoms may remain into adulthood. These disorders may interfere with learning per se and with communication, social, emotional, and academic-work aspects of life. The objective of the present paper is to define a baseline European APD consensus formulated by experienced clinicians and researchers in this specific field of human auditory science. A secondary aim is to identify issues that future research needs to address in order to further clarify the nature of APD and thus assist in optimum diagnosis and evidence-based management. This European consensus presents the main symptoms, conditions, and specific medical history elements that should lead to auditory processing evaluation. Consensus on definition of the disorder, optimum diagnostic pathway, and appropriate management are highlighted alongside a perspective on future research focus.
This article describes three management approaches that can be used with children with auditory processing difficulties and learning disabilities. These approaches were selected because they can be applied in a variety of settings by a variety of professionals, as well as interested parents. The vocabulary building procedure is one that not only can increase the ability to learn new words but also can provide training on contextual derivation of information, which is key to auditory closure processes. This procedure also helps increase the language base, which can further enhance closure abilities. Auditory memory enhancement is a simple technique that involves many complex brain processes. This procedure reduces detailed information to a more gestalt representation and also integrates the motor and spatial processes of the brain. This, in turn, more fully uses working memory and helps in the formulation and recall of important concepts of the sensory input. Finally, several informal auditory training techniques are discussed that can be readily employed in the school or home setting. These auditory training techniques are those that are most relevant to the kinds of deficits most often observed in our clinic.
Poikonen, Hanna; Toiviainen, Petri; Tervaniemi, Mari
The neural responses to simple tones and short sound sequences have been studied extensively. However, in reality the sounds surrounding us are spectrally and temporally complex, dynamic, and overlapping. Thus, research using natural sounds is crucial to understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in addition to sensory responses, elicits vast cognitive and emotional processes in the brain. Here we show that the preattentive P50 response evoked by rapid increases in timbral brightness during continuous music is enhanced in dancers when compared with musicians and laymen. In dance, fast changes in brightness are often emphasized with a significant change in movement. In addition, the auditory N100 and P200 responses are suppressed and sped up in dancers, musicians, and laymen when music is accompanied by a dance choreography. These results were obtained with a novel event-related potential (ERP) method for natural music. They suggest that we can begin studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI), these two brain imaging methods complementing each other.
Maria Cristina Barros da Silva
PURPOSE: to evaluate the auditory processing (AP) of telemarketing operators with respect to auditory decoding. METHODS: 20 subjects of both genders, aged 18 to 35 years, were evaluated. All worked six-hour daily shifts, had up to five years of service in the role, used monaural headsets, and had no previous exposure to occupational noise. The group showed auditory thresholds within normal limits, type A tympanometry, and present acoustic reflexes. A questionnaire was applied to collect data on auditory complaints, habits, and sensations, and the filtered-speech test, the Random Gap Detection Test (RGDT), and the Masking Level Difference (MLD) test were performed. RESULTS: the analysis was descriptive, using percentages; all individuals presented complaints characteristic of auditory processing disorders. Abnormal results were observed in 45% of subjects on the RGDT and in 25% on the MLD, with an association between abnormal MLD results and the profile of work activity. CONCLUSION: this study suggests that telemarketing operators may present auditory processing disorders, with probable impairment of binaural interaction and temporal resolution, abilities which were found to be altered in a considerable proportion of these individuals.
Christiansen, Thomas Ulrich; Dau, Torsten; Greenberg, Steven
Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and the research of the physiological mechanisms underlying auditory perception. The proceedings provide an up-to-date report on topics including … the physiological mechanisms of binaural processing in mammals; integration of the different stimulus features into auditory scene analysis; physiological mechanisms related to the formation of auditory objects; speech perception; and limitations of auditory perception resulting from hearing disorders.
Hildebrandt, K Jannis; Benda, Jan; Hennig, R Matthias
Hearing in insects serves to gain information in the context of mate finding, predator avoidance, or host localization. For these goals, the auditory pathways of insects represent the computational substrate for object recognition and localization. Before these higher-level computations can be executed in more central parts of the nervous system, the signals need to be preprocessed in the auditory periphery. Here, we review peripheral preprocessing along four computational themes rather than discussing specific physiological mechanisms: (1) control of sensitivity by adaptation, (2) recoding of amplitude modulations of an acoustic signal into a labeled-line code, (3) frequency processing, and (4) conditioning for binaural processing. Along these lines, we review evidence for canonical computations carried out in the peripheral auditory pathway and show that, despite the vast diversity of insect hearing, signal processing is governed by common computational motifs and principles.
von Jonquieres, Georg; Froud, Kristina E; Klugmann, Claudia B; Wong, Ann C Y; Housley, Gary D; Klugmann, Matthias
Canavan Disease (CD) is a leukodystrophy caused by homozygous null mutations in the gene encoding aspartoacylase (ASPA). ASPA-deficiency is characterized by severe psychomotor retardation, and excessive levels of the ASPA substrate N-acetylaspartate (NAA). ASPA is an oligodendrocyte marker and it is believed that CD has a central etiology. However, ASPA is also expressed by Schwann cells, and ASPA-deficiency in the periphery might therefore contribute to the complex CD pathology. In this study, we assessed peripheral and central auditory function in the AspalacZ/lacZ rodent model of CD using the auditory brainstem response (ABR). Increased ABR thresholds and the virtual loss of waveform peaks 4 and 5 in AspalacZ/lacZ mice indicated altered central auditory processing in mutant mice compared with Aspawt/wt controls. Analysis of ABR latencies recorded from AspalacZ/lacZ mice revealed that the speed of nerve conduction was unchanged in the peripheral part of the auditory pathway, and impaired in the CNS. Histological analyses confirmed that ASPA was expressed in oligodendrocytes and Schwann cells of the auditory system. In keeping with our physiological results, the cellular organization of the cochlea, including the organ of Corti, was preserved, and the spiral ganglion nerve fibres were normal in ASPA-deficient mice. In contrast, we detected substantial hypomyelination in the central auditory system of AspalacZ/lacZ mice. In summary, our data suggest that the lack of ASPA in the CNS is responsible for the observed hearing deficits, while ASPA-deficiency in the cochlear nerve fibres is tolerated both morphologically and functionally.
Guzzetta, Francesco; Conti, Guido; Mercuri, Eugenio
Increasing attention has been devoted to the maturation of sensory processing in the first year of life. While the development of cortical visual function has been thoroughly studied, much less information is available on auditory processing and its early disorders. The aim of this paper is to provide an overview of the assessment techniques for…
Hakvoort, Britt; van der Leij, Aryan; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia L.
Less proficient basic auditory processing has been previously connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART),
McFarland, Dennis J.; Cacace, Anthony T.
This paper examines the case for modality specificity as a criterion for improving the specificity of diagnosing central auditory processing disorders. Demonstrating the modality-specific nature of sensory processing deficits is seen as one way to rule out nonperceptual factors as explanations for observed dysfunction. (Author)
Pablo E Jercog
Low-frequency sound localization depends on the neural computation of interaural time differences (ITDs) and relies on neurons in the auditory brain stem that integrate synaptic inputs delivered by the ipsi- and contralateral auditory pathways that start at the two ears. The first auditory neurons that respond selectively to ITD are found in the medial superior olivary nucleus (MSO). We identified a new mechanism for ITD coding using a brain slice preparation that preserves the binaural inputs to the MSO. There was an internal latency difference between the two excitatory pathways that would, if left uncompensated, position the ITD response function too far outside the physiological range to be useful for estimating ITD. We demonstrate, and support with a biophysically based computational model, that a bilateral asymmetry in excitatory post-synaptic potential (EPSP) slopes provides a robust compensatory delay mechanism, due to differential activation of a low-threshold potassium conductance on these inputs, and permits MSO neurons to encode physiological ITDs. We suggest, more generally, that the dependence of spike probability on the rate of depolarization, as in these auditory neurons, provides a mechanism for temporal-order discrimination between EPSPs.
Schormans, Ashley L.; Scott, Kaela E.; Vo, Albert M. Q.; Tyker, Anna; Typlt, Marei; Stolzberg, Daniel; Allman, Brian L.
Extensive research on humans has improved our understanding of how the brain integrates information from our different senses, and has begun to uncover the brain regions and large-scale neural activity that contribute to an observer's ability to perceive the relative timing of auditory and visual stimuli. In the present study, we developed the first behavioral tasks to assess the perception of audiovisual temporal synchrony in rats. Modeled after the parameters used in human studies, separate groups of rats were trained to perform: (1) a simultaneity judgment task in which they reported whether audiovisual stimuli at various stimulus onset asynchronies (SOAs) were presented simultaneously or not; and (2) a temporal order judgment task in which they reported whether they perceived the auditory or visual stimulus to have been presented first. Furthermore, using in vivo electrophysiological recordings in the lateral extrastriate visual (V2L) cortex of anesthetized rats, we performed the first investigation of how neurons in the rat multisensory cortex integrate audiovisual stimuli presented at different SOAs. As predicted, rats (n = 7) trained to perform the simultaneity judgment task could accurately (~80%) identify synchronous vs. asynchronous (200 ms SOA) trials. Moreover, the rats judged trials at 10 ms SOA to be synchronous, whereas the majority (~70%) of trials at 100 ms SOA were perceived to be asynchronous. During the temporal order judgment task, rats (n = 7) perceived the synchronous audiovisual stimuli to be "visual first" for ~52% of the trials, and calculation of the smallest timing interval between the auditory and visual stimuli that could be detected in each rat (i.e., the just noticeable difference, JND) ranged from 77 ms to 122 ms. Neurons in the rat V2L cortex were sensitive to the timing of audiovisual stimuli, such that spiking activity was greatest during trials when the visual stimulus preceded the auditory by 20–40 ms. Ultimately…
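A just noticeable difference (JND) like the 77–122 ms range reported above can be read off a psychometric function by interpolating the SOA at a criterion performance level; a sketch under that assumption, with illustrative data (the 75% criterion and the scores are not from the study):

```python
def jnd_from_psychometric(soas_ms, p_correct, criterion=0.75):
    """Linearly interpolate the SOA at which the proportion of correct
    order judgments crosses the criterion (a common JND estimate)."""
    points = list(zip(soas_ms, p_correct))
    for (s0, p0), (s1, p1) in zip(points, points[1:]):
        if p0 <= criterion <= p1:
            # linear interpolation between the bracketing points
            return s0 + (criterion - p0) * (s1 - s0) / (p1 - p0)
    raise ValueError("criterion not crossed within the tested SOAs")

# Illustrative data: performance rises with SOA; 75% falls at 100 ms
soas = [10, 50, 100, 200]
p = [0.52, 0.65, 0.75, 0.90]
print(jnd_from_psychometric(soas, p))  # → 100.0
```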
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Edwards, Erik; Chang, Edward F
Given recent interest in syllabic rates (∼2-5 Hz) for speech processing, we review the perception of "fluctuation" range (∼1-10 Hz) modulations during listening to speech and technical auditory stimuli (AM and FM tones and noises, and ripple sounds). We find evidence that the temporal modulation transfer function (TMTF) of human auditory perception is not simply low-pass in nature, but rather exhibits a peak in sensitivity in the syllabic range (∼2-5 Hz). We also address human and animal neurophysiological evidence, and argue that this bandpass tuning arises at the thalamocortical level and is more associated with non-primary regions than primary regions of cortex. The bandpass rather than low-pass TMTF has implications for modeling auditory central physiology and speech processing: this implicates temporal contrast rather than simple temporal integration, with contrast enhancement for dynamic stimuli in the fluctuation range. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.
Chen, Xianming; Wang, Maoxin; Deng, Yihong; Liang, Yonghui; Li, Jianzhong; Chen, Shiyan
Contralateral temporal lobe activation decreases with aging, regardless of hearing status, with elderly individuals showing a reduced right ear advantage. Aging and hearing loss may both contribute to the decline in speech discrimination seen in presbycusis. To evaluate presbycusis patients' auditory cortex activation under verbal stimulation, thirty-six subjects were enrolled: 10 presbycusis patients (mean age = 64 years, range = 60-70), 10 in the healthy aged group (mean age = 66 years, range = 60-70), and 16 young healthy volunteers (mean age = 25 years, range = 23-28). These three groups underwent simultaneous 1 kHz and 90 dB single-syllable word stimuli and blood-oxygen-level-dependent functional magnetic resonance imaging (BOLD fMRI) examinations. The main activation regions were the superior temporal and middle temporal gyri. For all aged subjects, the right region of interest (ROI) activation volume was decreased compared with the young group. With left ear stimulation, bilateral ROI activation intensity was unchanged. With right ear stimulation, the aged group's activation intensity was higher. Using monaural stimulation in the young group, contralateral temporal lobe activation volume and intensity were higher vs. ipsilateral, while they were lower in the aged and presbycusis groups. On left and right ear auditory tasks, the young group showed a right ear advantage, while the aged and presbycusis groups showed a reduced right ear advantage.
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese, and it contributed significantly to tone sensitivity and intonation production. © The Author(s) 2016.
Alexandra Annemarie Ludwig
Studies on the maturation of auditory motion processing in children have yielded inconsistent reports. The present study combines subjective and objective measurements to investigate how the auditory perceptual abilities of children change during development and whether these changes are paralleled by changes in the event-related brain potential (ERP). We employed the mismatch negativity (MMN) to determine maturational changes in the discrimination of interaural time differences (ITDs) that generate lateralized moving auditory percepts. MMNs were elicited in children, teenagers, and adults, using a small and a large ITD at stimulus offset with respect to each subject’s discrimination threshold. In adults and teenagers, large deviants elicited prominent MMNs, whereas small deviants at the behavioral threshold elicited only a marginal or no MMN. In contrast, pronounced MMNs for both deviant sizes were found in children. Behaviourally, however, most of the children showed higher discrimination thresholds than teens and adults. Although automatic ITD detection is functional, active discrimination is still limited in children. The lack of MMN deviance dependency in children suggests that, unlike in teenagers and adults, neural signatures of automatic auditory motion processing do not mirror discrimination abilities. The study contributes to an advanced understanding of children’s central auditory development.
Carney, Laurel H.; Shi, Lufeng; Doherty, Karen A.
Changes in gain associated with the basilar membrane compressive nonlinearity are accompanied by changes in the bandwidth of tuning. Filters with level-dependent bandwidth have level-dependent phase properties. These phase properties result in level-dependent timing of sustained phase-locked responses of auditory-nerve (AN) fibers at low frequencies and level-dependent latencies at high frequencies, where phase-locking rolls off. In the healthy ear, level-dependent temporal aspects of AN responses carry information about stimulus level and spectral properties. Loss of compression with hearing impairment thus results not only in a reduction of amplification, but also in distortion of the temporal response pattern of the AN. The temporal aspects of compression suggest that signal-processing schemes that attempt to correct sounds, or restore normal spatio-temporal response patterns, should include dynamic level-dependent phase shifts. A nonlinear signal-processing scheme will be presented which includes dynamic frequency- and level-dependent phase shifts, based on physiological models of the temporal response properties of AN fibers. Preliminary testing measured listeners' preferences for sentences and intelligibility of vowel-consonant syllables with different degrees of nonlinear processing. Hearing-impaired listeners tended to prefer the dynamically corrected stimuli based on improved clarity. Correction also improved intelligibility for some phonemes. [Work supported by NIDCD R21-006057.]
Bidelman, Gavin M.
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294
Prather, Jonathan F
Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013
Guerreiro, Maria J S; Eck, Judith; Moerel, Michelle; Evers, Elisabeth A T; Van Gerven, Pascal W M
Age-related cognitive decline has been accounted for by an age-related deficit in top-down attentional modulation of sensory cortical processing. In light of recent behavioral findings showing that age-related differences in selective attention are modality dependent, our goal was to investigate the role of sensory modality in age-related differences in top-down modulation of sensory cortical processing. This question was addressed by testing younger and older individuals in several memory tasks while undergoing fMRI. Throughout these tasks, perceptual features were kept constant while attentional instructions were varied, allowing us to devise all combinations of relevant and irrelevant, visual and auditory information. We found no top-down modulation of auditory sensory cortical processing in either age group. In contrast, we found top-down modulation of visual cortical processing in both age groups, and this effect did not differ between age groups. That is, older adults enhanced cortical processing of relevant visual information and suppressed cortical processing of visual distractors during auditory attention to the same extent as younger adults. The present results indicate that older adults are capable of suppressing irrelevant visual information in the context of cross-modal auditory attention, and thereby challenge the view that age-related attentional and cognitive decline is due to a general deficit in the ability to suppress irrelevant information. Copyright © 2014 Elsevier B.V. All rights reserved.
Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva
Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments, AP tests and completed the (modified) Amsterdam Inventory for Auditory Disability and Hearing Handicap Inventory for Elderly questionnaires. Nine of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. When babble was spatially separated from the target speech, the SRM was significantly greater with the FM systems in the patients' ears than without them. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation: This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems when noise was spatially separated from the…
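The SRM arithmetic defined above (the difference between the 50%-correct SNR with co-located speech and babble and the SNR with spatially separated speech and babble) can be sketched in a few lines; the SNR values below are hypothetical, not data from the study.

```python
# Hypothetical 50%-correct speech reception thresholds, in dB SNR.
snr_colocated = -2.0   # speech and babble both presented from 0 degrees
snr_separated = -8.0   # babble moved to +/-90 degrees (a lower, better SNR)

# Spatial release from masking: positive values mean the listener
# benefited from the spatial separation of speech and babble.
srm = snr_colocated - snr_separated
print(srm)  # prints 6.0
```

A larger SRM with the FM system than without it, as reported above, would indicate that the device helps the listener exploit the spatial separation between target and masker.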
Bigliassi, Marcelo; Karageorghis, Costas I; Nowicky, Alexander V; Wright, Michael J; Orgs, Guido
Highly demanding cognitive-motor tasks can be negatively influenced by the presence of auditory stimuli. The human brain attempts to partially suppress the processing of potential distractors so that motor tasks can be completed successfully. The present study sought to further understand the attentional neural systems that activate in response to potential distractors during the execution of movements. Nineteen participants (9 women and 10 men) were administered isometric ankle-dorsiflexion tasks for 10 s at a light intensity. Electroencephalography (EEG) was used to assess the electrical activity in the brain, and a music excerpt was used to distract participants. Three conditions were administered: auditory distraction during the execution of movement (auditory distraction; AD), movement execution in the absence of auditory distraction (control; CO), and auditory distraction in the absence of movement (stimulus-only; SO). AD was compared with SO to identify the mechanisms underlying the attentional processing associated with attentional shifts from internal (task-related) processing to external (task-unrelated) sensory cues. The results of the present study indicated that EMG amplitude was not compromised when the auditory stimulus was administered. Accordingly, EEG activity was upregulated at 0.368 s in AD when compared to SO. Source reconstruction analysis indicated that right and central parietal regions of the cortex activated at 0.368 s in order to reduce the processing of task-irrelevant stimuli during the execution of movements. The brain mechanisms that underlie the control of potential distractors during exercise were possibly associated with the activity of the frontoparietal network.
Iliadou, Vasiliki; Bamiou, Doris Eva
Purpose: To investigate the clinical utility of the Children's Auditory Processing Performance Scale (CHAPPS; Smoski, Brunt, & Tannahill, 1992) to evaluate listening ability in 12-year-old children referred for auditory processing assessment. Method: This was a prospective case control study of 97 children (age range = 11;4 [years;months] to…
Cosma, I.; Popescu, D. I.
For the hearing sense, mechanoreceptors fire action potentials when their membranes are physically stretched. Based on statistical physics, we analyzed the entropic aspects of auditory processes in hearing. We developed a model that connects the logarithm of the relative intensity of sound (loudness) to the level of energy disorder within the cellular sensory system. The increase of entropy and disorder in the system is connected to the free energy available to signal the production of action potentials in the inner hair cells of the vestibulocochlear auditory organ.
In nature, communication sounds among animal species, including humans, are typically complex sounds that occur in sequence and vary with time in several parameters, including amplitude, frequency and duration, as well as the separation and order of individual sounds. Among these multiple parameters, sound duration is a simple but important one that contributes to the distinct spectral and temporal attributes of individual biological sounds. Likewise, the separation of individual sounds is an important temporal attribute that determines an animal's ability to distinguish individual sounds. Whereas duration selectivity of auditory neurons underlies an animal's ability to recognize sound duration, the recovery cycle of auditory neurons determines a neuron's ability to respond to closely spaced sound pulses, and therefore it underlies the animal's ability to analyze the order of individual sounds. Since the multiple parameters of naturally occurring communication sounds vary with time, the analysis of a specific sound parameter by an animal would inevitably be affected by other co-varying sound parameters. This is particularly obvious in insectivorous bats, which rely on analysis of returning echoes for prey capture while they systematically vary the multiple pulse parameters throughout a target approach sequence. In this review article, we present our studies of dynamic variation of duration selectivity and recovery cycle of neurons in the central nucleus of the inferior colliculus of frequency-modulated bats to highlight the dynamic temporal signal processing of central auditory neurons. These studies use single pulses and three biologically relevant pulse-echo (P-E) pairs with varied duration, gap and amplitude difference, similar to those occurring during the search, approach and terminal phases of hunting by bats. These studies show that most collicular neurons respond maximally to a best tuned sound duration (BD). The sound to which these…
Bigand, Emmanuel; Delbé, Charles; Gérard, Yannick; Tillmann, Barbara
The present study investigated the minimum amount of auditory stimulation that allows differentiation of spoken voices, instrumental music, and environmental sounds. Three new findings were reported. 1) All stimuli were categorized above chance level with 50-ms segments. 2) When a peak-level normalization was applied, music and voices started to be accurately categorized with 20-ms segments. When the root-mean-square (RMS) energy of the stimuli was equalized, voice stimuli were better recognized than music and environmental sounds. 3) Further psychoacoustical analyses suggest that the categorization of extremely brief auditory stimuli depends on the variability of their spectral envelope within the stimulus set used. These last two findings challenge the interpretation of the voice superiority effect reported in previously published studies and propose a more parsimonious interpretation in terms of an emerging property of auditory categorization processes. PMID:22046436
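The two level-equalization schemes contrasted above (peak-level normalization vs. RMS energy equalization) can be sketched as follows. The segment, sample rate, and target RMS are illustrative choices, not the study's stimuli.

```python
import numpy as np

# An arbitrary 20-ms, 440 Hz tone segment at 44.1 kHz (illustrative only).
fs = 44100
t = np.arange(int(0.020 * fs)) / fs
segment = 0.3 * np.sin(2 * np.pi * 440 * t)

# Peak-level normalization: scale so the maximum absolute sample is 1.
peak_normed = segment / np.max(np.abs(segment))

# RMS equalization: scale so the root-mean-square energy matches a
# common target across all stimuli in the set.
target_rms = 0.1
rms = np.sqrt(np.mean(segment ** 2))
rms_normed = segment * (target_rms / rms)
```

The design choice matters perceptually: peak normalization equates maximum amplitude but leaves overall energy free to vary, whereas RMS equalization equates energy but lets peak levels differ, which is one reason the two schemes yielded different categorization results above.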
Lorusso, Maria Luisa; Cantiani, Chiara; Molteni, Massimo
The nature of Rapid Auditory Processing (RAP) deficits in dyslexia remains debated, together with the specificity of the problem to certain types of stimuli and/or restricted subgroups of individuals. Following the hypothesis that the heterogeneity of the dyslexic population may have led to contrasting results, the aim of the study was to define the effect of age, dyslexia subtype and comorbidity on the discrimination and reproduction of non-verbal tone sequences. Participants were 46 children aged 8-14 (26 with dyslexia, subdivided according to age, presence of a previous language delay, and type of dyslexia). Experimental tasks were a Temporal Order Judgment (TOJ) (manipulating tone length, ISI and sequence length), and a Pattern Discrimination Task. Dyslexic children showed general RAP deficits. Tone length and ISI influenced dyslexic and control children's performance in a similar way, but dyslexic children were more affected by an increase from 2 to 5 sounds. As to age, older dyslexic children's difficulty in reproducing sequences of 4 and 5 tones was similar to that of normally reading younger (but not older) children. In the analysis of subgroup profiles, the crucial variable appears to be the advantage, or lack thereof, in processing long vs. short sounds. Dyslexic children with a previous language delay obtained the lowest scores in RAP measures, but they performed worse with shorter stimuli, similar to control children, while dyslexic-only children showed no advantage for longer stimuli. As to dyslexia subtype, only surface dyslexics improved their performance with longer stimuli, while phonological dyslexics did not. Differential scores for short vs. long tones and for long vs. short ISIs predict non-word and word reading, respectively, and the former correlate with phonemic awareness. In conclusion, the relationship between non-verbal RAP, phonemic skills and reading abilities appears to be characterized by complex interactions with subgroup
Charles R Larson
The pitch-shift paradigm has become a widely used method for studying the role of voice pitch auditory feedback in voice control. This paradigm introduces small, brief pitch shifts into the voice auditory feedback of vocalizing subjects. The perturbations trigger a reflexive mechanism that counteracts the change in pitch. The underlying mechanisms of the vocal responses are thought to reflect a negative feedback control system that is similar to constructs developed to explain other forms of motor control. Another use of this technique requires subjects to voluntarily change the pitch of their voice when they hear a pitch shift stimulus. Under these conditions, short latency responses are produced that change voice pitch to match that of the stimulus. The pitch-shift technique has been used with magnetoencephalography (MEG) and electroencephalography (EEG) recordings, and has shown that at vocal onset there is normally a suppression of neural activity related to vocalization. However, if a pitch shift is also presented at voice onset, there is a cancellation of this suppression, which has been interpreted to mean that one way in which a person distinguishes self-vocalization from the vocalization of others is by a comparison of the intended voice and the actual voice. Studies of the pitch-shift reflex in the fMRI environment show that the superior temporal gyrus (STG) plays an important role in the process of controlling voice F0 based on auditory feedback. Additional studies using fMRI for effective connectivity modeling show that the left and right STG play critical roles in correcting for an error in voice production. While both the left and right STG are involved in this process, a feedback loop develops between the left and right STG during perturbations, in which the left-to-right connection becomes stronger and a new negative right-to-left connection emerges, along with the emergence of other feedback loops within the cortical network tested.
Zhao, T Christina; Kuhl, Patricia K
Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants' neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants' neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants' neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants' ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing.
Nittrouer, Susan; Shune, Samantha; Lowenstein, Joanna H.
Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties,…
Alain, Claude; Tremblay, Kelly
The perception of complex acoustic signals such as speech and music depends on the interaction between peripheral and central auditory processing. As information travels from the cochlea to primary and associative auditory cortices, the incoming sound is subjected to increasingly more detailed and refined analysis. These various levels of analyses are thought to include low-level automatic processes that detect, discriminate and group sounds that are similar in physical attributes such as frequency, intensity, and location as well as higher-level schema-driven processes that reflect listeners' experience and knowledge of the auditory environment. In this review, we describe studies that have used event-related brain potentials in investigating the processing of complex acoustic signals (e.g., speech, music). In particular, we examine the role of hearing loss on the neural representation of sound and how cognitive factors and learning can help compensate for perceptual difficulties. The notion of auditory scene analysis is used as a conceptual framework for interpreting and studying the perception of sound.
Wallach, Geraldine P.
Purpose: This article addresses auditory processing disorder (APD) from a language-based perspective. The author asks speech-language pathologists to evaluate the functionality (or not) of APD as a diagnostic category for children and adolescents with language-learning and academic difficulties. Suggestions are offered from a…
The study investigated the processing of sound motion, employing a psychophysical motion discrimination task in combination with electroencephalography. Following stationary auditory stimulation from a central space position, the onset of left- and rightward motion elicited a specific cortical response that was lateralized to the hemisphere…
Nguyen-Hoan, Minh; Taft, Marcus
For bilinguals born in an English-speaking country or who arrive at a young age, English (L2) often becomes their dominant language by adulthood. This study examines whether such adult bilinguals show equivalent performance to monolingual English native speakers on three English auditory processing tasks: phonemic awareness, spelling-to-dictation…
Morell, Robert J.; Brewer, Carmen C.; Ge, Dongliang; Snieder, Harold; Zalewski, Christopher K.; King, Kelly A.; Drayna, Dennis; Friedman, Thomas B.
We administered tests commonly used in the diagnosis of auditory processing disorders (APDs) to twins recruited from the general population. We observed significant correlations in test scores between co-twins. Our analyses of test score correlations among 106 MZ and 33 DZ twin pairs indicate that…
Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Rosen, Stuart
Purpose: To examine the impact of language background and language-related disorders (LRDs--dyslexia and/or language impairment) on performance in English speech and nonspeech tests of auditory processing (AP) commonly used in the clinic. Method: A clinical database concerning 133 multilingual children (mostly with English as an additional…
Auditory processing disorders (APDs) are of interest to educators and clinicians, as they impact school functioning. Little work has been completed to demonstrate how children with APDs perform on clinical tests. In a series of studies, standard clinical (psychometric) tests from the Wechsler Intelligence Scale for Children, Fourth Edition…
Jimenez-Fernandez, Angel; Cerezuela-Escudero, Elena; Miro-Amarante, Lourdes; Dominguez-Morales, Manuel Jesus; de Asis Gomez-Rodriguez, Francisco; Linares-Barranco, Alejandro; Jimenez-Moreno, Gabriel
This paper presents a new architecture, design flow, and field-programmable gate array (FPGA) implementation analysis of a neuromorphic binaural auditory sensor, designed completely in the spike domain. Unlike digital cochleae that decompose audio signals using classical digital signal processing techniques, the model presented in this paper processes information directly encoded as spikes using pulse frequency modulation and provides a set of frequency-decomposed audio information using an address-event representation interface. In this case, a systematic approach to design led to a generic process for building, tuning, and implementing audio frequency decomposers with different features, facilitating synthesis with custom features. This allows researchers to implement their own parameterized neuromorphic auditory systems in a low-cost FPGA in order to study the audio processing and learning activity that takes place in the brain. In this paper, we present a 64-channel binaural neuromorphic auditory system implemented in a Virtex-5 FPGA using a commercial development board. The system was excited with a diverse set of audio signals in order to analyze its response and characterize its features. The neuromorphic auditory system response times and frequencies are reported. The experimental results of the proposed 64-channel stereo implementation are: a frequency range between 9.6 Hz and 14.6 kHz (adjustable), a maximum output event rate of 2.19 Mevents/s, a power consumption of 29.7 mW, a slice utilization of 11,141, and a system clock frequency of 27 MHz.
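The spike-domain encoding described above can be illustrated with a toy sketch: pulse frequency modulation (PFM) turns signal amplitude into spike rate, and each spike is emitted as an address-event, i.e. a (channel, timestamp) pair. The function name and parameters below are illustrative assumptions, not the paper's FPGA design.

```python
# Toy sketch of pulse-frequency modulation (PFM) producing an
# address-event representation (AER) stream. Illustrative only; it does
# not reproduce the authors' spike-domain FPGA architecture.

def pfm_encode(samples, channel, fs=16000.0, threshold=0.1):
    """Encode a rectified signal as AER events: (channel, timestamp_s).

    Amplitude is integrated over time; each threshold crossing emits one
    spike, so a stronger input yields a higher spike rate (PFM).
    """
    events = []
    accumulator = 0.0
    for n, x in enumerate(samples):
        accumulator += abs(x) / fs  # integrate rectified amplitude
        if accumulator >= threshold:
            events.append((channel, n / fs))  # address-event: (who, when)
            accumulator -= threshold
    return events

# A strong constant input on one channel fires more events than a weak one.
loud = pfm_encode([0.8] * 16000, channel=3)   # 1 s of strong input
quiet = pfm_encode([0.2] * 16000, channel=3)  # 1 s of weak input
print(len(loud), len(quiet))
```

Downstream, an AER interface would serialize such (address, timestamp) events; here they are simply collected in a list.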
Chung, Kevin K. H.; McBride-Chang, Catherine; Cheung, Him; Wong, Simpson W. L.
This study focused on the associations of general auditory processing, speech perception, phonological awareness and word reading in Cantonese-speaking children from Hong Kong learning to read both Chinese (first language [L1]) and English (second language [L2]). Children in Grades 2--4 ("N" = 133) participated and were administered…
In this thesis, different aspects of central auditory processing in the inferior colliculus (IC) of young-adult mice and rats are described. With the “in vivo patch-clamp” technique we investigated the contribution of membrane properties and synaptic integration of excitatory and…
Key, Alexandra P.; Gustafson, Samantha J.; Rentmeester, Lindsey; Hornsby, Benjamin W. Y.; Bess, Fred H.
Purpose: Fatigue related to speech processing is an understudied area that may have significant negative effects, especially in children who spend the majority of their school days listening to classroom instruction. Method: This study examined the feasibility of using auditory P300 responses and behavioral indices (lapses of attention and…
Demopoulos, Carly; Hopkins, Joyce; Kopald, Brandon E; Paulson, Kim; Doyle, Lauren; Andrews, Whitney E; Lewine, Jeffrey David
The primary aim of this study was to examine whether there is an association between magnetoencephalography-based (MEG) indices of basic cortical auditory processing and vocal affect recognition (VAR) ability in individuals with autism spectrum disorder (ASD). MEG data were collected from 25 children/adolescents with ASD and 12 control participants using a paired-tone paradigm to measure quality of auditory physiology, sensory gating, and rapid auditory processing. Group differences were examined in auditory processing and vocal affect recognition ability. The relationship between differences in auditory processing and vocal affect recognition deficits was examined in the ASD group. Replicating prior studies, participants with ASD showed longer M1n latencies and impaired rapid processing compared with control participants. These variables were significantly related to VAR, with the linear combination of auditory processing variables accounting for approximately 30% of the variability after controlling for age and language skills in participants with ASD. VAR deficits in ASD are typically interpreted as part of a core, higher order dysfunction of the "social brain"; however, these results suggest they also may reflect basic deficits in auditory processing that compromise the extraction of socially relevant cues from the auditory environment. As such, they also suggest that therapeutic targeting of sensory dysfunction in ASD may have additional positive implications for other functional deficits. (c) 2015 APA, all rights reserved.
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
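The two internal representations described above can be sketched numerically: an autocorrelation function (ACF) of one ear's signal for temporal sensations, and the peak interaural cross-correlation (IACC) within roughly ±1 ms of lag for spatial sensations. The code below is a toy illustration with simplified normalization, not Ando's full model.

```python
# Toy numerical sketch of the two internal central auditory
# representations: a normalized autocorrelation function (ACF) and the
# peak interaural cross-correlation (IACC). Illustrative only.
import math

def autocorrelation(x, max_lag):
    """Normalized ACF phi(tau) for lags 0..max_lag (in samples)."""
    energy = sum(v * v for v in x)
    return [sum(x[n] * x[n + lag] for n in range(len(x) - lag)) / energy
            for lag in range(max_lag + 1)]

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak normalized interaural cross-correlation within +/- 1 ms."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = math.sqrt(sum(v * v for v in left) * sum(v * v for v in right))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        s = sum(left[n] * right[n + lag]
                for n in range(max(0, -lag), min(len(left), len(right) - lag)))
        best = max(best, abs(s) / norm)
    return best

fs = 8000
tone = [math.sin(2 * math.pi * 440 * n / fs) for n in range(800)]
acf = autocorrelation(tone, 40)
print(round(acf[0], 3))                # ACF at zero lag is 1 by normalization
print(round(iacc(tone, tone, fs), 3))  # identical ear signals -> IACC near 1
```

In Ando's framework, a periodic (high-ACF) signal corresponds to a clear pitch sensation, and an IACC near 1 corresponds to a compact, well-localized sound image.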
Centanni, T M; Booker, A B; Sloan, A M; Chen, F; Maher, B J; Carraway, R S; Khodaparast, N; Rennaker, R; LoTurco, J J; Kilgard, M P
One in 15 school age children have dyslexia, which is characterized by phoneme-processing problems and difficulty learning to read. Dyslexia is associated with mutations in the gene KIAA0319. It is not known whether reduced expression of KIAA0319 can degrade the brain's ability to process phonemes. In the current study, we used RNA interference (RNAi) to reduce expression of Kiaa0319 (the rat homolog of the human gene KIAA0319) and evaluate the effect in a rat model of phoneme discrimination. Speech discrimination thresholds in normal rats are nearly identical to human thresholds. We recorded multiunit neural responses to isolated speech sounds in primary auditory cortex (A1) of rats that received in utero RNAi of Kiaa0319. Reduced expression of Kiaa0319 increased the trial-by-trial variability of speech responses and reduced the neural discrimination ability of speech sounds. Intracellular recordings from affected neurons revealed that reduced expression of Kiaa0319 increased neural excitability and input resistance. These results provide the first evidence that decreased expression of the dyslexia-associated gene Kiaa0319 can alter cortical responses and impair phoneme processing in auditory cortex. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: email@example.com.
Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula
Hypersensitivity to sounds is one of the prevalent symptoms in individuals with Fragile X syndrome (FXS). It manifests behaviorally early during development and is often used as a landmark for treatment efficacy. However, the physiological mechanisms and circuit-level alterations underlying this aberrant behavior remain poorly understood. Using the mouse model of FXS ( Fmr1 KO ), we demonstrate that functional maturation of auditory brainstem synapses is impaired in FXS. Fmr1 KO mice showed a greatly enhanced excitatory synaptic input strength in neurons of the lateral superior olive (LSO), a prominent auditory brainstem nucleus, which integrates ipsilateral excitation and contralateral inhibition to compute interaural level differences. Conversely, the glycinergic, inhibitory input properties remained unaffected. The enhanced excitation was the result of an increased number of cochlear nucleus fibers converging onto one LSO neuron, without changing individual synapse properties. Concomitantly, immunolabeling of excitatory ending markers revealed an increase in the immunolabeled area, supporting abnormally elevated excitatory input numbers. Intrinsic firing properties were only slightly enhanced. In line with the disturbed development of LSO circuitry, auditory processing was also affected in adult Fmr1 KO mice as shown with single-unit recordings of LSO neurons. These processing deficits manifested as an increase in firing rate, a broadening of the frequency response area, and a shift in the interaural level difference function of LSO neurons. Our results suggest that this aberrant synaptic development of auditory brainstem circuits might be a major underlying cause of the auditory processing deficits in FXS. SIGNIFICANCE STATEMENT Fragile X Syndrome (FXS) is the most common inheritable form of intellectual impairment, including autism. A core symptom of FXS is extreme sensitivity to loud sounds. This is one reason why individuals with FXS tend to avoid social
Patricia Aparecida Zuanetti
Full Text Available CONTEXT AND OBJECTIVE: Malnutrition is one of the causes of changes in cell metabolism. The inner ear has few energy reserves and high metabolism. The aim of this study was to analyze whether malnutrition at an early age is related to impairment of auditory processing abilities and hearing abnormalities. DESIGN AND SETTING: Retrospective cohort study conducted in a tertiary public hospital. METHODS: 45 children participated, divided as follows: G1, children diagnosed with malnutrition in their first two years of life; G2, children without history of malnutrition but with learning difficulties; G3, children without history of malnutrition and without learning difficulties. Tympanometry, pure-tone audiometry and the Staggered Spondaic Word (SSW) test (auditory processing) were performed. Statistical inferences were made using the Kruskal-Wallis test (α = 5%) and the test of equality of proportions between two samples (α = 1.7%). RESULTS: None of the 45 children participating in this study presented hearing deficiencies. However, at six of the eight frequencies analyzed, the children in G1 presented hearing thresholds lower than those of the other groups. In the auditory processing evaluation test, it was observed that 100% of the children in G1 presented abnormal auditory processing and that G1 and G2 had similar proportions of abnormalities (P-values: G1/G2 = 0.1; G1/G3 < 0.001; G2/G3 = 0.008). CONCLUSIONS: Malnutrition at an early age caused lowering of the hearing levels, although this impairment could not be considered to be a hearing deficiency. Every child in this group presented abnormalities in auditory processing abilities.
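The statistics above pair a Kruskal-Wallis test at α = 5% with pairwise tests of equality of proportions at α = 1.7% (roughly 5%/3, consistent with a Bonferroni-style split across the three group pairs). A hedged sketch of such a two-sample proportion test, using invented counts rather than the study's data:

```python
# Sketch of a two-sample test of equality of proportions with a
# Bonferroni-style per-comparison alpha of 5%/3 ~= 1.7%. The counts
# below are invented for illustration; they are not the study's data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for H0: p1 == p2 (pooled standard error)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical: 15/15 abnormal results in one group vs 4/15 in another.
z, p = two_proportion_z(15, 15, 4, 15)
alpha = 0.05 / 3  # per-comparison threshold for three pairwise tests
print(p < alpha)
```

With these made-up counts the difference in proportions is significant even at the corrected threshold; the normal approximation is, of course, rough for samples this small.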
Vander Werff, Kathy R; Rieger, Brian
The primary aim of this study was to assess subcortical auditory processing in individuals with chronic symptoms after mild traumatic brain injury (mTBI) by measuring auditory brainstem responses (ABRs) to standard click and complex speech stimuli. Consistent with reports in the literature of auditory problems after mTBI (despite normal-hearing thresholds), it was hypothesized that individuals with mTBI would have evidence of impaired neural encoding in the auditory brainstem compared to noninjured controls, as evidenced by delayed latencies and reduced amplitudes of ABR components. We further hypothesized that the speech-evoked ABR would be more sensitive than the click-evoked ABR to group differences because of its complex nature, particularly when recorded in a background noise condition. Click- and speech-ABRs were collected in 32 individuals diagnosed with mTBI in the past 3 to 18 months. All mTBI participants were experiencing ongoing injury symptoms for which they were seeking rehabilitation through a brain injury rehabilitation management program. The same data were collected in a group of 32 age- and gender-matched controls with no history of head injury. ABRs were recorded in both left and right ears for all participants in all conditions. Speech-ABRs were collected in both quiet and in a background of continuous 20-talker babble ipsilateral noise. Peak latencies and amplitudes were compared between groups and across subgroups of mTBI participants categorized by their behavioral auditory test performance. Click-ABR results were not significantly different between the mTBI and control groups. However, when comparing the control group to only those mTBI subjects with measurably decreased performance on auditory behavioral tests, small differences emerged, including delayed latencies for waves I, III, and V. Similarly, few significant group differences were observed for peak amplitudes and latencies of the speech-ABR when comparing at the whole group level
Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D
To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.
Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon
The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.
Iliadou, Vassiliki; Kaprinis, Stergios; Kandylis, Dimitrios; Kaprinis, George St
One of the widely used tests to evaluate functional asymmetry of cerebral hemispheres is the dichotic listening test with the usually prevailing right ear advantage. The current study aims at assessing hemispheric laterality in an adult sample of individuals with dyslexia, with auditory processing disorder (APD), and adults experiencing comorbidity of the two mentioned disorders against a control group with normal hearing and absence of learning disabilities. Results exhibit a right hemispheric dominance for the control and APD group, a left hemispheric dominance for the group diagnosed with both dyslexia and APD, and absence of dominance for the dyslexia group. Assessment of laterality was repeatable and produced stable results, indicating a true deficit. A component of auditory processing, specifically the auditory performance in competing acoustic signals, seems to be deficient in all three groups, and laterality of hemispheric functions influenced at least for auditory-language stimuli in the two of the three groups, one being adults with dyslexia and the other being adults with comorbidity of dyslexia and APD.
Gherri, Elena; Driver, Jon; Eimer, Martin
To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events.
Klinkenberg, Inge; Blokland, Arjan; Riedel, Wim J; Sambeth, Anke
Suppression of redundant auditory information and facilitation of deviant, novel, or salient sounds can be assessed with paired-click and oddball tasks, respectively. Electrophysiological correlates of perturbed auditory processing found in these paradigms are likely to be a trait marker or candidate endophenotype for schizophrenia. This is the first study to investigate the effects of the muscarinic M1 antagonist biperiden and the cholinesterase inhibitor rivastigmine on auditory-evoked potentials (AEPs), sensory gating, and mismatch negativity (MMN) in young, healthy volunteers. Biperiden increased P50 amplitude and prolonged N100 and P200 latency in the paired-click task but did not affect sensory gating. Rivastigmine was able to reverse the effects of biperiden on N100 and P200 latency. Biperiden increased P50 latency in the novelty oddball task, which was reversed by concurrent administration of rivastigmine. Rivastigmine shortened N100 latency and enhanced P3a amplitude in the novelty oddball paradigm, both of which were reversed by biperiden. The muscarinic M1 receptor appears to be involved in preattentive processing of auditory information in the paired-click task. Additional effects of biperiden versus rivastigmine were reversed by a combination treatment, which renders attribution of these findings to muscarinic M1 versus muscarinic M2-M5 or nicotinic receptors much more difficult. It remains to be seen whether the effects of cholinergic drugs on AEPs are specifically related to the abnormalities found in schizophrenia. Alternatively, aberrant auditory processing could also be indicative of a general disturbance in neural functioning shared by several neuropsychiatric disorders and/or neurodegenerative changes seen in aging.
Full Text Available BACKGROUND: Recognizing an object requires binding together several cues, which may be distributed across different sensory modalities, and ignoring competing information originating from other objects. In addition, knowledge of the semantic category of an object is fundamental to determine how we should react to it. Here we investigate the role of semantic categories in the processing of auditory-visual objects. METHODOLOGY/FINDINGS: We used an auditory-visual object-recognition task (go/no-go paradigm). We compared recognition times for two categories: a biologically relevant one (animals) and a non-biologically relevant one (means of transport). Participants were asked to react as fast as possible to target objects, presented in the visual and/or the auditory modality, and to withhold their response for distractor objects. A first main finding was that, when participants were presented with unimodal or bimodal congruent stimuli (an image and a sound from the same object), similar reaction times were observed for all object categories. Thus, there was no advantage in the speed of recognition for biologically relevant compared to non-biologically relevant objects. A second finding was that, in the presence of a biologically relevant auditory distractor, the processing of a target object was slowed down, whether or not it was itself biologically relevant. It seems impossible to effectively ignore an animal sound, even when it is irrelevant to the task. CONCLUSIONS/SIGNIFICANCE: These results suggest a specific and mandatory processing of animal sounds, possibly due to phylogenetic memory and consistent with the idea that hearing is particularly efficient as an alerting sense. They also highlight the importance of taking into account the auditory modality when investigating the way object concepts of biologically relevant categories are stored and retrieved.
Chonchaiya, Weerasak; Tardif, Twila; Mai, Xiaoqin; Xu, Lin; Li, Mingyan; Kaciroti, Niko; Kileny, Paul R; Shao, Jie; Lozoff, Betsy
Auditory processing capabilities at the subcortical level have been hypothesized to impact an individual's development of both language and reading abilities. The present study examined whether auditory processing capabilities relate to language development in healthy 9-month-old infants. Participants were 71 infants (31 boys and 40 girls) with both Auditory Brainstem Response (ABR) and language assessments. At 6 weeks and/or 9 months of age, the infants underwent ABR testing using both a standard hearing screening protocol with 30 dB clicks and a second protocol using click pairs separated by 8, 16, and 64-ms intervals presented at 80 dB. We evaluated the effects of interval duration on ABR latency and amplitude elicited by the second click. At 9 months, language development was assessed via parent report on the Chinese Communicative Development Inventory - Putonghua version (CCDI-P). Wave V latency z-scores of the 64-ms condition at 6 weeks showed strong direct relationships with Wave V latency in the same condition at 9 months. More importantly, shorter Wave V latencies at 9 months showed strong relationships with the CCDI-P composite consisting of phrases understood, gestures, and words produced. Likewise, infants who had greater decreases in Wave V latencies from 6 weeks to 9 months had higher CCDI-P composite scores. Females had higher language development scores and shorter Wave V latencies at both ages than males. Interestingly, when the ABR Wave V latencies at both ages were taken into account, the direct effects of gender on language disappeared. In conclusion, these results support the importance of low-level auditory processing capabilities for early language acquisition in a population of typically developing young infants. Moreover, the auditory brainstem response in this paradigm shows promise as an electrophysiological marker to predict individual differences in language development in young children. © 2012 Blackwell Publishing Ltd.
Full Text Available Functional neuroimaging of covert perceptual and cognitive processes can inform the diagnoses and prognoses of patients with disorders of consciousness, such as the vegetative and minimally conscious states (VS; MCS). Here we report an event-related potential (ERP) paradigm for detecting a hierarchy of auditory processes in a group of healthy individuals and patients with disorders of consciousness. Simple cortical responses to sounds were observed in all 16 patients; 7/16 (44%) patients exhibited markers of the differential processing of speech and noise; and 1 patient produced evidence of the semantic processing of speech (i.e., the N400 effect). In several patients, the level of auditory processing that was evident from ERPs was higher than the abilities that were evident from behavioural assessment, indicating a greater sensitivity of ERPs in some cases. However, there were no differences in auditory processing between VS and MCS patient groups, indicating a lack of diagnostic specificity for this paradigm. Reliably detecting semantic processing by means of the N400 effect in passively listening single subjects is a challenge. Multiple assessment methods are needed in order to fully characterise the abilities of patients with disorders of consciousness.
Full Text Available The auditory cortex is well known to be critical for music perception, including the perception of consonance and dissonance. Studies on the neural correlates of consonance and dissonance perception have largely employed non-invasive electrophysiological and functional imaging techniques in humans as well as neurophysiological recordings in animals, but the fine-grained spatiotemporal dynamics within the human auditory cortex remain unknown. We recorded electrocorticographic (ECoG) signals directly from the lateral surface of either the left or right temporal lobe of 8 patients undergoing neurosurgical treatment as they passively listened to highly consonant and highly dissonant musical chords. We assessed ECoG activity in the high gamma (γhigh, 70–150 Hz) frequency range within the superior temporal gyrus (STG) and observed two types of cortical sites of interest in both hemispheres: one type showed no significant difference in γhigh activity between consonant and dissonant chords, and another type showed increased γhigh responses to dissonant chords between 75–200 ms post-stimulus onset. Furthermore, a subset of these sites exhibited additional sensitivity towards different types of dissonant chords. We also observed a distinct spatial organization of cortical sites in the right STG, with dissonant-sensitive sites located anterior to non-sensitive sites. In sum, these findings demonstrate differential processing of consonance and dissonance in bilateral STG, with the right hemisphere exhibiting robust and spatially organized sensitivity towards dissonance.
Martins, Mauricio Dias; Gingras, Bruno; Puig-Waldmueller, Estela; Fitch, W Tecumseh
The human ability to process hierarchical structures has been a longstanding research topic. However, the nature of the cognitive machinery underlying this faculty remains controversial. Recursion, the ability to embed structures within structures of the same kind, has been proposed as a key component of our ability to parse and generate complex hierarchies. Here, we investigated the cognitive representation of both recursive and iterative processes in the auditory domain. The experiment used a two-alternative forced-choice paradigm: participants were exposed to three-step processes in which pure-tone sequences were built either through recursive or iterative processes, and had to choose the correct completion. Foils were constructed according to generative processes that did not match the previous steps. Both musicians and non-musicians were able to represent recursion in the auditory domain, although musicians performed better. We also observed that general 'musical' aptitudes played a role in both recursion and iteration, although the influence of musical training was somewhat independent of melodic memory. Moreover, unlike iteration, recursion in audition was well correlated with its non-auditory (recursive) analogues in the visual and action sequencing domains. These results suggest that the cognitive machinery involved in establishing recursive representations is domain-general, even though this machinery requires access to information resulting from domain-specific processes. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
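The contrast between recursive and iterative generative processes on tone sequences can be sketched as follows. The concrete construction rules here are assumptions for illustration, not the study's stimuli: a recursive step embeds the whole pattern within every existing tone (yielding a self-similar hierarchy), whereas an iterative step merely appends one more copy of the pattern.

```python
# Illustrative contrast between recursion (embedding a pattern within
# itself) and iteration (appending the same step) on pure-tone
# sequences, represented as frequencies in Hz. The rules are invented
# for illustration and do not reproduce the study's stimuli.

def recursive_step(tones, ratios):
    """Embed the whole ratio pattern within every existing tone."""
    return [t * r for t in tones for r in ratios]

def iterative_step(tones, ratios):
    """Append one transposed copy of the pattern; no nesting occurs."""
    return tones + [tones[-1] * r for r in ratios]

seed = [440.0]                 # a single reference tone
ratios = [0.5, 1.0, 2.0]       # octave-down, unison, octave-up pattern

# Two generative steps of each process, starting from the same seed.
rec = recursive_step(recursive_step(seed, ratios), ratios)
it = iterative_step(iterative_step(seed, ratios), ratios)
print(len(rec), len(it))  # 9 tones (3 x 3, self-similar) vs 7 tones (flat)
```

After two steps the recursive sequence grows multiplicatively and repeats its structure at every level, while the iterative sequence grows additively; a correct "completion" in the task amounts to predicting the next step of whichever process generated the sequence.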
Rimol, Lars M; Specht, Karsten; Weis, Susanne; Savoy, Robert; Hugdahl, Kenneth
The objective of this study was to investigate phonological processing in the brain by using sub-syllabic speech units with rapidly changing frequency spectra. We used isolated stop consonants extracted from natural speech consonant-vowel (CV) syllables, which were digitized and presented through headphones in a functional magnetic resonance imaging (fMRI) paradigm. The stop consonants were contrasted with CV syllables. In order to control for general auditory activation, we used duration- and intensity-matched noise as a third stimulus category. The subjects were seventeen right-handed, healthy male volunteers. BOLD activation responses were acquired on a 1.5-T MR scanner. The auditory stimuli were presented through MR compatible headphones, using an fMRI paradigm with clustered volume acquisition and 12 s repetition time. The consonant vs. noise comparison resulted in unilateral left lateralized activation in the posterior part of the middle temporal gyrus and superior temporal sulcus (MTG/STS). The CV syllable vs. noise comparison resulted in bilateral activation in the same regions, with a leftward asymmetry. The reversed comparisons, i.e., noise vs. speech stimuli, resulted in right hemisphere activation in the supramarginal and superior temporal gyrus, as well as right prefrontal activation. Since the consonant stimuli are unlikely to have activated a semantic-lexical processing system, it seems reasonable to assume that the MTG/STS activation represents phonetic/phonological processing. This may involve the processing of both spectral and temporal features considered important for phonetic encoding.
Vilela, Nadia; Barrozo, Tatiane Faria; Pagan-Neves, Luciana de Oliveira; Sanches, Seisse Gabriela Gandolfi; Wertzner, Haydée Fiszbein; Carvallo, Renata Mota Mamede
To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. Language, audiological and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders aged 7 to 10 years and 11 months who were divided into two different groups according to their (central) auditory processing evaluation results. When a (central) auditory processing disorder was present in association with a speech disorder, the children tended to have lower scores on phonological assessments. A greater severity of speech disorder was related to a greater probability of the child having a (central) auditory processing disorder. The use of a cutoff value for the Percentage of Consonants Correct-Revised index successfully distinguished between children with and without a (central) auditory processing disorder. The severity of speech-sound disorder in children was influenced by the presence of (central) auditory processing disorder. The attempt to identify a cutoff value based on a severity index was successful.
M. Jelicic (Marko)
This dissertation aims to examine the possibility of cognitive processing and memory storage in anaesthesia. It consists of four parts. The first section provides a brief outline of unconscious mental processes in psychological research. Next, a review of the experimental studies of
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple single-peaked spectral tuning (grouped into two clusters), we observe that ∼60% of auditory populations are sensitive to multiple frequency bands. Specifically, we observe sensitivity to multiple frequency bands (1) at exactly one octave distance from each other, (2) at multiple harmonically related frequency intervals, and (3) with no apparent relationship to each other. We propose that beyond the well known cortical tonotopic organization, multipeaked spectral tuning amplifies selected combinations of frequency bands. Such selective amplification might serve to detect behaviorally relevant and complex sound features, aid in segregating auditory scenes, and explain prominent perceptual phenomena such as octave invariance.
Feng, Lei; Wang, Xiaoqin
Harmonicity is a fundamental element of music, speech, and animal vocalizations. How the auditory system extracts harmonic structures embedded in complex sounds and uses them to form a coherent unitary entity is not fully understood. Despite the prevalence of sounds rich in harmonic structures in our everyday hearing environment, it has remained largely unknown what neural mechanisms are used by the primate auditory cortex to extract these biologically important acoustic structures. In this study, we discovered a unique class of harmonic template neurons in the core region of auditory cortex of a highly vocal New World primate, the common marmoset (Callithrix jacchus), across the entire hearing frequency range. Marmosets have a rich vocal repertoire and a similar hearing range to that of humans. Responses of these neurons show nonlinear facilitation to harmonic complex sounds over inharmonic sounds, selectivity for particular harmonic structures beyond two-tone combinations, and sensitivity to harmonic number and spectral regularity. Our findings suggest that the harmonic template neurons in auditory cortex may play an important role in processing sounds with harmonic structures, such as animal vocalizations, human speech, and music.
Daliri, Ayoub; Max, Ludo
Stuttering is associated with atypical structural and functional connectivity in sensorimotor brain areas, in particular premotor, motor, and auditory regions. It remains unknown, however, which specific mechanisms of speech planning and execution are affected by these neurological abnormalities. To investigate pre-movement sensory modulation, we recorded 12 stuttering and 12 nonstuttering adults' auditory evoked potentials in response to probe tones presented prior to speech onset in a delayed-response speaking condition vs. no-speaking control conditions (silent reading; seeing nonlinguistic symbols). Findings indicate that, during speech movement planning, the nonstuttering group showed a statistically significant modulation of auditory processing (reduced N1 amplitude) that was not observed in the stuttering group. Thus, the obtained results provide electrophysiological evidence in support of the hypothesis that stuttering is associated with deficiencies in modulating the cortical auditory system during speech movement planning. This specific sensorimotor integration deficiency may contribute to inefficient feedback monitoring and, consequently, speech dysfluencies. Copyright © 2015 Elsevier Inc. All rights reserved.
Full Text Available Background: The left superior temporal gyrus (STG) has been suggested to play a key role in auditory verbal hallucinations in patients with schizophrenia. Methods: Eleven medicated subjects with schizophrenia and medication-resistant auditory verbal hallucinations and 19 healthy controls underwent perfusion magnetic resonance imaging with arterial spin labeling. Three additional repeated measurements were conducted in the patients. Patients underwent a treatment with transcranial magnetic stimulation (TMS) between the first 2 measurements. The main outcome measure was the pooled cerebral blood flow (CBF), which consisted of the regional CBF measurement in the left STG and the global CBF measurement in the whole brain. Results: Regional CBF in the left STG in patients was significantly higher compared to controls (p < 0.0001) and to the global CBF in patients (p < 0.004) at baseline. Regional CBF in the left STG remained significantly increased compared to the global CBF in patients across time (p < 0.0007), and it remained increased in patients after TMS compared to the baseline CBF in controls (p < 0.0001). After TMS, PANSS (p = 0.003) and PSYRATS (p = 0.01) scores decreased significantly in patients. Conclusions: This study demonstrated tonically increased regional CBF in the left STG in patients with schizophrenia and auditory hallucinations despite a decrease in symptoms after TMS. These findings were consistent with what has previously been termed a trait marker of auditory verbal hallucinations in schizophrenia.
Drake, C; Bertrand, D
Music perception and performance rely heavily on temporal processing: for instance, each event must be situated in time in relation to surrounding events, and events must be grouped together in order to overcome memory constraints. The temporal structure of music varies considerably from one culture to another, and so it has often been supposed that the specific implementation of perceptual and cognitive temporal processes will differ as a function of an individual's cultural exposure and experience. In this paper we examine the alternative position that some temporal processes may be universal, in the sense that they function in a similar manner irrespective of an individual's cultural exposure and experience. We first review rhythm perception and production studies carried out with adult musicians, adult nonmusicians, children, and infants in order to identify temporal processes that appear to function in a similar fashion irrespective of age, acculturation, and musical training. This review leads to the identification of five temporal processes that we submit as candidates for the status of "temporal universals." For each process, we select the simplest and most representative experimental paradigm that has been used to date. This leads to a research proposal for future intercultural studies that could test the universal nature of these processes.
Miller, Tova; Chen, Sufen; Lee, Wei Wei; Sussman, Elyse S
ERPs and behavioral responses were measured to assess how task-irrelevant sounds interact with task processing demands and affect the ability to monitor and track multiple sound events. Participants listened to four-tone sequential frequency patterns, and responded to frequency pattern deviants (reversals of the pattern). Irrelevant tone feature patterns (duration and intensity) and respective pattern deviants were presented together with frequency patterns and frequency pattern deviants in separate conditions. Responses to task-relevant and task-irrelevant feature pattern deviants were used to test processing demands for irrelevant sound input. Behavioral performance was significantly better when there were no distracting feature patterns. Errors primarily occurred in response to the to-be-ignored feature pattern deviants. Task-irrelevant elicitation of ERP components was consistent with the error analysis, indicating a level of processing for the irrelevant features. Task-relevant elicitation of ERP components was consistent with behavioral performance, demonstrating a "cost" of performance when there were two feature patterns presented simultaneously. These results provide evidence that the brain tracked the irrelevant duration and intensity feature patterns, affecting behavioral performance. Overall, our results demonstrate that irrelevant informational streams are processed at a cost, which may be considered a type of multitasking that is an ongoing, automatic processing of task-irrelevant sensory events. © 2015 Society for Psychophysiological Research.
Full Text Available BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord sequences such that perceived differences in the cognitive processing between regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investigated with electroencephalography (EEG). Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs). The ERAN had a latency of around 180 ms after the onset of the music-syntactically irregular chords, and had maximum amplitude values over right anterior electrode sites. CONCLUSIONS/SIGNIFICANCE: Because irregular chords were hardly detectable based on acoustical factors (such as pitch repetition and sensory dissonance), this ERAN effect reflects for the most part cognitive (not sensory) components of regularity-based, music-syntactic processing. Our study represents a methodological advance compared to previous ERP studies investigating the neural processing of music-syntactically irregular chords.
Wong, Eddie; Yang, Bin; Du, Lida; Ho, Wai Hong; Lau, Condon; Ke, Ya; Chan, Ying Shing; Yung, Wing Ho; Wu, Ed X
During hypoxia, the tissues do not obtain adequate oxygen. Chronic hypoxia can lead to many health problems. A relatively common cause of chronic hypoxia is sleep apnea. Sleep apnea is a sleep breathing disorder that affects 3-7% of the population. During sleep, the patient's breathing starts and stops. This can lead to hypertension, attention deficits, and hearing disorders. In this study, we apply an established chronic intermittent hypoxemia (CIH) model of sleep apnea to study its impact on auditory processing. Adult rats were reared for seven days during sleeping hours in a gas chamber with oxygen level cycled between 10% and 21% (normal atmosphere) every 90 s. During awake hours, the subjects were housed in standard conditions with normal atmosphere. CIH treatment significantly reduces arterial oxygen partial pressure and oxygen saturation during sleeping hours (relative to controls). After treatment, subjects underwent functional magnetic resonance imaging (fMRI) with broadband sound stimulation. Responses are observed in major auditory centers in all subjects, including the auditory cortex (AC) and auditory midbrain. fMRI signals from the AC are statistically significantly increased after CIH by 0.13% in the contralateral hemisphere and 0.10% in the ipsilateral hemisphere. In contrast, signals from the lateral lemniscus of the midbrain are significantly reduced by 0.39%. Signals from the neighboring inferior colliculus of the midbrain are relatively unaffected. Chronic hypoxia affects multiple levels of the auditory system and these changes are likely related to hearing disorders associated with sleep apnea. Copyright © 2017 Elsevier Inc. All rights reserved.
van Lieshout, Maria Nicolette Margaretha
This paper is concerned with combined inference for point processes on the real line observed in a broken interval. For such processes, the classic history-based approach cannot be used. Instead, we adapt tools from sequential spatial point processes. For a range of models, the marginal and
Full Text Available Indexing and query processing is an emerging research field in spatio-temporal data. Most real-time applications, such as location-based services, fleet management, traffic prediction, radio-frequency identification, and sensor networks, are built on spatio-temporal indexing and query processing. Such systems take one of several forms: a spatial index with supporting queries, a spatio-temporal indexing method with supporting queries, or an index organized around the temporal dimension, with the spatial dimension treated as second priority. In this paper we survey the various uncertain indexing and query processing techniques. Most existing surveys on spatio-temporal data treat indexing methods and query processing separately; because the two are closely related, the state of the art of both is considered together here. This paper details spatio-temporal data classification, the various types of indexing methods, query processing, application areas, and research directions for spatio-temporal indexing and query processing.
Oxenham, Andrew J.; Dau, Torsten
, 1958–1965 (1985)]. Søren explained this surprising result in terms of the spread of masker excitation and across-channel processing of envelope fluctuations. A later study [S. Buus and C. Pan, J. Acoust. Soc. Am. 96, 1445–1457 (1994)] pioneered the use of the same stimuli in tasks where across...
Murphy-Ruiz, Paulina C; Peñaloza-López, Yolanda R; García-Pedroza, Felipe; Poblano, Adrián
We hypothesized that if the right hemisphere auditory processing abilities can be altered in children with developmental dyslexia (DD), we can detect dysfunction using specific tests. We performed an analytical comparative cross-sectional study. We studied 20 right-handed children with DD and 20 healthy right-handed control subjects (CS). Children in both groups were age, gender, and school-grade matched. Focusing on the right hemisphere's contribution, we utilized tests to measure alterations in central auditory processing (CAP), such as determination of frequency patterns; sound duration; music pitch recognition; and identification of environmental sounds. We compared results among the two groups. Children with DD showed lower performance than CS in all CAP subtests, including those that preferentially engaged the cerebral right hemisphere. Our data suggests a significant contribution of the right hemisphere in alterations of CAP in children with DD. Thus, right hemisphere CAP must be considered for examination and rehabilitation of children with DD.
Singh, Nandini C.; Theunissen, Frédéric E.
The modulation statistics of natural sound ensembles were analyzed by calculating the probability distributions of the amplitude envelope of the sounds and their time-frequency correlations given by the modulation spectra. These modulation spectra were obtained by calculating the two-dimensional Fourier transform of the autocorrelation matrix of the sound stimulus in its spectrographic representation. Since temporal bandwidth and spectral bandwidth are conjugate variables, it is shown that the joint modulation spectrum of sound occupies a restricted space: sounds cannot have rapid temporal and spectral modulations simultaneously. Within this restricted space, it is shown that natural sounds have a characteristic signature. Natural sounds, in general, are low-passed, showing most of their modulation energy for low temporal and spectral modulations. Animal vocalizations and human speech are further characterized by the fact that most of the spectral modulation power is found only for low temporal modulation. Similarly, the distribution of the amplitude envelopes also exhibits characteristic shapes for natural sounds, reflecting the high probability of epochs with no sound, systematic differences across frequencies, and a relatively uniform distribution for the log of the amplitudes for vocalizations. It is postulated that the auditory system as well as engineering applications may exploit these statistical properties to obtain an efficient representation of behaviorally relevant sounds. To test such a hypothesis we show how to create synthetic sounds with first and second order envelope statistics identical to those found in natural sounds.
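The modulation-spectrum computation described above can be sketched via the Wiener-Khinchin equivalence: the 2-D Fourier transform of the spectrogram's autocorrelation equals the squared magnitude of the spectrogram's own 2-D transform. A rough sketch under that equivalence (the window sizes, log compression, and the AM test tone are illustrative assumptions, not the authors' exact parameters):

```python
import numpy as np
from scipy.signal import spectrogram

def modulation_spectrum(x, fs, nperseg=256, noverlap=192):
    """Joint temporal/spectral modulation spectrum of a sound. By the
    Wiener-Khinchin theorem, the 2-D Fourier transform of the spectrogram's
    autocorrelation (as in the abstract above) equals the squared magnitude
    of the 2-D transform of the (here log-amplitude, mean-removed)
    spectrogram, which is what is computed below."""
    f, t, S = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    logS = np.log(S + 1e-12)
    logS -= logS.mean()                      # remove DC before transforming
    M = np.abs(np.fft.fftshift(np.fft.fft2(logS))) ** 2
    # Modulation axes: temporal modulation in Hz, spectral modulation in
    # cycles/Hz, set by the spectrogram's time and frequency sampling.
    wt = np.fft.fftshift(np.fft.fftfreq(logS.shape[1], d=t[1] - t[0]))
    wf = np.fft.fftshift(np.fft.fftfreq(logS.shape[0], d=f[1] - f[0]))
    return wf, wt, M

# A 1 kHz tone amplitude-modulated at 4 Hz should concentrate its
# modulation energy near a 4 Hz temporal modulation.
fs = 8000
tt = np.arange(0, 2.0, 1.0 / fs)
x = (1 + 0.9 * np.cos(2 * np.pi * 4 * tt)) * np.sin(2 * np.pi * 1000 * tt)
wf, wt, M = modulation_spectrum(x, fs)
```

Because temporal and spectral bandwidth are conjugate, a short analysis window gives fine temporal- but coarse spectral-modulation resolution (and vice versa), which is the restricted-space constraint the abstract describes.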
Muller-Gass, Alexandra; Macdonald, Margaret; Schröger, Erich; Sculthorpe, Lauren; Campbell, Kenneth
The P3a is an event-related potential (ERP) component believed to reflect an attention-switch to task-irrelevant stimuli or stimulus information. The present study concerns the automaticity of the processes underlying the auditory P3a. More specifically, we investigated whether the auditory P3a is an attention-independent component, that is, whether it can still be elicited under highly-focused selective attention to a different (visual) channel. Furthermore, we examined whether the auditory P3a can be modulated by the demands of the visual diversion task. Subjects performed a continuous visual tracking task that varied in difficulty, based on the number of objects to-be-tracked. Task-irrelevant auditory stimuli were presented at very rapid and random rates concurrently to the visual task. The auditory sequence included rare increments (+10 dB) and decrements (-20 dB) in intensity relative to the frequently-presented standard stimulus. Importantly, the auditory deviant stimuli elicited a significant P3a during the most difficult visual task, when conditions were optimised to prevent attentional slippage to the auditory channel. This finding suggests that the elicitation of the auditory P3a does not require available central capacity, and confirms the automatic nature of the processes underlying this ERP component. Moreover, the difficulty of the visual task did not modulate either the mismatch negativity (MMN) or the P3a but did have an effect on a late (350-400 ms) negativity, an ERP deflection perhaps related to a subsequent evaluation of the auditory change. Together, these results imply that the auditory P3a could reflect a strongly-automatic process, one that does not require and is not modulated by attention.
Bellis, Teri James; Billiet, Cassie; Ross, Jody
Cacace and McFarland (2005) have suggested that the addition of cross-modal analogs will improve the diagnostic specificity of (C)APD (central auditory processing disorder) by ensuring that deficits observed are due to the auditory nature of the stimulus and not to supra-modal or other confounds. Others (e.g., Musiek et al, 2005) have expressed concern about the use of such analogs in diagnosing (C)APD given the uncertainty as to the degree to which cross-modal measures truly are analogous, and emphasize the nonmodularity of the CANS (central auditory nervous system) and its function, which precludes modality specificity of (C)APD. To date, no studies have examined the clinical utility of cross-modal (e.g., visual) analogs of central auditory tests in the differential diagnosis of (C)APD. This study investigated performance of children diagnosed with (C)APD, children diagnosed with ADHD (attention deficit hyperactivity disorder), and typically developing children on three diagnostic tests of central auditory function and their corresponding visual analogs. The study sought to determine whether deficits observed in the (C)APD group were restricted to the auditory modality and the degree to which the addition of visual analogs aids in the ability to differentiate among groups. An experimental repeated measures design was employed. Participants consisted of three groups of right-handed children (normal control, n=10; ADHD, n=10; (C)APD, n=7) with normal and symmetrical hearing sensitivity, normal or corrected-to-normal visual acuity, and no family or personal history of disorders unrelated to their primary diagnosis. Participants in Groups 2 and 3 met current diagnostic criteria for ADHD and (C)APD. Visual analogs of three tests in common clinical use for the diagnosis of (C)APD were used (Dichotic Digits [Musiek, 1983]; Frequency Patterns [Pinheiro and Ptacek, 1971]; and Duration Patterns [Pinheiro and Musiek, 1985]). Participants underwent two 1 hr test sessions
M. Mc Laughlin (Myles); J.N. Chabwine; M. van der Heijden (Marcel); P.X. Joris (Philip)
To localize low-frequency sounds, humans rely on an interaural comparison of the temporally encoded sound waveform after peripheral filtering. This process can be compared with cross-correlation. For a broadband stimulus, after filtering, the correlation function has a damped oscillatory
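The cross-correlation analogy described above can be illustrated with a broadband signal and a pure delay between the ears. This is a sketch only: a full binaural model would first apply the peripheral band-pass filtering the abstract mentions, and the 20-sample delay is a hypothetical interaural time difference (ITD), not a value from the study.

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference as the lag (in seconds) of
    the peak of the cross-correlation between the two ear signals.
    Positive values mean the right-ear signal lags the left, i.e. the
    source is on the listener's left. No peripheral filtering is applied."""
    xcorr = np.correlate(right, left, mode="full")
    lags = np.arange(-len(left) + 1, len(right))
    return lags[np.argmax(xcorr)] / fs

# Toy check: broadband noise with a pure 20-sample delay (~0.45 ms at
# 44.1 kHz) applied to the right ear.
rng = np.random.default_rng(1)
fs = 44100
sig = rng.standard_normal(2205)
delay = 20
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
itd = estimate_itd(left, right, fs)
```

For a broadband signal the correlation peak is sharp and unambiguous; after narrow-band peripheral filtering the correlation function becomes the damped oscillation the abstract describes, so the true ITD must be distinguished from neighboring peaks one period away.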
Maria Luisa Lorusso; Chiara Cantiani; Massimo Molteni
The nature of Rapid Auditory Processing (RAP) deficits in dyslexia remains debated, together with the specificity of the problem to certain types of stimuli and/or restricted subgroups of individuals. Following the hypothesis that the heterogeneity of the dyslexic population may have led to contrasting results, the aim of the study was to define the effect of age, dyslexia subtype and comorbidity on the discrimination and reproduction of non-verbal tone sequences. Participants were 46 childre...
White-Schwoch, Travis; Woodruff Carr, Kali; Thompson, Elaine C; Anderson, Samira; Nicol, Trent; Bradlow, Ann R; Zecker, Steven G; Kraus, Nina
Learning to read is a fundamental developmental milestone, and achieving reading competency has lifelong consequences. Although literacy development proceeds smoothly for many children, a subset struggle with this learning process, creating a need to identify reliable biomarkers of a child's future literacy that could facilitate early diagnosis and access to crucial early interventions. Neural markers of reading skills have been identified in school-aged children and adults; many pertain to the precision of information processing in noise, but it is unknown whether these markers are present in pre-reading children. Here, in a series of experiments in 112 children (ages 3-14 y), we show brain-behavior relationships between the integrity of the neural coding of speech in noise and phonology. We harness these findings into a predictive model of preliteracy, revealing that a 30-min neurophysiological assessment predicts performance on multiple pre-reading tests and, one year later, predicts preschoolers' performance across multiple domains of emergent literacy. This same neural coding model predicts literacy and diagnosis of a learning disability in school-aged children. These findings offer new insight into the biological constraints on preliteracy during early childhood, suggesting that neural processing of consonants in noise is fundamental for language and reading development. Pragmatically, these findings open doors to early identification of children at risk for language learning problems; this early identification may in turn facilitate access to early interventions that could prevent a life spent struggling to read.
Lewis, James W.; Talkington, William J.; Tallaksen, Katherine C.; Frum, Chris A.
Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and “auditory objects” can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more “object-like,” independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds—a quantitative measure of change in entropy of the acoustic signals over time—and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages
Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influence […] with an intelligibility-weighted “efficiency factor”, which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required […] that are binaurally linked can utilize the signals at both ears and preserve the ILDs through co-ordinated compression. Hearing-impaired listeners received a small, but not significant, advantage from linked compared to independent compression. It was concluded that, for speech intelligibility, the exact ILD […]
Chang, Yi-Shin; Gratiot, Mathilde; Owen, Julia P; Brandes-Aitken, Anne; Desai, Shivani S; Hill, Susanna S; Arnett, Anne B; Harris, Julia; Marco, Elysa J; Mukherjee, Pratik
Sensory processing disorders (SPDs) affect up to 16% of school-aged children, and contribute to cognitive and behavioral deficits impacting affected individuals and their families. While sensory processing differences are now widely recognized in children with autism, children with sensory-based dysfunction who do not meet autism criteria based on social communication deficits remain virtually unstudied. In a previous pilot diffusion tensor imaging (DTI) study, we demonstrated that boys with SPD have altered white matter microstructure primarily affecting the posterior cerebral tracts, which subserve sensory processing and integration. This disrupted microstructural integrity, measured as reduced white matter fractional anisotropy (FA), correlated with parent report measures of atypical sensory behavior. In this present study, we investigate white matter microstructure as it relates to tactile and auditory function in depth with a larger, mixed-gender cohort of children 8-12 years of age. We continue to find robust alterations of posterior white matter microstructure in children with SPD relative to typically developing children (TDC), along with more spatially distributed alterations. We find strong correlations of FA with both parent report and direct measures of tactile and auditory processing across children, with the direct assessment measures of tactile and auditory processing showing a stronger and more continuous mapping to the underlying white matter integrity than the corresponding parent report measures. Based on these findings of microstructure as a neural correlate of sensory processing ability, diffusion MRI merits further investigation as a tool to find biomarkers for diagnosis, prognosis and treatment response in children with SPD. To our knowledge, this work is the first to demonstrate associations of directly measured tactile and non-linguistic auditory function with white matter microstructural integrity - not just in children with SPD, but also
Bezgin, Gleb; Rybacki, Konrad; van Opstal, A John; Bakker, Rembrandt; Shen, Kelly; Vakorin, Vasily A; McIntosh, Anthony R; Kötter, Rolf
Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced from studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including auditory, are organised in a similar way on the cortical level. Recent studies offer rich qualitative evidence for the dual stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to more ubiquitously-explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception. Copyright © 2014 Elsevier Inc. All rights reserved.
Zeamer, Charlotte; Fox Tree, Jean E
Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for verbatim language. But impaired recall effects were also found with a variety of nonlinguistic noises, suggesting that neither type of noise nor amplitude and duration of noise are adequate predictors of distraction. Rather, distraction occurred when it was difficult for a listener to process sounds and assemble coherent, differentiable streams of input, one task-salient and attended and the other task-irrelevant and inhibited. In 3 experiments, the effects of auditory distractors during a short spoken lecture were tested. Participants recalled details of the lecture and also reported their opinions of the sound quality. Our findings suggest that distractors that are difficult to designate as either task related or environment related (and therefore irrelevant) draw cognitive processing resources away from a target speech stream during a listening task, impairing recall. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Efficient auditory processing is hypothesized to support language and literacy development. However, behavioral tasks used to assess this hypothesis need to be robust to non-auditory-specific individual differences. This study compared frequency discrimination abilities in a heterogeneous sample of adults using two different psychoacoustic task designs, referred to here as the 2I_6A_X and 3I_2AFC designs. The roles of individual differences in nonverbal IQ (NVIQ), socioeconomic status (SES), and musical experience in predicting frequency discrimination thresholds on each task were assessed using multiple regression analyses. The 2I_6A_X task was more cognitively demanding and hence more susceptible to differences specifically in SES and musical training. Performance on this task did not, however, relate to nonword repetition ability (a measure of language learning capacity). The 3I_2AFC task, by contrast, was only susceptible to musical training. Moreover, thresholds measured using it predicted some variance in nonword repetition performance. This design thus seems suitable for use in studies addressing questions regarding the role of auditory processing in supporting language and literacy development.
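Adaptive forced-choice threshold procedures of the kind named above are usually run as staircases. The sketch below is a generic 2-down/1-up staircase for a 3-interval 2AFC frequency discrimination task, run against a simulated listener; the listener model, step factors, and starting difference are illustrative assumptions, not the study's actual procedure.

```python
import random

def simulated_listener(delta_hz, true_jnd=5.0):
    """Hypothetical listener for a 3-interval 2AFC task (chance = 1/3):
    more likely to answer correctly as delta_hz grows past an assumed JND."""
    p_correct = 1 / 3 + (2 / 3) * (1 - 2 ** (-((delta_hz / true_jnd) ** 2)))
    return random.random() < p_correct

def staircase_threshold(start_delta=50.0, n_reversals=8, seed=1):
    """Generic 2-down/1-up staircase: the frequency difference shrinks
    after two consecutive correct trials and grows after an error, so it
    converges near the 70.7%-correct point. Returns the mean of the late
    reversal values as the threshold estimate."""
    random.seed(seed)
    delta, correct_run, direction = start_delta, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(delta):
            correct_run += 1
            if correct_run == 2:      # two in a row: make the task harder
                correct_run = 0
                if direction == +1:
                    reversals.append(delta)
                direction = -1
                delta *= 0.75
        else:                         # one error: make the task easier
            correct_run = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta *= 1.5
    return sum(reversals[2:]) / len(reversals[2:])

jnd_estimate = staircase_threshold()
```

With the assumed listener (JND of 5 Hz), the estimate settles in the vicinity of the simulated JND; real procedures add catch trials and fixed trial counts on top of this skeleton.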
Xue, Jin; Yang, Jie; Zhao, Qian
The conceptual projection of time onto the domain of space constitutes one of the most challenging issues for embodied theories of cognition. In Chinese, spatial order (e.g., /da shu qian/, in front of a tree) shares the same terms with temporal sequence (e.g., /san yue qian/, before March). In comparison, native English speakers use different sets of prepositions to describe spatial and temporal relationships, i.e., "before" to express temporal sequencing and "in front of" to express spatial order. The linguistic variation in these specific lexical encodings indicates that some flexibility might be available in how space-time parallelisms are formulated across different languages. In the present study, ERP (event-related potential) data were collected while Chinese-English bilinguals processed temporal ordering and spatial sequencing in both their first language (L1), Chinese (Experiment 1), and their second language (L2), English (Experiment 2). It was found that, despite the different lexical encodings, early sensorimotor simulation plays a role in temporal sequencing processing in both L1 Chinese and L2 English. The findings support the embodied theory that conceptual knowledge is grounded in sensory-motor systems (Gallese and Lakoff, Cogn Neuropsychol 22:455-479, 2005). Additionally, in both languages, neural representations during the comprehension of temporal sequencing and spatial ordering are different. The space-time relationship is asymmetric, in that spatial schemas could be imported into temporal sequence processing but not vice versa. These findings support the weak view of the Metaphoric Mapping Theory.
Nadia Vilela; Tatiane Faria Barrozo; Luciana de Oliveira Pagan-Neves; Seisse Gabriela Gandolfi Sanches; Haydée Fiszbein Wertzner; Renata Mota Mamede Carvallo
OBJECTIVE: To identify a cutoff value based on the Percentage of Consonants Correct-Revised index that could indicate the likelihood of a child with a speech-sound disorder also having a (central) auditory processing disorder. METHODS: Language, audiological, and (central) auditory processing evaluations were administered. The participants were 27 subjects with speech-sound disorders, aged 7 years to 10 years and 11 months, who were divided into two different groups according to their (central) audi...
Moerel, Michelle; De Martino, Federico; Formisano, Elia
Auditory cortical processing of complex meaningful sounds entails the transformation of sensory (tonotopic) representations of incoming acoustic waveforms into higher-level sound representations (e.g., their category). However, the precise neural mechanisms enabling such transformations remain largely unknown. In the present study, we use functional magnetic resonance imaging (fMRI) and natural sounds stimulation to examine these two levels of sound representation (and their relation) in the human auditory cortex. In a first experiment, we derive cortical maps of frequency preference (tonotopy) and selectivity (tuning width) by mathematical modeling of fMRI responses to natural sounds. The tuning width maps highlight a region of narrow tuning that follows the main axis of Heschl's gyrus and is flanked by regions of broader tuning. The narrowly tuned portion on Heschl's gyrus contains two mirror-symmetric frequency gradients, presumably defining two distinct primary auditory areas. In addition, our analysis indicates that spectral preference and selectivity (and their topographical organization) extend well beyond the primary regions and also cover higher-order and category-selective auditory regions. In particular, regions with preferential responses to human voice and speech occupy the low-frequency portions of the tonotopic map. We confirm this observation in a second experiment, where we find that speech/voice selective regions exhibit a response bias toward the low frequencies characteristic of human voice and speech, even when responding to simple tones. We propose that this frequency bias reflects the selective amplification of relevant and category-characteristic spectral bands, a useful processing step for transforming a sensory (tonotopic) sound image into higher level neural representations.
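The tonotopic mapping step described above, assigning each cortical location a preferred frequency from its responses to sounds, can be sketched on synthetic data. The tuning widths, noise level, and frequency grid below are illustrative assumptions, not the authors' actual fMRI response model.

```python
import numpy as np

rng = np.random.default_rng(0)
freqs = np.logspace(np.log10(200), np.log10(8000), 20)  # tone set, Hz
log_f = np.log2(freqs)

# Synthetic "voxels": Gaussian tuning curves on a log-frequency axis
# plus measurement noise (best frequencies, tuning width, and noise
# level are all assumptions for illustration).
true_bf = np.array([500.0, 1500.0, 4000.0])
tuning_width = 0.8  # octaves
responses = np.exp(-((log_f[None, :] - np.log2(true_bf)[:, None]) ** 2)
                   / (2 * tuning_width ** 2))
responses += 0.05 * rng.standard_normal(responses.shape)

# Best frequency per voxel: either the peak response over the tone set,
# or a response-weighted centroid on the log-frequency axis (smoother).
bf_argmax = freqs[np.argmax(responses, axis=1)]
w = np.clip(responses, 0, None)
bf_centroid = 2.0 ** ((w * log_f).sum(axis=1) / w.sum(axis=1))
```

Plotting `bf_argmax` (or `bf_centroid`) across voxel positions is what yields a tonotopic map; tuning-width maps follow analogously from the fitted curve widths.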
(Central) auditory processing disorder ((C)APD) is being diagnosed ever more frequently in children, though there is no agreement on diagnostic markers (no gold standard for (C)APD diagnosis). In Germany, the diagnosis of (C)APD in the paediatric population is based on test measurements including phonological processing rather than on a valid theoretical model to guide clinicians. The evaluation of the clinical significance of central auditory functions, as well as the number of behavioural tests that should be performed, is left to the diagnostician. The present study retrospectively reviewed test scores from a health care research database containing 167 children suspected of having a (C)APD. A total of 51 children participated in the study: 39 children identified with monosymptomatic (C)APD (on the basis of commonly used (C)APD tests, with scores ≥ 2 SDs below the mean on at least 2 tests) and 12 children who did not receive a (C)APD diagnosis (non-(C)APD). A stepwise discriminant analysis was performed with the five phonological measures of the psychological (C)APD diagnostics in the German language: nonword repetition (Mottier Test); the subtest "Recall of Sentences" of the Heidelberger Sprachentwicklungstest for language development; "Digit Recall" of the German version of the K-ABC; and "Auditory Closure" and "Sound Blending", subtests of the German version of the Illinois Test of Psycholinguistic Abilities. Next, the discriminant function of the model was examined. Performance in the normed tests (K-ABC Digit Recall: T-score 44.2, p = 0.0029; Sentence Recall: T-score 42.4, p = 0.0002; Auditory Closure: T-score 49.9, p = 0.0130; Sound Blending: T-score 47.2, p = 0.0121) and in nonword repetition (Mottier: 15.9 raw scores, p diagnostic instruments.
Rubinstein, Jay T.; Shea-Brown, Eric
Model-based studies of responses of auditory nerve fibers to electrical stimulation can provide insight into the functioning of cochlear implants. Ideally, these studies can identify limitations in sound processing strategies and lead to improved methods for providing sound information to cochlear implant users. To accomplish this, models must accurately describe spiking activity while avoiding excessive complexity that would preclude large-scale simulations of populations of auditory nerve fibers and obscure insight into the mechanisms that influence neural encoding of sound information. In this spirit, we develop a point process model of individual auditory nerve fibers that provides a compact and accurate description of neural responses to electric stimulation. Inspired by the framework of generalized linear models, the proposed model consists of a cascade of linear and nonlinear stages. We show how each of these stages can be associated with biophysical mechanisms and related to models of neuronal dynamics. Moreover, we derive a semianalytical procedure that uniquely determines each parameter in the model on the basis of fundamental statistics from recordings of single fiber responses to electric stimulation, including threshold, relative spread, jitter, and chronaxie. The model also accounts for refractory and summation effects that influence the responses of auditory nerve fibers to high pulse rate stimulation. Throughout, we compare model predictions to published physiological data of response to high and low pulse rate stimulation. We find that the model, although constructed to fit data from single and paired pulse experiments, can accurately predict responses to unmodulated and modulated pulse train stimuli. We close by performing an ideal observer analysis of simulated spike trains in response to sinusoidally amplitude modulated stimuli and find that carrier pulse rate does not affect modulation detection thresholds. PMID:22673331
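The cascade of linear and nonlinear stages described above can be caricatured as a generic linear-nonlinear-Poisson model with an absolute refractory period. This is a sketch under assumed parameters (time constant, gain, threshold, pulse amplitude), not the fitted single-fiber model from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def lnp_spikes(pulse_train, dt=1e-4, tau=2e-3, gain=300.0, theta=1.0,
               refractory=1e-3):
    """Linear-nonlinear-Poisson sketch with an absolute refractory period.
    pulse_train: stimulus amplitude per time bin (arbitrary units).
    Returns a 0/1 spike train; all parameters are illustrative."""
    # Linear stage: leaky integration via an exponential kernel
    t = np.arange(int(5 * tau / dt)) * dt
    kernel = np.exp(-t / tau)
    drive = np.convolve(pulse_train, kernel)[: len(pulse_train)] * dt / tau
    # Nonlinear stage: sigmoid maps drive to an instantaneous rate (sp/s)
    rate = gain / (1.0 + np.exp(-10.0 * (drive - theta)))
    # Stochastic stage: Bernoulli approximation of a Poisson process,
    # with spiking suppressed during the refractory period
    spikes = np.zeros(len(pulse_train))
    last_spike = -np.inf
    for i, r in enumerate(rate):
        if i * dt - last_spike < refractory:
            continue
        if rng.random() < r * dt:
            spikes[i] = 1.0
            last_spike = i * dt
    return spikes

# 100 ms of a 1 kHz pulse train (one pulse per ms; amplitude assumed)
stim = np.zeros(1000)
stim[::10] = 25.0
spk = lnp_spikes(stim)
```

The semianalytical fitting the abstract mentions would pin each stage's parameters to measured statistics (threshold, relative spread, jitter, chronaxie); here they are simply chosen by hand.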
Merchant, Hugo; Zarco, Wilbert; Bartolo, Ramon; Prado, Luis
In the present study we determined the performance interrelations of ten different tasks that involved the processing of temporal intervals in the subsecond range, using multidimensional analyses. Twenty human subjects executed the following explicit timing tasks: interval categorization and discrimination (perceptual tasks), and single and multiple interval tapping (production tasks). In addition, the subjects performed a continuous circle-drawing task that has been considered an implicit timing paradigm, since time is an emergent property of the produced spatial trajectory. All tasks could be also classified as single or multiple interval paradigms. Auditory or visual markers were used to define the intervals. Performance variability, a measure that reflects the temporal and non-temporal processes for each task, was used to construct a dissimilarity matrix that quantifies the distances between pairs of tasks. Hierarchical clustering and multidimensional scaling were carried out on the dissimilarity matrix, and the results showed a prominent segregation of explicit and implicit timing tasks, and a clear grouping between single and multiple interval paradigms. In contrast, other variables such as the marker modality were not as crucial to explain the performance between tasks. Thus, using this methodology we revealed a probable functional arrangement of neural systems engaged during different timing behaviors.
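The analysis pipeline described here, a dissimilarity matrix followed by hierarchical clustering and multidimensional scaling, can be sketched with standard tools. The synthetic "performance variability" profiles below stand in for the real task data and are purely illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)

# Synthetic variability profiles for 10 hypothetical timing tasks
# (20 subjects each): tasks 0-4 and 5-9 form two underlying groups,
# loosely mimicking an explicit vs implicit timing split.
profiles = np.vstack([rng.normal(50.0, 3.0, size=(5, 20)),
                      rng.normal(80.0, 3.0, size=(5, 20))])

# Dissimilarity matrix: Euclidean distance between task profiles
diff = profiles[:, None, :] - profiles[None, :, :]
dissim = np.sqrt((diff ** 2).sum(axis=-1))

# Hierarchical clustering on the condensed form of the matrix
Z = linkage(squareform(dissim, checks=False), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

# Classical MDS: double-center the squared distances, eigendecompose,
# and keep the top two dimensions as a 2-D map of the tasks
J = np.eye(10) - np.ones((10, 10)) / 10.0
B = -0.5 * J @ (dissim ** 2) @ J
evals, evecs = np.linalg.eigh(B)
coords = evecs[:, -2:] * np.sqrt(np.clip(evals[-2:], 0.0, None))
```

On data with this structure, the dendrogram in `Z` and the 2-D `coords` both recover the two task groups, which is the kind of segregation the study reports between explicit and implicit timing paradigms.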
PURPOSE: To clarify the relationship between learning difficulties and auditory processing disorder in second-grade students. METHODS: Based on the application of reading tests, the students of a second-grade class of an elementary school were classified into two groups according to their reading fluency: a group with better fluency (group A) and another with less fluency (group B). A between-group analysis of the auditory processing tests was carried out. RESULTS: All participants presented learning difficulties and auditory processing disorder in almost every primary subprofile. It was observed that the verbal sequential memory of the less fluent group (group B) was significantly better (p=0.030). CONCLUSION: The diagnosis of primary auditory processing disorder is questioned, and the importance of stimulating verbal sequential memory for learning to read and write is emphasized. In view of these observations, further research should be conducted to study this variable and its relationship with temporal auditory processing.
Teschner, Magnus J; Seybold, Bryan A; Malone, Brian J; Hüning, Jana; Schreiner, Christoph E
The neural mechanisms that support the robust processing of acoustic signals in the presence of background noise in the auditory system remain largely unresolved. Psychophysical experiments have shown that signal detection is influenced by the signal-to-noise ratio (SNR) and the overall stimulus level, but this relationship has not been fully characterized. We evaluated the neural representation of frequency in rat primary auditory cortex by constructing tonal frequency response areas (FRAs) in primary auditory cortex for different SNRs, tone levels, and noise levels. We show that response strength and selectivity for frequency and sound level depend on interactions between SNRs and tone levels. At low SNRs, jointly increasing the tone and noise levels reduced firing rates and narrowed FRA bandwidths; at higher SNRs, however, increasing the tone and noise levels increased firing rates and expanded bandwidths, as is usually seen for FRAs obtained without background noise. These changes in frequency and intensity tuning decreased tone level and tone frequency discriminability at low SNRs. By contrast, neither response onset latencies nor noise-driven steady-state firing rates meaningfully interacted with SNRs or overall sound levels. Speech detection performance in humans was also shown to depend on the interaction between overall sound level and SNR. Together, these results indicate that signal processing difficulties imposed by high noise levels are quite general and suggest that the neurophysiological changes we see for simple sounds generalize to more complex stimuli. Effective processing of sounds in background noise is an important feature of the mammalian auditory system and a necessary feature for successful hearing in many listening conditions. Even mild hearing loss strongly affects this ability in humans, seriously degrading the ability to communicate. The mechanisms involved in achieving high performance in background noise are not well understood. We
Soyman, Efe; Vicario, David S
Sensory and motor brain structures work in collaboration during perception. To evaluate their respective contributions, the present study recorded neural responses to auditory stimulation at multiple sites simultaneously in both the higher-order auditory area NCM and the premotor area HVC of the songbird brain in awake zebra finches (Taeniopygia guttata). Bird's own song (BOS) and various conspecific songs (CON) were presented in both blocked and shuffled sequences. Neural responses showed plasticity in the form of stimulus-specific adaptation, with markedly different dynamics between the two structures. In NCM, the response decrease with repetition of each stimulus was gradual and long-lasting and did not differ between the stimuli or the stimulus presentation sequences. In contrast, HVC responses to CON stimuli decreased much more rapidly in the blocked than in the shuffled sequence. Furthermore, this decrease was more transient in HVC than in NCM, as shown by differential dynamics in the shuffled sequence. Responses to BOS in HVC decreased more gradually than to CON stimuli. The quality of neural representations, computed as the mutual information between stimuli and neural activity, was higher in NCM than in HVC. Conversely, internal functional correlations, estimated as the coherence between recording sites, were greater in HVC than in NCM. The cross-coherence between the two structures was weak and limited to low frequencies. These findings suggest that auditory communication signals are processed according to very different but complementary principles in NCM and HVC, a contrast that may inform study of the auditory and motor pathways for human speech processing.NEW & NOTEWORTHY Neural responses to auditory stimulation in sensory area NCM and premotor area HVC of the songbird forebrain show plasticity in the form of stimulus-specific adaptation with markedly different dynamics. These two structures also differ in stimulus representations and internal
Ghosh, Prasanta Kumar; Goldstein, Louis M; Narayanan, Shrikanth S
Understanding how the human speech production system is related to the human auditory system has been a perennial subject of inquiry. To investigate the production-perception link, in this paper, a computational analysis has been performed using the articulatory movement data obtained during speech production with concurrently recorded acoustic speech signals from multiple subjects in three different languages: English, Cantonese, and Georgian. The form of articulatory gestures during speech production varies across languages, and this variation is considered to be reflected in the articulatory position and kinematics. The auditory processing of the acoustic speech signal is modeled by a parametric representation of the cochlear filterbank which allows for realizing various candidate filterbank structures by changing the parameter value. Using mathematical communication theory, it is found that the uncertainty about the articulatory gestures in each language is maximally reduced when the acoustic speech signal is represented using the output of a filterbank similar to the empirically established cochlear filterbank in the human auditory system. Possible interpretations of this finding are discussed. © 2011 Acoustical Society of America
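The information-theoretic step, measuring how much an acoustic representation reduces uncertainty about articulation, can be sketched with a plug-in (histogram) mutual-information estimate on toy data. The variables below are stand-ins, not the study's articulatory recordings or cochlear filterbank outputs.

```python
import numpy as np

rng = np.random.default_rng(3)

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy stand-ins for an articulatory variable and two acoustic features:
# one feature tracks articulation, the other is independent noise.
artic = rng.standard_normal(5000)
feat_good = artic + 0.3 * rng.standard_normal(5000)
feat_bad = rng.standard_normal(5000)

mi_good = mutual_information(artic, feat_good)
mi_bad = mutual_information(artic, feat_bad)
```

Comparing `mi_good` against `mi_bad` is the logic of the paper's filterbank comparison: a candidate representation is better to the extent that it carries more information about the articulatory gestures.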
Habibi, Assal; Cahn, B Rael; Damasio, Antonio; Damasio, Hanna
Several studies comparing adult musicians and non-musicians have shown that music training is associated with brain differences. It is unknown, however, whether these differences result from lengthy musical training, from pre-existing biological traits, or from social factors favoring musicality. As part of an ongoing 5-year longitudinal study, we investigated the effects of a music training program on the auditory development of children, over the course of two years, beginning at age 6-7. The training was group-based and inspired by El-Sistema. We compared the children in the music group with two comparison groups of children of the same socio-economic background, one involved in sports training, another not involved in any systematic training. Prior to participating, children who began training in music did not differ from those in the comparison groups in any of the assessed measures. After two years, we now observe that children in the music group, but not in the two comparison groups, show an enhanced ability to detect changes in tonal environment and an accelerated maturity of auditory processing as measured by cortical auditory evoked potentials to musical notes. Our results suggest that music training may result in stimulus specific brain changes in school aged children. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.
Lagacé, Josée; Jutras, Benoît; Gagné, Jean-Pierre
A hallmark listening problem of individuals presenting with auditory processing disorder (APD) is their poor recognition of speech in noise. The underlying perceptual problem of the listening difficulties in unfavorable listening conditions is unknown. The objective of this article was to demonstrate theoretically how to determine whether the speech recognition problems are related to an auditory dysfunction, a language-based dysfunction, or a combination of both. Tests such as the Speech Perception in Noise (SPIN) test allow the exploration of the auditory and language-based functions involved in speech perception in noise, which is not possible with most other speech-in-noise tests. Psychometric functions illustrating results from hypothetical groups of individuals with APD on the SPIN test are presented. This approach makes it possible to postulate about the origin of the speech perception problems in noise. APD is a complex and heterogeneous disorder for which the underlying deficit is currently unclear. Because of their design, SPIN-like tests can potentially be used to identify the nature of the deficits underlying problems with speech perception in noise for this population. A better understanding of the difficulties with speech perception in noise experienced by many listeners with APD should lead to more efficient intervention programs.
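Psychometric functions of the kind discussed here are commonly summarized by fitting a sigmoid to proportion-correct scores across signal-to-noise ratios. The sketch below fits a logistic to hypothetical SPIN-style group data; the numbers are invented for illustration, not results from the article.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    """Psychometric function: proportion of keywords correct vs SNR (dB)."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

# Hypothetical group data: proportion of SPIN keywords correct at
# several signal-to-noise ratios (values invented for illustration).
snr_db = np.array([-8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
p_correct = np.array([0.10, 0.25, 0.55, 0.80, 0.93, 0.98])

(mid, slope), _ = curve_fit(logistic, snr_db, p_correct, p0=[0.0, 0.5])
srt_50 = mid  # SNR at 50% correct, a "speech reception threshold"
```

Comparing fitted midpoints and slopes between high-predictability and low-predictability SPIN sentences, or between groups, is how such curves can separate auditory from language-based contributions.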
Kesim, Yesim F; Uzun, Gunes Altiokka; Yucesan, Emrah; Tuncer, Feyza N; Ozdemir, Ozkan; Bebek, Nerses; Ozbek, Ugur; Iseri, Sibel A Ugur; Baykan, Betul
Autosomal dominant lateral temporal lobe epilepsy (ADLTE) is an epileptic syndrome, inherited in an autosomal dominant manner, characterized by focal seizures with auditory or aphasic symptoms. The same phenotype is also observed in a sporadic form of lateral temporal lobe epilepsy (LTLE), namely idiopathic partial epilepsy with auditory features (IPEAF). Heterozygous mutations in LGI1 account for up to 50% of ADLTE families and are only rarely observed in IPEAF cases. In this study, we analysed a cohort of 26 individuals with LTLE diagnosed according to the following criteria: focal epilepsy with auditory aura and absence of cerebral lesions on brain MRI. All patients underwent clinical, neuroradiological and electroencephalography examinations and were afterwards screened for mutations in the LGI1 gene. The single LGI1 mutation identified in this study is a novel missense variant (NM_005097.2: c.1013T>C; p.Phe338Ser) observed de novo in a sporadic patient. This is the first study involving clinical analysis of an LTLE cohort from Turkey and the genetic contribution of LGI1 to the ADLTE phenotype. Identification of rare LGI1 gene mutations in sporadic cases supports a diagnosis of ADLTE and draws attention to potential familial clustering of ADLTE in subsequent generations, which is especially important for genetic counselling. Copyright © 2015 Elsevier B.V. All rights reserved.
Au, Agnes; Lovegrove, Bill
In this study, we examined whether good auditory and good visual temporal processors were better than their poor counterparts on certain reading measures. Various visual and auditory temporal tasks were administered to 105 undergraduates. They read some phonologically regular pseudowords and irregular words that were presented sequentially in the same ("word" condition) and in different ("line" condition) locations. Results indicated that auditory temporal acuity was more relevant to reading, whereas visual temporal acuity was more relevant to spelling. Good auditory temporal processors did not have the advantage in processing pseudowords, even though pseudoword reading correlated significantly with auditory temporal processing. These results suggested that some higher cognitive or phonological processes mediated the relationship between auditory temporal processing and pseudoword reading. Good visual temporal processors did not have the advantage in processing irregular words. They also did not process the line condition more accurately than the word condition. The discrepancy might be attributed to the use of normal adults and the unnatural reading situation that did not fully capture the function of the visual temporal processes. The distributions of auditory and visual temporal processing abilities were co-occurring to some degree, but they maintained considerable independence. There was also a lack of a relationship between the type and severity of reading deficits and the type and number of temporal deficits.
Cardin, Jessica A; Raksin, Jonathan N; Schmidt, Marc F
Sensorimotor integration in the avian song system is crucial for both learning and maintenance of song, a vocal motor behavior. Although a number of song system areas demonstrate both sensory and motor characteristics, their exact roles in auditory and premotor processing are unclear. In particular, it is unknown whether input from the forebrain nucleus interface of the nidopallium (NIf), which exhibits both sensory and premotor activity, is necessary for both auditory and premotor processing in its target, HVC. Here we show that bilateral NIf lesions result in long-term loss of HVC auditory activity but do not impair song production. NIf is thus a major source of auditory input to HVC, but an intact NIf is not necessary for motor output in adult zebra finches.
Liebenthal, Einat; Möttönen, Riikka
Mounting evidence indicates a role in the perceptual decoding of speech for the dorsal auditory stream connecting temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research. Copyright © 2017 Elsevier Inc. All rights reserved.
Mattiuzzi, M.; Verbesselt, J.; Klisch, A.
The package's functionality focuses on the download and processing of multi-temporal datasets from the MODIS sensors. All standard MODIS grid data can be accessed and processed by the package routines. The package is still in alpha development, and not all functionalities are available yet.
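Standard MODIS grid products are distributed as tiles on the sinusoidal projection, and knowing which tile covers a location is the first step of any download workflow. As a rough illustration of that tiling scheme (the package itself is written in R; the Python sketch below is ours, and the cosine-scaling formula is a simplified approximation, not the package's own tile lookup):

```python
import math

def modis_tile(lat, lon):
    """Approximate the MODIS sinusoidal grid tile (h, v) containing a
    geographic coordinate.  The grid has 36 horizontal and 18 vertical
    tiles of ~10 projected degrees each; longitude is scaled by cos(lat)
    to mimic the sinusoidal projection.  Simplified illustration only."""
    x = lon * math.cos(math.radians(lat))      # sinusoidal x, in "degrees"
    h = int(math.floor((x + 180.0) / 10.0))    # horizontal tile index 0..35
    v = int(math.floor((90.0 - lat) / 10.0))   # vertical tile index 0..17
    return h, v

# A point in central Europe falls in tile h18v04.
print(modis_tile(50.0, 10.0))
```

In practice one would map each point of interest to its (h, v) tile, then request only those tiles for the desired date range rather than the full global grid.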
Plakke, Bethany; Romanski, Lizabeth M.
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Kovelman, Ioulia; Wagley, Neelima; Hay, Jessica S F; Ugolini, Margaret; Bowyer, Susan M; Lajiness-O'Neill, Renee; Brennan, Jonathan
New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable-length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English-speaking children (aged 6-12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise-time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development. © 2014 New York Academy of Sciences.
Ma, Xiaoran; McPherson, Bradley; Ma, Lian
Cleft lip and/or palate is a common congenital craniofacial malformation found worldwide. A frequently associated disorder is conductive hearing loss, and this disorder has been thoroughly investigated in children with non-syndromic cleft lip and/or palate (NSCL/P). However, analysis of auditory processing function is rarely reported for this population, although this issue should not be ignored since abnormal auditory cortical structures have been found in populations with cleft disorders. The present study utilized electrophysiological tests to assess the auditory status of a large group of children with NSCL/P, and investigated whether this group had less robust central auditory processing abilities compared to craniofacially normal children. 146 children with NSCL/P who had normal peripheral hearing thresholds, and 60 craniofacially normal children aged from 6 to 15 years, were recruited. Electrophysiological tests, including auditory brainstem response (ABR), P1-N1-P2 complex, and P300 component recording, were conducted. ABR and N1 wave latencies were significantly prolonged in children with NSCL/P. An atypical developmental trend was found for long latency potentials in children with cleft compared to control group children. Children with unilateral cleft lip and palate showed a greater level of abnormal results compared with other cleft subgroups, whereas the cleft lip subgroup had the most robust responses for all tests. Children with NSCL/P may have slower than normal neural transmission times between the peripheral auditory nerve and brainstem. Possible delayed development of myelination and synaptogenesis may also influence auditory processing function in this population. Present research outcomes were consistent with previous, smaller sample size, electrophysiological studies on infants and children with cleft lip/palate disorders. In view of these findings, and reports of educational disadvantage associated with cleft disorders, further research
Hakvoort, Britt; van der Leij, Aryan; Maurits, Natasha; Maassen, Ben; van Zuijen, Titia L
Less proficient basic auditory processing has previously been connected to dyslexia. However, it is unclear whether a low proficiency level is a correlate of having a familial risk for reading problems, or whether it causes dyslexia. In this study, children's processing of amplitude rise time (ART), intensity and frequency differences was measured with event-related potentials (ERPs). The ERP components of interest are components reflective of auditory change detection: the mismatch negativity (MMN) and the late discriminative negativity (LDN). All groups had an MMN to changes in ART and frequency, but not to intensity. Our results indicate that fluent readers at risk for dyslexia, poor readers at risk for dyslexia and fluent reading controls have an LDN to changes in ART and frequency, though the scalp activation of frequency processing was different for familial risk children. On intensity, only controls showed an LDN. Contrary to previous findings, our results suggest that neither ART nor frequency processing is related to reading fluency. Furthermore, our results imply that diminished sensitivity to changes in intensity and differential lateralization of frequency processing should be regarded as correlates of being at familial risk for dyslexia that do not directly relate to reading fluency. Copyright © 2014 Elsevier Ltd. All rights reserved.
Putze, Felix; Hesslinger, Sebastian; Tse, Chun-Yu; Huang, YunYing; Herff, Christian; Guan, Cuntai; Schultz, Tanja
For multimodal Human-Computer Interaction (HCI), it is very useful to identify the modalities on which the user is currently processing information. This would enable a system to select complementary output modalities to reduce the user's workload. In this paper, we develop a hybrid Brain-Computer Interface (BCI) which uses Electroencephalography (EEG) and functional Near Infrared Spectroscopy (fNIRS) to discriminate and detect visual and auditory stimulus processing. We describe the experimental setup we used for collection of our data corpus with 12 subjects. On these data, we performed a cross-validation evaluation and report accuracy for different classification conditions. The results show that the subject-dependent systems achieved a classification accuracy of 97.8% for discriminating visual and auditory perception processes from each other and a classification accuracy of up to 94.8% for detecting modality-specific processes independently of other cognitive activity. The same classification conditions could also be discriminated in a subject-independent fashion with accuracy of up to 94.6% and 86.7%, respectively. We also look at the contributions of the two signal types and show that the fusion of classifiers using different features significantly increases accuracy. PMID:25477777
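The "fusion of classifiers using different features" mentioned in the abstract can be sketched as simple decision-level (soft-voting) fusion of two modality-specific classifiers. The function, the weighting scheme, and the probability values below are illustrative assumptions, not the authors' actual pipeline:

```python
# Decision-level fusion sketch: one classifier trained on EEG features,
# one on fNIRS features, each outputting a class-probability vector.
# Probability values here are made up for illustration; a real system
# would obtain them from trained models.

def fuse(prob_a, prob_b, weight_a=0.5):
    """Soft voting: weighted average of two classifiers' class-probability
    vectors, then pick the index of the most probable class."""
    fused = [weight_a * pa + (1.0 - weight_a) * pb
             for pa, pb in zip(prob_a, prob_b)]
    return max(range(len(fused)), key=fused.__getitem__), fused

# Class 0 = visual processing, class 1 = auditory processing.
eeg_probs   = [0.40, 0.60]   # EEG classifier slightly favors auditory
fnirs_probs = [0.80, 0.20]   # fNIRS classifier strongly favors visual
label, fused = fuse(eeg_probs, fnirs_probs)
print(label, fused)          # fused probabilities are [0.60, 0.40]
```

Averaging the two probability vectors lets a confident classifier outvote an uncertain one, which is one plausible way combining EEG and fNIRS features can raise accuracy over either signal alone.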