Boscariol, Mirela; Casali, Raquel L; Amaral, M Isabel R; Lunardi, Luciane L; Matas, Carla G; Collela-Santos, M Francisca; Guerreiro, Marilisa M
Because of the relationship between rolandic, temporoparietal, and centrotemporal areas and language and auditory processing, the aim of this study was to investigate language and central temporal auditory processing in children with epilepsy (rolandic epilepsy and temporal lobe epilepsy) and to compare them with those of children without epilepsy. Thirty-five children aged between 8 and 14 years were studied, in two groups: a group with childhood epilepsy (n=19) and a control group without epilepsy or language alterations (n=16). There were significant differences between the two groups, with worse performance by the children with epilepsy on the gaps-in-noise test (right ear), receptive vocabulary (PPVT) (p<0.001), and phonological working memory (nonword repetition task) (p=0.001). We conclude that impairments of central temporal auditory processing and language skills may be comorbidities in children with rolandic epilepsy and temporal lobe epilepsy. PMID:26580215
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study we further addressed the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory-auditory interaction in speech perception. We examined changes in event-related potentials in response to synchronous (simultaneous) and asynchronous (90 ms lag and lead) combined somatosensory and auditory stimulation, compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial-skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the event-related potential was reliably different from the two unisensory potentials. More importantly, the magnitude of the event-related potential difference varied as a function of the relative timing of the somatosensory-auditory stimulation. Event-related activity changes due to stimulus timing were seen between 160-220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory-auditory convergence and suggest that the contribution of somatosensory information to speech perceptual processing depends on the specific temporal ordering of sensory inputs in speech production.
Bishop-Liebler, Paula; Welch, Graham; Huss, Martina; Thomson, Jennifer M; Goswami, Usha
The core cognitive difficulty in developmental dyslexia involves phonological processing, but adults and children with dyslexia also have sensory impairments. Impairments in basic auditory processing show particular links with phonological impairments, and recent studies with dyslexic children across languages reveal a relationship between auditory temporal processing and sensitivity to rhythmic timing and speech rhythm. As rhythm is explicit in music, musical training might have a beneficial effect on the auditory perception of acoustic cues to rhythm in dyslexia. Here we took advantage of the presence of musicians with and without dyslexia in musical conservatoires, comparing their auditory temporal processing abilities with those of dyslexic non-musicians matched for cognitive ability. Musicians with dyslexia showed equivalent auditory sensitivity to musicians without dyslexia and also showed equivalent rhythm perception. The data support the view that extensive rhythmic experience initiated during childhood (here in the form of music training) can affect basic auditory processing skills which are found to be deficient in individuals with dyslexia. PMID:25044949
Boets, Bart; Verhoeven, Judith; Wouters, Jan; Steyaert, Jean
We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM)…
Groen, W.B.; Orsouw, L. van; Huurne, N.; Swinkels, S.H.N.; Gaag, R.J. van der; Buitelaar, J.K.; Zwiers, M.P.
The perceptual pattern in autism has been related to either a specific localized processing deficit or a pathway-independent, complexity-specific anomaly. We examined auditory perception in autism using an auditory disembedding task that required spectral and temporal integration. 23 children with h
Background and Aim: Auditory temporal processing reflects an important aspect of auditory performance, and a deficit in it can hinder a child's speech, language learning, and reading. Temporal resolution, a subcomponent of temporal processing, can be evaluated with the gaps-in-noise (GIN) detection test. Given the relation between auditory temporal processing deficits and the phonological disorder of children with dyslexia-dysgraphia, the aim of this study was to evaluate these children with the GIN test. Methods: The GIN test was performed on 28 normal and 24 dyslexic-dysgraphic children aged 11-12 years. Mean approximate thresholds and percentages of correct answers were compared between the groups. Results: Neither measure differed significantly between the right and left ears (p>0.05). The mean approximate threshold of the children with dyslexia-dysgraphia (6.97 ms, SD=1.09) was significantly higher (p<0.001) than that of the normal group (5.05 ms, SD=0.92), and their mean percentage of correct answers (58.05%, SD=4.98) was significantly lower than that of the normal group (69.97%, SD=7.16) (p<0.001). Conclusion: Abnormal temporal resolution was found in children with dyslexia-dysgraphia on the GIN test. Since the brainstem and auditory cortex are responsible for auditory temporal processing, structural and functional differences in these areas between normal and dyslexic-dysgraphic children probably lead to abnormal coding of auditory temporal information, making deficits in auditory temporal processing inevitable.
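The kind of group comparison reported above can be sketched numerically. The following is an illustrative sketch only, with invented numbers and a simplified threshold rule (shortest gap detected in at least 4 of its 6 presentations, a common GIN criterion); it is not the study's data or analysis code.

```python
# Illustrative sketch: a GIN approximate threshold and a Welch t-test
# comparing two groups, using only the standard library. All numbers are
# hypothetical, not taken from the study.
import math
from statistics import mean, stdev

def gin_threshold(trials):
    """trials: {gap_ms: (n_detected, n_presented)} -> shortest gap (ms)
    detected in at least 4 of 6 presentations, or None."""
    detected = [gap for gap, (hit, n) in trials.items() if hit / n >= 4 / 6]
    return min(detected) if detected else None

def welch_t(a, b):
    """Welch t-statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

# Hypothetical per-gap responses for one listener.
trials = {2: (1, 6), 3: (2, 6), 4: (3, 6), 5: (4, 6), 6: (5, 6), 8: (6, 6)}
print(gin_threshold(trials))  # 5 (ms)

controls = [4.8, 5.1, 5.3, 4.9, 5.2]   # hypothetical thresholds (ms)
dyslexic = [6.8, 7.1, 7.0, 6.9, 7.2]
print(round(welch_t(dyslexic, controls), 1))  # 16.6
```

A large positive t, as here, corresponds to the higher (worse) thresholds reported for the dyslexic-dysgraphic group.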
Schrode, Katrina M; Bee, Mark A
Sensory systems function most efficiently when processing natural stimuli, such as vocalizations, and it is thought that this reflects evolutionary adaptation. Among the best-described examples of evolutionary adaptation in the auditory system are the frequent matches between spectral tuning in both the peripheral and central auditory systems of anurans (frogs and toads) and the frequency spectra of conspecific calls. Tuning to the temporal properties of conspecific calls is less well established, and in anurans has so far been documented only in the central auditory system. Using auditory-evoked potentials, we asked whether there are species-specific or sex-specific adaptations of the auditory systems of gray treefrogs (Hyla chrysoscelis) and green treefrogs (H. cinerea) to the temporal modulations present in conspecific calls. Modulation rate transfer functions (MRTFs) constructed from auditory steady-state responses revealed that each species was more sensitive than the other to the modulation rates typical of conspecific advertisement calls. In addition, auditory brainstem responses (ABRs) to paired clicks indicated relatively better temporal resolution in green treefrogs, which could represent an adaptation to the faster modulation rates present in the calls of this species. MRTFs and recovery of ABRs to paired clicks were generally similar between the sexes, and we found no evidence that males were more sensitive than females to the temporal modulation patterns characteristic of the aggressive calls used in male-male competition. Together, our results suggest that efficient processing of the temporal properties of behaviorally relevant sounds begins at potentially very early stages of the anuran auditory system that include the periphery. PMID:25617467
Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...
Elliott, Taffeta M; Christensen-Dalsgaard, Jakob; Kelley, Darcy B
Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of aud...... compute temporally selective receptive fields are described....
Nickisch, Andreas; Massinger, Claudia
Background/Aims: Specific language impairment (SLI) is believed to be associated with nonverbal auditory (NVA) deficits. It remains unclear, however, whether children with SLI show deficits in auditory time processing, time processing in general, frequency discrimination (FD), or NVA processing in general. Patients and Methods: Twenty-seven children (aged 8-11) with SLI and 27 control children (CG), matched for age and gender, were retrospectively compared with regard to their performance on ...
Čeponienė, R.; Cummings, A.; Wulfeck, B.; Ballantyne, A; Townsend, J.
Pre-linguistic sensory deficits, especially in “temporal” processing, have been implicated in developmental Language Impairment (LI). However, recent evidence has been equivocal with data suggesting problems in the spectral domain. The present study examined event-related potential (ERP) measures of auditory sensory temporal and spectral processing, and their interaction, in typical children and those with LI (7–17 years; n=25 per group). The stimuli were 3 CV syllables and 3 consonant-to-vow...
Carol Q Pham
Cochlear implant (CI) listeners have difficulty understanding speech in complex listening environments. This deficit is thought to be due largely to peripheral encoding problems arising from current spread, which results in wide peripheral filters. In normal-hearing (NH) listeners, central processing contributes to the segregation of speech from competing sounds. We tested the hypothesis that basic central processing abilities are retained in post-lingually deaf CI listeners but that processing is hampered by degraded input from the periphery. In eight CI listeners, we measured auditory nerve compound action potentials to characterize peripheral filters. Then we measured psychophysical detection thresholds in the presence of multi-electrode maskers placed either inside (peripheral masking) or outside (central masking) the peripheral filter. This was intended to distinguish peripheral from central contributions to signal detection. Introducing temporal asynchrony between the signal and masker improved signal detection in both peripheral and central masking conditions for all CI listeners. Randomly varying components of the masker created spectral-variance cues, which seemed to benefit only two of the eight CI listeners. In contrast, the spectral-variance cues improved signal detection in all five NH listeners who listened to our CI simulation. Together these results indicate that widened peripheral filters significantly hamper central processing of spectral-variance cues, but not of temporal cues, in post-lingually deaf CI listeners. As indicated by two CI listeners in our study, however, post-lingually deaf CI listeners may retain some central processing abilities similar to those of NH listeners.
Papakonstantinou, Alexandra; Strelcyk, Olaf; Dau, Torsten
This study investigates behavioural and objective measures of temporal auditory processing and their relation to the ability to understand speech in noise. The experiments were carried out on a homogeneous group of seven hearing-impaired listeners with normal sensitivity at low frequencies (up to...
Ongoing spontaneous activity in cortical circuits defines cortical states, but it still remains unclear how cortical states shape sensory processing across cortical laminae and what type of response properties emerge in the cortex. Recording neural activity from the auditory cortex (AC) and medial geniculate body (MGB) simultaneously with electrical stimulations of the basal forebrain (BF) in urethane-anesthetized rats, we investigated state-dependent spontaneous and auditory-evoked activitie...
Moore, Brian C J
Within the cochlea, broadband sounds like speech and music are filtered into a series of narrowband signals, each of which can be considered as a relatively slowly varying envelope (ENV) imposed on a rapidly oscillating carrier (the temporal fine structure, TFS). Information about ENV and TFS is conveyed in the timing and short-term rate of nerve spikes in the auditory nerve. There is evidence that both hearing loss and increasing age adversely affect the ability to use TFS information, but in many studies the effects of hearing loss and age have been confounded. This paper summarises evidence from studies that allow some separation of the effects of hearing loss and age. The results suggest that the monaural processing of TFS information, which is important for the perception of pitch and for segregating speech from background sounds, is adversely affected by both hearing loss and increasing age, the former being more important. The monaural processing of ENV information is hardly affected by hearing loss or by increasing age. The binaural processing of TFS information, which is important for sound localisation and the binaural masking level difference, is also adversely affected by both hearing loss and increasing age, but here the latter seems more important. The deterioration of binaural TFS processing with increasing age appears to start relatively early in life. The binaural processing of ENV information also deteriorates somewhat with increasing age. The reduced binaural processing abilities found for older/hearing-impaired listeners may partially account for the difficulties that such listeners experience in situations where the target speech and interfering sounds come from different directions in space, as is common in everyday life. PMID:27080640
Ludmilla Vilas Boas
Hearing plays an important role in the development and social adaptation of blind people. OBJECTIVE: To evaluate temporal auditory processing in blind people; to characterize temporal resolution ability; to characterize temporal ordering ability; and to compare the performance of the study population across the applied tests. METHODS: Fifteen blind adults participated in this cross-sectional study, which was approved by the Pernambuco Catholic University Ethics Committee (no. 003/2008). Data were collected with both versions of the RGDT and with the duration-pattern and frequency-pattern tests. RESULTS: Temporal auditory processing was excellent: the average composite threshold was 4.98 ms in the original RGDT version and 50 ms for all frequencies in the expanded version. PPS and DPS results ranged from 95% to 100%. There were no quantitative differences across tests, but oral reports suggested that the original RGDT version was more difficult. CONCLUSIONS: The study sample performed well in temporal auditory processing, including temporal resolution and temporal ordering abilities.
Geiser, Eveline; Kjelgaard, Margaret; Christodoulou, Joanna A.; Cyr, Abigail; Gabrieli, John D. E.
Reading disability in children with dyslexia has been proposed to reflect impairment in auditory timing perception. We investigated one aspect of timing perception--"temporal grouping"--as present in prosodic phrase boundaries of natural speech, in age-matched groups of children, ages 6-8 years, with and without dyslexia. Prosodic phrase…
Shen, Dawei; Alain, Claude
Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., the probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in the P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.
The present thesis set out to investigate how sensory modality and spatial presentation influence visual and auditory duration judgments in the millisecond range. The effects of modality and spatial location were explored by considering right and left side presentations of mixed or blocked visual and auditory stimuli. Several studies have shown that perceived duration of a stimulus can be affected by various extra-temporal factors such as modality and spatial position. Audit...
Thwaites, Andrew; Nimmo-Smith, Ian; Fonteneau, Elisabeth; Patterson, Roy D.; Buttery, Paula; Marslen-Wilson, William D.
A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two set...
Hearing loss with increasing age adversely affects the ability to understand speech, an effect that results partly from reduced audibility. The aims of this study were to establish whether aging reduces speech intelligibility for listeners with normal audiograms and, if so, to assess the relative contributions of auditory temporal and cognitive processing. Twenty-one older normal-hearing (ONH; 60-79 years) participants with bilateral audiometric thresholds ≤ 20 dB HL at 0.125-6 kHz were matched to nine young normal-hearing (YNH; 18-27 years) participants in terms of mean audiograms, years of education, and performance IQ. Measures included: (1) identification of consonants in quiet and in noise that was unmodulated or modulated at 5 or 80 Hz; (2) identification of sentences in quiet and in co-located or spatially separated two-talker babble; (3) detection of modulation of the temporal envelope (TE) at frequencies of 5-180 Hz; (4) monaural and binaural sensitivity to temporal fine structure (TFS); (5) various cognitive tests. Speech identification was worse for ONH than for YNH participants in all types of background. This deficit was not reflected in self-ratings of hearing ability. Modulation masking release (the improvement in speech identification obtained by amplitude-modulating a noise background) and spatial masking release (the benefit obtained from spatially separating masker and target speech) were not affected by age. Sensitivity to TE and TFS was lower for ONH than for YNH participants and was correlated positively with speech-in-noise (SiN) identification. Many cognitive abilities were lower for ONH than for YNH participants and generally were correlated positively with SiN identification scores. The best predictors of SiN intelligibility were composite measures of cognition and TFS sensitivity. These results suggest that declines in speech perception in older persons are partly caused by cognitive and perceptual changes separate from age-related changes in audiometric ...
Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.
Hanson, Jessica L; Rose, Gary J; Leary, Christopher J; Graham, Jalina A; Alluri, Rishi K; Vasquez-Opazo, Gustavo A
In recently diverged gray treefrogs (Hyla chrysoscelis and H. versicolor), advertisement calls that differ primarily in pulse shape and pulse rate act as an important premating isolation mechanism. Temporally selective neurons in the anuran inferior colliculus may contribute to selective behavioral responses to these calls. Here we present in vivo extracellular and whole-cell recordings from long-interval-selective neurons (LINs) made during presentation of pulses that varied in shape and rate. Whole-cell recordings revealed that interplay between excitation and inhibition shapes long-interval selectivity. LINs in H. versicolor showed greater selectivity for slow-rise pulses, consistent with the slow-rise pulse characteristics of their calls. The steepness of pulse-rate tuning functions, but not the distributions of best pulse rates, differed between the species in a manner that depended on whether pulses had slow or fast-rise shape. When tested with stimuli representing the temporal structure of the advertisement calls of H. chrysoscelis or H. versicolor, approximately 27 % of LINs in H. versicolor responded exclusively to the latter stimulus type. The LINs of H. chrysoscelis were less selective. Encounter calls, which are produced at similar pulse rates in both species (≈5 pulses/s), are likely to be effective stimuli for the LINs of both species. PMID:26614093
For patients with pharmaco-resistant temporal lobe epilepsy, unilateral anterior temporal lobectomy (ATL), i.e., the surgical resection of the hippocampus, the amygdala, the temporal pole, and the most anterior part of the temporal gyri, is an effective treatment. There is growing evidence that anterior regions of the temporal lobe are involved in the integration and short-term memorization of object-related sound properties. However, non-verbal auditory processing in patients with temporal lobe epilepsy (TLE) has received little attention. To assess non-verbal auditory cognition in patients with TLE both before and after unilateral ATL, we developed a set of non-verbal auditory tests, including environmental sounds, evaluating auditory semantic identification, acoustic and object-related short-term memory, and sound extraction from a sound mixture. The performances of 26 TLE patients before and/or after ATL were compared to those of 18 healthy subjects. Patients both before and after ATL presented with similar deficits in pitch retention and in identification and short-term memorization of environmental sounds, while showing no impairment in basic acoustic processing relative to healthy subjects. It is most likely that the deficits observed before and after ATL are related to epileptic neuropathological processes. Therefore, in patients with drug-resistant TLE, ATL seems to improve seizure control significantly without producing additional auditory deficits.
Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of the proposed dual-temporal-window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied durations. We show that non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding oscillatory frequencies (low gamma and theta bands), whereas stimuli with non-matching temporal structure do not. Furthermore, the topography of theta-band phase tracking shows rightward lateralization, while gamma-band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time-resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time-resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
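Phase tracking of this kind is commonly quantified as inter-trial phase coherence: the length of the mean resultant vector of per-trial phases at a given frequency and time point (1 = perfect tracking, near 0 = random phase). A minimal sketch with invented phase values, not the study's MEG data or analysis pipeline:

```python
# Illustrative sketch: inter-trial phase coherence from per-trial phase
# angles (radians). All phase values below are invented for demonstration.
import cmath
import math
import random

def phase_coherence(phases):
    """Length of the mean resultant vector of the given phase angles."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

random.seed(0)
# "Tracked" trials cluster around a common phase; "untracked" trials are random.
tracked = [0.1 + random.gauss(0, 0.3) for _ in range(50)]
untracked = [random.uniform(-math.pi, math.pi) for _ in range(50)]
print(round(phase_coherence(tracked), 2))    # close to 1
print(round(phase_coherence(untracked), 2))  # close to 0
```

The contrast between the two printed values mirrors the reliable versus absent phase tracking reported for matching versus non-matching stimulus structure.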
Muñoz-López, M; Insausti, R; Mohedano-Moriano, A; Mishkin, M; Saunders, R C
Auditory recognition memory in non-human primates differs from recognition memory in other sensory systems. Monkeys learn the rule for visual and tactile delayed matching-to-sample within a few sessions and then show one-trial recognition memory lasting 10-20 min. In contrast, monkeys require hundreds of sessions to master the rule for auditory recognition, and then show retention lasting no longer than 30-40 s. Moreover, unlike the severe effects of rhinal lesions on visual memory, such lesions have no effect on the monkeys' auditory memory performance. The anatomical pathways for auditory memory may differ from those for vision. Long-term visual recognition memory requires anatomical connections from visual association area TE to areas 35 and 36 of the perirhinal cortex (PRC). We examined whether there is a similar anatomical route for auditory processing, or whether poor auditory recognition memory may reflect the lack of such a pathway. Our hypothesis is that an auditory pathway for recognition memory originates in the higher-order processing areas of the rostral superior temporal gyrus (rSTG) and then connects via the dorsolateral temporal pole to access the rhinal cortex of the medial temporal lobe. To test this, we placed retrograde (3% FB and 2% DY) and anterograde (10% BDA, 10,000 MW) tracer injections in the rSTG and the dorsolateral area 38DL of the temporal pole. Results showed that area 38DL receives dense projections from auditory association areas Ts1, TAa, and TPO of the rSTG, from the rostral parabelt and, to a lesser extent, from areas Ts2-3 and PGa. In turn, area 38DL projects densely to area 35 of the PRC, to the entorhinal cortex (EC), and to areas TH/TF of the posterior parahippocampal cortex. Significantly, this projection avoids most of area 36r/c of the PRC. This anatomical arrangement may contribute to our understanding of the poor auditory memory of rhesus monkeys. PMID:26041980
Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. A parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network of cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper, and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.
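A cross-correlational prediction index of the kind mentioned above can be sketched as follows. This is a hypothetical illustration, not the study's analysis code: a "predictor", whose inter-tap intervals co-vary with the current pacing interval (lag 0), scores above 1, while a "tracker", who copies the previous pacing interval (lag 1), scores below 1.

```python
# Illustrative sketch: lag-0 vs lag-1 cross-correlation between inter-tap
# intervals (ITIs) and pacing inter-onset intervals (IOIs). All interval
# values are invented for demonstration.
from statistics import mean

def xcorr(x, y):
    """Pearson correlation of two equal-length series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def prediction_index(itis, iois):
    """|r(lag 0)| / |r(lag 1)|: >1 suggests prediction, <1 suggests tracking."""
    lag0 = xcorr(itis, iois)
    lag1 = xcorr(itis[1:], iois[:-1])  # taps vs the *previous* pacing interval
    return abs(lag0) / abs(lag1)

# Pacing IOIs (ms) with a varying tempo; the tapper's ITIs follow them at lag 0.
iois = [600, 560, 620, 580, 640, 600, 560, 620]
predictor_itis = [598, 562, 618, 583, 641, 597, 563, 619]
print(prediction_index(predictor_itis, iois) > 1)  # a predictor scores above 1
```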
In this article, we review recent findings from our laboratory that auditory hallucinations in schizophrenia are internally generated speech mis-representations lateralized to the left superior temporal gyrus and sulcus. Such experiences are, moreover, not cognitively suppressed due to enhanced attention to the voices and failure of fronto-parietal executive control functions. An overview of diagnostic questionnaires for scoring of symptoms is presented, together with a review of behavioural, structural and functional MRI data. Functional imaging data have shown either increased or decreased activation depending on whether patients have been presented an external stimulus or not during scanning. Structural imaging data have shown reduction of grey matter density and volume in the same areas in the temporal lobe. The behavioral and neuroimaging findings are moreover hypothesized to be related to glutamate hypofunction in schizophrenia. We propose a model for the understanding of auditory hallucinations that traces the origin of auditory hallucinations to uncontrolled neuronal firing in the speech areas in the left temporal lobe, which is not suppressed by volitional cognitive control processes, due to dysfunctional fronto-parietal executive cortical networks.
Bidelman, Gavin M; Syed Khaja, Ameenuddin
Auditory filter theory dictates a physiological compromise between frequency and temporal resolution of cochlear signal processing. We examined neurophysiological correlates of these spectrotemporal tradeoffs in the human auditory system using auditory evoked brain potentials and psychophysical responses. Temporal resolution was assessed using scalp-recorded auditory brainstem responses (ABRs) elicited by paired clicks. The inter-click interval (ICI) between successive pulses was parameterized from 0.7 to 25 ms to map ABR amplitude recovery as a function of stimulus spacing. Behavioral frequency difference limens (FDLs) and auditory filter selectivity (Q10 of psychophysical tuning curves) were obtained to assess relations between behavioral spectral acuity and electrophysiological estimates of temporal resolvability. Neural responses increased monotonically in amplitude with increasing ICI, ranging from total suppression (0.7 ms) to full recovery (25 ms) with a temporal resolution of ∼3-4 ms. ABR temporal thresholds were correlated with behavioral Q10 (frequency selectivity) but not FDLs (frequency discrimination); no correspondence was observed between Q10 and FDLs. Results suggest that finer frequency selectivity, but not discrimination, is associated with poorer temporal resolution. The inverse relation between ABR recovery and perceptual frequency tuning demonstrates a time-frequency tradeoff between the temporal and spectral resolving power of the human auditory system. PMID:24793771
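The Q10 measure used above is conventionally defined as the probe frequency divided by the bandwidth of the psychophysical tuning curve measured 10 dB above its tip. A hedged sketch of that computation, assuming masker frequencies and masker levels at threshold as inputs (the interpolation scheme is an illustrative choice, not the study's):

```python
def q10(freqs, levels, probe_freq):
    """Q10 = probe frequency / tuning-curve bandwidth measured
    10 dB above the tip (the minimum masker level at threshold)."""
    tip_level = min(levels)
    target = tip_level + 10.0
    i_tip = levels.index(tip_level)

    def crossing(pairs):
        # Linearly interpolate the frequency where level == target.
        for (f1, l1), (f2, l2) in zip(pairs, pairs[1:]):
            if (l1 - target) * (l2 - target) <= 0 and l1 != l2:
                return f1 + (target - l1) * (f2 - f1) / (l2 - l1)
        raise ValueError("tuning curve never reaches tip + 10 dB")

    # Walk each flank outward from the tip.
    low_flank = list(zip(freqs[:i_tip + 1], levels[:i_tip + 1]))[::-1]
    high_flank = list(zip(freqs[i_tip:], levels[i_tip:]))
    f_low, f_high = crossing(low_flank), crossing(high_flank)
    return probe_freq / (f_high - f_low)
```

Sharper (narrower) tuning yields a smaller 10-dB bandwidth and hence a larger Q10.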
This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.
present topics on signal processing which are important in a specific area of acoustics. These will be of interest to specialists in these areas because they will be presented from their technical perspective, rather than a generic engineering approach to signal processing. Non-specialists, or specialists...
Mishra, Srikanta K; Panda, Manasa R
Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties. PMID:24264076
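Gap detection thresholds such as those reported above are typically estimated with an adaptive staircase; the abstract does not specify the procedure, so the following 2-down/1-up sketch (which converges on the 70.7%-correct point) is an illustrative assumption, with `respond` standing in for a listener's trial-by-trial responses:

```python
def two_down_one_up(respond, start, step, n_reversals=8):
    """2-down/1-up adaptive staircase: the level drops after two
    consecutive correct responses and rises after each error; the
    threshold is the mean level at the last `n_reversals` reversals."""
    level, streak, reversals, direction = start, 0, [], -1
    while len(reversals) < n_reversals:
        if respond(level):
            streak += 1
            if streak == 2:  # two correct in a row -> step down
                streak = 0
                if direction == +1:
                    reversals.append(level)
                direction = -1
                level = max(level - step, step)
        else:  # any error -> step up
            streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    return sum(reversals) / len(reversals)
```

With a deterministic listener who always detects gaps of 5 ms or longer, the track oscillates around that boundary.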
王晓怡; 卢洁; 李坤成; 张苗; 徐国庆; 舒华
Objective: To explore the neural mechanism of auditory Chinese lexical processing in the left anterior temporal lobe (ATL) of healthy participants using functional magnetic resonance imaging (fMRI). Methods: Fifteen right-handed healthy participants (5 males, 10 females) were asked either to repeat auditory words or to judge whether the auditory items denoted something dangerous (a semantic judgment task). AFNI was used to process the fMRI data and to localize functional areas and task differences in the anterior temporal lobe. Results: Phonological processing of auditory Chinese lexical information was located in the anterior superior temporal gyrus, and semantic processing in the anterior middle and inferior temporal gyri: the semantic judgment task activated the left anterior middle and inferior temporal gyri more than the repetition task, whereas the repetition task activated the left anterior superior temporal gyrus more than the semantic judgment task. Phonological and semantic processing of auditory Chinese words were thus segregated. Conclusion: The ATL supports semantic integration. Two pathways to semantic access exist: a direct pathway in the dorsal temporal lobe for the repetition task and an indirect pathway in the ventral temporal lobe for the semantic judgment task.
McNorgan, Chris; Awati, Neha; Desroches, Amy S.; Booth, James R.
Literacy is a uniquely human cross-modal cognitive process wherein visual orthographic representations become associated with auditory phonological representations through experience. Developmental studies provide insight into how experience-dependent changes in brain organization influence phonological processing as a function of literacy. Previous investigations show a synchrony-dependent influence of letter presentation on individual phoneme processing in superior temporal sulcus; others d...
Cristina Ferraz Borges Murphy
Previous studies have investigated the effects of auditory temporal training on language disorders. Recently, the effects of new approaches, such as musical training and the use of software, have also been considered. To investigate the effects of different auditory temporal training approaches on language skills, we reviewed the available literature on musical training, the use of software and formal auditory training by searching the SciELO, MEDLINE, LILACS-BIREME and EMBASE databases. Study Design: Systematic review. Results: Using evidence levels I and II as the criteria, 29 of the 523 papers found were deemed relevant to one of the topics (use of software - 13 papers; formal auditory training - six papers; and musical training - 10 papers). Of the three approaches, studies that investigated the use of software and musical training had the highest levels of evidence; however, these studies also raised concerns about the hypothesized relationship between auditory temporal processing and language. Future studies are necessary to investigate the actual contribution of these three types of auditory temporal training to language skills.
Holt, Marla M.
Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address spatial auditory processing capabilities of pinnipeds in air given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina, the California sea lion, Zalophus californianus, and the northern elephant seal, Mirounga angustirostris). While performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated resulting in improvement in signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally-mediated interactions, and an acoustic playback study.
This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…
Watson, Betty U.; Miller, Theodore K.
This study of 94 college undergraduates, including 24 with a reading disability, found that speech perception was strongly related to 3 of 4 phonological variables, including short-term and long-term auditory memory and phoneme segmentation, which were in turn strongly related to reading. Nonverbal temporal processing was not related to any…
Svirskis, Gytis; Dodla, Ramana; Rinzel, John
Many auditory neurons possess low-threshold potassium currents ( I(KLT)) that enhance their responsiveness to rapid and coincident inputs. We present recordings from gerbil medial superior olivary (MSO) neurons in vitro and modeling results that illustrate how I(KLT) improves the detection of brief signals, of weak signals in noise, and of the coincidence of signals (as needed for sound localization). We quantify the enhancing effect of I(KLT) on temporal processing with several measures: signal-to-noise ratio (SNR), reverse correlation or spike-triggered averaging of input currents, and interaural time difference (ITD) tuning curves. To characterize how I(KLT), which activates below spike threshold, influences a neuron's voltage rise toward threshold, i.e., how it filters the inputs, we focus first on the response to weak and noisy signals. Cells and models were stimulated with a computer-generated steady barrage of random inputs, mimicking weak synaptic conductance transients (the "noise"), together with a larger but still subthreshold postsynaptic conductance, EPSG (the "signal"). Reduction of I(KLT) decreased the SNR, mainly due to an increase in spontaneous firing (more "false positives"). The spike-triggered reverse correlation indicated that I(KLT) shortened the integration time for spike generation. I(KLT) also heightened the model's timing selectivity for coincidence detection of simulated binaural inputs. Further, ITD tuning is shifted in favor of a slope code rather than a place code by precise and rapid inhibition onto MSO cells (Brand et al. 2002). In several ways, low-threshold outward currents are seen to shape integration of weak and strong signals in auditory neurons. PMID:14669013
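The spike-triggered averaging used above amounts to averaging the stretch of input current that immediately precedes each spike. A minimal sketch under the assumption of a regularly sampled stimulus and spike times given as sample indices (names and layout are illustrative):

```python
def spike_triggered_average(stimulus, spike_indices, window):
    """Average the `window` stimulus samples immediately preceding each
    spike; spikes too close to the start of the record are skipped."""
    segments = [stimulus[i - window:i] for i in spike_indices if i >= window]
    if not segments:
        raise ValueError("no usable spikes")
    n = len(segments)
    # Average sample-by-sample across the pre-spike segments.
    return [sum(samples) / n for samples in zip(*segments)]
```

A narrow, sharply peaked average indicates a short integration time for spike generation, as the abstract describes for intact I(KLT).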
Mishra, Srikanta K; Panda, Manasa R
The rapid initial phase of training-induced improvement has been shown to reflect a genuine sensory change in perception. Several features of early and rapid learning, such as generalization and stability, remain to be characterized. The present study demonstrated that learning effects from brief training on a temporal gap detection task using spectrally similar narrowband noise markers defining the gap (within-channel task), transfer across ears, however, not across spectrally dissimilar markers (between-channel task). The learning effects associated with brief training on a gap detection task were found to be stable for at least a day. These initial findings have significant implications for characterizing early and rapid learning effects. PMID:27475211
Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer
Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole. PMID:20881120
Hartley, Douglas E. H.; Moore, David R.
The "temporal processing hypothesis" suggests that individuals with specific language impairments (SLIs) and dyslexia have severe deficits in processing rapidly presented or brief sensory information, both within the auditory and visual domains. This hypothesis has been supported through evidence that language-impaired individuals have excess auditory backward masking. This paper presents an analysis of masking results from several studies in terms of a model of temporal resolution. Results from this modeling suggest that the masking results can be better explained by an "auditory efficiency" hypothesis. If impaired or immature listeners have a normal temporal window, but require a higher signal-to-noise level (poor processing efficiency), this hypothesis predicts the observed small deficits in the simultaneous masking task, and the much larger deficits in backward and forward masking tasks amongst those listeners. The difference in performance on these masking tasks is predictable from the compressive nonlinearity of the basilar membrane. The model also correctly predicts that backward masking (i) is more prone to training effects, (ii) has greater inter- and intrasubject variability, and (iii) increases less with masker level than do other masking tasks. These findings provide a new perspective on the mechanisms underlying communication disorders and auditory masking.
Sugano, Yoshimori; Keetels, Mirjam; Vroomen, Jean
Perception of synchrony between one's own action (e.g. a finger tap) and the sensory feedback thereof (e.g. a flash or click) can be shifted after exposure to an induced delay (temporal recalibration effect, TRE). It remains elusive, however, whether the same mechanism underlies motor-visual (MV) and motor-auditory (MA) TRE. We examined this by measuring crosstalk between MV- and MA-delayed feedbacks. During an exposure phase, participants pressed a mouse at a constant pace while receiving visual or auditory feedback that was either delayed (+150 ms) or subjectively synchronous (+50 ms). During a post-test, participants then tried to tap in sync with visual or auditory pacers. TRE manifested itself as a compensatory shift in the tap-pacer asynchrony (a larger anticipation error after exposure to delayed feedback). In experiment 1, MA and MV feedback were either both synchronous (MV-sync and MA-sync) or both delayed (MV-delay and MA-delay), whereas in experiment 2, different delays were mixed across alternating trials (MV-sync and MA-delay or MV-delay and MA-sync). Exposure to consistent delays induced equally large TREs for auditory and visual pacers with similar build-up courses. However, with mixed delays, we found that synchronized sounds erased MV-TRE, but synchronized flashes did not erase MA-TRE. These results suggest that similar mechanisms underlie MA- and MV-TRE, but that auditory feedback is more potent than visual feedback to induce a rearrangement of motor-sensory timing. PMID:26610349
Parving, A; Salomon, G; Elberling, Claus
An investigation of the middle components of the auditory evoked response (10--50 msec post-stimulus) in a patient with auditory agnosia is reported. Bilateral temporal lobe infarctions were proved by means of brain scintigraphy, CAT scanning, and regional cerebral blood flow measurements. The mi...
A study on the dynamic exploration of the auditory pathway is presented, in which technetium-99m hexamethylpropylene amine oxime single-photon emission computed tomography (SPET) was used in volunteers with normal hearing. Changes in 99mTc-HMPAO distribution were calculated using a region of interest/whole-brain count ratio. The results showed a temporal perfusion increment of 17% (right) and 19% (left) during tonal supraliminal stimulation, which was significantly different from the control ROI. Sensitivity tests of the method are required before any clinical application.
Ozmeral, Erol J; Eddins, Ann C; Frisina, D Robert; Eddins, David A
The auditory system relies on extraordinarily precise timing cues for the accurate perception of speech, music, and object identification. Epidemiological research has documented the age-related progressive decline in hearing sensitivity that is known to be a major health concern for the elderly. Although smaller investigations indicate that auditory temporal processing also declines with age, such measures have not been included in larger studies. Temporal gap detection thresholds (TGDTs; an index of auditory temporal resolution) measured in 1071 listeners (aged 18-98 years) were shown to decline at a minimum rate of 1.05 ms (15%) per decade. Age was a significant predictor of TGDT when controlling for audibility (partial correlation) and when restricting analyses to persons with normal-hearing sensitivity (n = 434). The TGDTs were significantly better for males (3.5 ms; 51%) than females when averaged across the life span. These results highlight the need for indices of temporal processing in diagnostics, as treatment targets, and as factors in models of aging. PMID:27255816
Li, Weifeng; Chen, Ziyi; Yan, Nan; Jones, Jeffery A.; Guo, Zhiqiang; Huang, Xiyan; Chen, Shaozhen; Liu, Peng; Liu, Hanjun
Temporal lobe epilepsy (TLE) is the most common drug-refractory focal epilepsy in adults. Previous research has shown that patients with TLE exhibit decreased performance in listening to speech sounds and deficits in the cortical processing of auditory information. Whether TLE compromises auditory-motor integration for voice control, however, remains largely unknown. To address this question, event-related potentials (ERPs) and vocal responses to vocal pitch errors (1/2 or 2 semitones upward) heard in auditory feedback were compared across 28 patients with TLE and 28 healthy controls. Patients with TLE produced significantly larger vocal responses but smaller P2 responses than healthy controls. Moreover, patients with TLE exhibited a positive correlation between vocal response magnitude and baseline voice variability and a negative correlation between P2 amplitude and disease duration. Graphical network analyses revealed a disrupted neuronal network for patients with TLE with a significant increase of clustering coefficients and path lengths as compared to healthy controls. These findings provide strong evidence that TLE is associated with an atypical integration of the auditory and motor systems for vocal pitch regulation, and that the functional networks that support the auditory-motor processing of pitch feedback errors differ between patients with TLE and healthy controls. PMID:27356768
Kanai, Kenichi; Ikeda, Kazuo; Tayama, Tadayuki
This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2 and 3, temporal order judgment tasks were performed to examine the effect. In Experiment 1 and 2, a cue tone was presented to either the left or right ear, followed by sequential presentation of two target tones. The subjects judged the order of presentation of the target tones. The results showed that subjects heard both tones simultaneously when the target tone, which wa...
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Bigelow, James; Ng, Chi-Wing; Poremba, Amy
Dorsal temporal pole (dTP) is a cortical region at the rostral end of the superior temporal gyrus that forms part of the ventral auditory object processing pathway. Anatomical connections with frontal and medial temporal areas, as well as a recent single-unit recording study, suggest this area may be an important part of the network underlying auditory working memory (WM). To further elucidate the role of dTP in auditory WM, local field potentials (LFPs) were recorded from the left dTP region of two rhesus macaques during an auditory delayed matching-to-sample (DMS) task. Sample and test sounds were separated by a 5-s retention interval, and a behavioral response was required only if the sounds were identical (match trials). Sensitivity of auditory evoked responses in dTP to behavioral significance and context was further tested by passively presenting the sounds used as auditory WM memoranda both before and after the DMS task. Average evoked potentials (AEPs) for all cue types and phases of the experiment comprised two small-amplitude early onset components (N20, P40), followed by two broad, large-amplitude components occupying the remainder of the stimulus period (N120, P300), after which a final set of components were observed following stimulus offset (N80OFF, P170OFF). During the DMS task, the peak amplitude and/or latency of several of these components depended on whether the sound was presented as the sample or test, and whether the test matched the sample. Significant differences were also observed among the DMS task and passive exposure conditions. Comparing memory-related effects in the LFP signal with those obtained in the spiking data raises the possibility some memory-related activity in dTP may be locally produced and actively generated. The results highlight the involvement of dTP in auditory stimulus identification and recognition and its sensitivity to the behavioral significance of sounds in different contexts. This article is part of a Special
Brewer, Carmen C; Zalewski, Christopher K; King, Kelly A; Zobay, Oliver; Riley, Alison; Ferguson, Melanie A; Bird, Jonathan E; McCabe, Margaret M; Hood, Linda J; Drayna, Dennis; Griffith, Andrew J; Morell, Robert J; Friedman, Thomas B; Moore, David R
Recent insight into the genetic bases for autism spectrum disorder, dyslexia, stuttering, and language disorders suggest that neurogenetic approaches may also reveal at least one etiology of auditory processing disorder (APD). A person with an APD typically has difficulty understanding speech in background noise despite having normal pure-tone hearing sensitivity. The estimated prevalence of APD may be as high as 10% in the pediatric population, yet the causes are unknown and have not been explored by molecular or genetic approaches. The aim of our study was to determine the heritability of frequency and temporal resolution for auditory signals and speech recognition in noise in 96 identical or fraternal twin pairs, aged 6-11 years. Measures of auditory processing (AP) of non-speech sounds included backward masking (temporal resolution), notched noise masking (spectral resolution), pure-tone frequency discrimination (temporal fine structure sensitivity), and nonsense syllable recognition in noise. We provide evidence of significant heritability, ranging from 0.32 to 0.74, for individual measures of these non-speech-based AP skills that are crucial for understanding spoken language. Identification of specific heritable AP traits such as these serves as a basis to pursue the genetic underpinnings of APD by identifying genetic variants associated with common AP disorders in children and adults. PMID:26883091
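Twin-based heritability estimates in the range reported above are classically obtained by comparing identical (MZ) and fraternal (DZ) twin correlations via Falconer's formula, h² = 2(r_MZ − r_DZ). The study likely fitted a full variance-components model; the textbook shortcut can be sketched as:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's estimate of narrow-sense heritability from
    monozygotic and dizygotic twin-pair correlations, clamped
    to the valid range [0, 1]."""
    return max(0.0, min(1.0, 2.0 * (r_mz - r_dz)))
```

For example, an MZ correlation of 0.70 against a DZ correlation of 0.45 yields an estimate of 0.50, squarely inside the 0.32-0.74 range reported.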
Full Text Available No other modality is more frequently represented in the prefrontal cortex than the auditory, but the role of auditory information in prefrontal functions is not well understood. Pathways from auditory association cortices reach distinct sites in the lateral, orbital, and medial surfaces of the prefrontal cortex in rhesus monkeys. Among prefrontal areas, frontopolar area 10 has the densest interconnections with auditory association areas, spanning a large antero-posterior extent of the superior temporal gyrus from the temporal pole to auditory parabelt and belt regions. Moreover, auditory pathways make up the largest component of the extrinsic connections of area 10, suggesting a special relationship with the auditory modality. Here we review anatomic evidence showing that frontopolar area 10 is indeed the main frontal “auditory field” as the major recipient of auditory input in the frontal lobe and chief source of output to auditory cortices. Area 10 is thought to be the functional node for the most complex cognitive tasks of multitasking and keeping track of information for future decisions. These patterns suggest that the auditory association links of area 10 are critical for complex cognition. The first part of this review focuses on the organization of prefrontal-auditory pathways at the level of the system and the synapse, with a particular emphasis on area 10. Then we explore ideas on how the elusive role of area 10 in complex cognition may be related to the specialized relationship with auditory association cortices.
Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten
A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass modulation filterbank, a constant-variance internal noise, and an optimal detector stage. The model was evaluated in experimental conditions that reflect, to a different degree, effects of compression as well as spectral and temporal resolution in auditory processing. The experiments include intensity...
Liberman, Tamara; Velluti, Ricardo A; Pedemonte, Marisa
The hippocampal theta rhythm is associated with the processing of sensory systems such as touch, smell, vision and hearing, as well as with motor activity, the modulation of autonomic processes such as cardiac rhythm, and learning and memory processes. The discovery of temporal correlation (phase locking) between the theta rhythm and both visual and auditory neuronal activity has led us to postulate the participation of such rhythm in the temporal processing of sensory information. In addition, changes in attention can modify both the theta rhythm and the auditory and visual sensory activity. The present report tested the hypothesis that the temporal correlation between auditory neuronal discharges in the inferior colliculus central nucleus (ICc) and the hippocampal theta rhythm could be enhanced by changes in sensory stimulation. We presented chronically implanted guinea pigs with auditory stimuli that varied over time, and recorded the auditory response during wakefulness. It was observed that the stimulation shifts were capable of producing the temporal phase correlations between the theta rhythm and the ICc unit firing, and they differed depending on the stimulus change performed. Such correlations disappeared approximately 6 s after the change presentation. Furthermore, the power of the hippocampal theta rhythm increased in half of the cases presented with a stimulation change. Based on these data, we propose that the degree of correlation between the unitary activity and the hippocampal theta rhythm varies with--and therefore may signal--stimulus novelty. PMID:19716364
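Phase locking between unit firing and an ongoing rhythm, as examined above, is commonly quantified by vector strength: each spike is mapped to a phase of the rhythm, and the length of the mean resultant vector (0 = uniform phases, 1 = perfect locking) is computed. A sketch assuming a constant theta frequency (the paper's own statistic may differ):

```python
import math

def vector_strength(spike_times, rhythm_freq):
    """Mean resultant length of spike phases relative to a rhythm of
    fixed frequency: 1.0 for perfect phase locking, ~0 for no locking."""
    phases = [2.0 * math.pi * rhythm_freq * t for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)
```

Spikes landing at a fixed phase of a 6-Hz theta cycle yield a value near 1, while spikes spread evenly across the cycle yield a value near 0.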
Bizley, Jennifer K.; King, Andrew J
Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...
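The mutual information between stimulus azimuth and neural response referenced above can be estimated with a simple plug-in estimator over discrete stimulus labels and discretized (e.g., binned spike-count) responses. A sketch under those assumptions (bias corrections typically applied in such studies are omitted):

```python
from collections import Counter
import math

def mutual_information(stimuli, responses):
    """Plug-in estimate (in bits) of the mutual information between
    paired discrete stimulus labels and discretized response values."""
    n = len(stimuli)
    joint = Counter(zip(stimuli, responses))
    px = Counter(stimuli)
    py = Counter(responses)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x, y) * log2( p(x, y) / (p(x) * p(y)) )
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

A response that perfectly discriminates two equiprobable azimuths carries 1 bit; a response independent of azimuth carries 0 bits.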
Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of the MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e., rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.
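The MMN described above is computed as a deviant-minus-standard difference of averaged epochs. A minimal sketch; the epoch counts, sampling rate, analysis window, and synthetic noise data are hypothetical placeholders, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                                 # assumed sampling rate in Hz
t = np.arange(-0.1, 0.5, 1 / fs)         # epoch from -100 ms to +500 ms

# Hypothetical epoched data: trials x time samples (noise stands in for EEG).
standard_epochs = rng.normal(0.0, 1.0, (200, t.size))
deviant_epochs = rng.normal(0.0, 1.0, (40, t.size))

# The MMN is the deviant-minus-standard difference of the averaged responses,
# typically largest roughly 100-250 ms after deviance onset.
mmn_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Quantify the MMN as the mean amplitude in a window of interest.
window = (t >= 0.1) & (t <= 0.25)
mmn_amplitude = mmn_wave[window].mean()
print(f"MMN mean amplitude in 100-250 ms window: {mmn_amplitude:.3f} uV")
```

With real data the significance of `mmn_amplitude` would be assessed against a noise estimate, e.g. across subjects or via permutation.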
Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds since it can enhance their audibility when embedded in similar background interferers, a phenomenon referred to as comodulation masking release (CMR). Knowledge of the auditory processing of amplitude modulations therefore provides crucial information for a better understanding of how the auditory system analyses acoustic scenes. …
List, Alexandra; Justus, Timothy
Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli were…
Meng, Xiangzhi; Sai, Xiaoguang; Wang, Cixin; Wang, Jue; Sha, Shuying; Zhou, Xiaolin
By measuring behavioural performance and event-related potentials (ERPs), this study investigated the extent to which Chinese school children's reading development is influenced by their skills in auditory, speech, and temporal processing. In Experiment 1, 102 normal school children's performance in pure tone temporal order judgment, tone frequency discrimination, temporal interval discrimination and composite tone pattern discrimination was measured. Results showed that children's auditory processing skills correlated significantly with their reading fluency, phonological awareness, word naming latency, and the number of Chinese characters learned. Regression analyses found that tone temporal order judgment, temporal interval discrimination and composite tone pattern discrimination could account for 32% of the variance in phonological awareness. Controlling for the effect of phonological awareness, auditory processing measures still contributed significantly to variance in reading fluency and character naming. In Experiment 2, mismatch negativities (MMNs) in event-related brain potentials were recorded from dyslexic children and matched normal children while they listened passively to Chinese syllables and auditory stimuli composed of pure tones. The two groups did not differ in MMN to stimuli deviant in pure tone frequency or in Chinese lexical tones, but dyslexic children showed smaller MMNs to stimuli deviant in the initial consonants or vowels of Chinese syllables and to stimuli deviant in the temporal structure of composite tone patterns. These results suggest that Chinese dyslexic children have deficits in auditory temporal processing as well as in linguistic processing, and that auditory and temporal processing is possibly as important to the reading development of children in a logographic writing system as in an alphabetic system. PMID:16355749
Irvine, Dexter R. F.
The past 20 years have seen substantial changes in our view of the nature of the processing carried out in auditory cortex. Some processing of a cognitive nature, previously attributed to higher order “association” areas, is now considered to take place in auditory cortex itself. One argument adduced in support of this view is the evidence indicating a remarkable degree of plasticity in the auditory cortex of adult animals. Such plasticity has been demonstrated in a wide range of paradigms, i...
Mei, Hui-Xian; Cheng, Liang; Chen, Qi-Cai
In the auditory pathway, the inferior colliculus (IC) is a major center for temporal and spectral integration of auditory information. There are widespread neural interactions within one (unilateral) IC and between the two (bilateral) ICs that can modulate auditory signal processing, such as the amplitude and frequency selectivity of IC neurons. These interactions are either inhibitory or excitatory, mediated mostly by γ-aminobutyric acid (GABA) and glutamate, respectively, with inhibitory interactions in the majority. This imbalance between excitatory and inhibitory projections plays an important role in the formation of unilateral auditory dominance and sound localization, and the interactions within one IC and between the two ICs provide an adjustable and plastic modulation pattern for auditory signal processing. PMID:23626523
Ding, Nai; Simon, Jonathan Z.
Natural sounds such as speech contain multiple levels and multiple types of temporal modulations. Because of nonlinearities of the auditory system, however, the neural response to multiple, simultaneous temporal modulations cannot be predicted from the neural responses to single modulations. Here we show the cortical neural representation of an auditory stimulus simultaneously frequency modulated (FM) at a high rate, fFM ≈ 40 Hz, and amplitude modulated (AM) at a slow rate, fAM …
Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D; Chait, Maria
To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence (the coincidence of sound elements in and across time) is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals ("stochastic figure-ground": SFG) comprised a sequence of brief broadband chords containing random pure tone components that vary from one chord to another. Occasional tone repetitions across chords are perceived as "figures" popping out of a stochastic "ground." Magnetoencephalography (MEG) measurements in naïve, distracted human subjects revealed robust evoked responses, commencing about 150 ms after figure onset, that reflect the emergence of the "figure" from the randomly varying "ground." Neural sources underlying this bottom-up driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that this area, outside the "classic" auditory system, is also involved in the early stages of auditory scene analysis. PMID:27325682
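The SFG stimulus construction described above (random chords with a repeating "figure" subset) can be sketched roughly as follows; the chord duration, component counts, frequency pool, and figure-onset chord are illustrative choices, not the study's exact parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 16000                                 # assumed sample rate
chord_dur = 0.05                           # 50 ms chords (illustrative)
n_chords = 20
freq_pool = np.geomspace(200, 7000, 30)    # candidate pure-tone frequencies

def make_chord(freqs):
    """Sum equal-amplitude pure tones, normalized by component count."""
    t = np.arange(int(fs * chord_dur)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

# "Figure": a fixed set of components repeated across chords;
# "ground": components redrawn at random for every chord.
figure_freqs = rng.choice(freq_pool, size=4, replace=False)
chords = []
for i in range(n_chords):
    ground = rng.choice(freq_pool, size=8, replace=False)
    freqs = np.concatenate([ground, figure_freqs]) if i >= 10 else ground
    chords.append(make_chord(freqs))

stimulus = np.concatenate(chords)          # figure "pops out" halfway through
```

In a real experiment the chord boundaries would also be amplitude-ramped to avoid clicks; that detail is omitted here for brevity.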
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291
Khavarghazalani, Bahare; Farahani, Farhad; Emadi, Maryam; Hosseni Dastgerdi, Zahra
Conclusion The study results indicate that children with a history of otitis media with effusion (OME) suffer from auditory processing disorder to some degree. The findings support the hypothesis that fluctuating hearing loss may affect central auditory processing during critical periods. Objectives Evidence suggests that prolonged OME in children can result in an auditory processing disorder, presumably because hearing has been disrupted during an important developmental period. A lack of auditory stimulation leads to the abnormal development of the hearing pathways in the brain. The aim of the present study was to determine the effects of OME on binaural auditory function and auditory temporal processing. Method In the present study, the dichotic digit test (DDT) was used for binaural hearing, and the gap in noise (GIN) test was used to evaluate temporal hearing processing. Results The average values of GIN differed significantly between children with a history of OME and normal controls (p < 0.001). The mean values of the DDT score were significantly different between the two groups (p = 0.002). PMID:26881324
Granier-Deferre, Carolyn; Ribeiro, Aurelie; Jacquet, Anne-Yvonne; Bassereau, Sophie
The perception of speech and music requires processing of variations in spectra and amplitude over different time intervals. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, but whether they can process complex auditory streams, such as speech sequences and more specifically their temporal variations, fast or…
Hautus, Michael J; Setchell, Gregory J; Waldie, Karen E; Kirk, Ian J
Individuals with developmental dyslexia show impairments in processing that require precise timing of sensory events. Here, we show that in a test of auditory temporal acuity (a gap-detection task) children ages 6-9 years with dyslexia exhibited a significant deficit relative to age-matched controls. In contrast, this deficit was not observed in groups of older reading-impaired individuals (ages 10-11 years; 12-13 years) or in adults (ages 23-25 years). It appears, therefore, that early temporal resolution deficits in those with reading impairments may significantly ameliorate over time. However, the occurrence of an early deficit in temporal acuity may be antecedent to other language-related perceptual problems (particularly those related to phonological processing) that persist after the primary deficit has resolved. This result suggests that if remedial interventions targeted at temporal resolution deficits are to be effective, the early detection of the deficit and early application of the remedial programme is especially critical. PMID:12625375
Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed
Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is an auditory ability to detect sound sources in space and can help to differentiate the desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of (C)APD. Subjects and Methods In this analytical interventional study, 60 children suspected of (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). The spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results This study showed that in the training group, the mSAAT score and the spatial WRS in noise (p≤0.001) improved significantly after the auditory lateralization training. Conclusions We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research.
Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice
Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932
Flávia Duarte Liporaci
Full Text Available PURPOSE: To assess the auditory processing of elderly patients using the temporal resolution Gaps-in-Noise test, and to verify whether the presence of hearing loss influences performance on this test. METHODS: Sixty-five elderly listeners, aged between 60 and 79 years, were assessed with the Gaps-in-Noise test. To select the sample, the following procedures were carried out: anamnesis, mini-mental state examination, and basic audiological evaluation. The participants were first studied as a single group and then divided into three groups according to audiometric results at the frequencies of 500 Hz and 1, 2, 3, 4, and 6 kHz: G1 with normal hearing, G2 with mild hearing loss, and G3 with moderate hearing loss. RESULTS: Across the whole sample, the mean gap detection threshold and percentage of correct responses were 8.1 ms and 52.6% for the right ear, and 8.2 ms and 52.2% for the left ear. In G1, these measures were 7.3 ms and 57.6% for the right ear, and 7.7 ms and 55.8% for the left ear. In G2, they were 8.2 ms and 52.5% for the right ear, and 7.9 ms and 53.2% for the left ear. In G3, they were 9.2 ms and 45.2% for both ears. CONCLUSION: The presence of hearing loss raised gap detection thresholds and lowered the percentage of correct responses on the Gaps-in-Noise test.
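Gaps-in-Noise scoring reduces to the two numbers reported in abstracts like the one above: a gap detection threshold and a percentage of correct identifications. A minimal sketch, assuming the common six-presentations-per-gap-duration format and a four-of-six threshold criterion; the hit counts below are made up:

```python
# Hypothetical hit counts per gap duration (ms), 6 presentations each.
hits = {2: 1, 3: 2, 4: 3, 5: 4, 6: 5, 8: 6, 10: 6, 12: 6, 15: 6, 20: 6}

total_gaps = 6 * len(hits)
percent_correct = 100 * sum(hits.values()) / total_gaps

# Approximate threshold: shortest gap identified in at least 4 of 6 presentations.
threshold_ms = min(g for g, h in hits.items() if h >= 4)

print(f"Percent correct: {percent_correct:.1f}%")            # 75.0%
print(f"Approximate gap detection threshold: {threshold_ms} ms")  # 5 ms
```

Longer thresholds and lower percent-correct scores, as in groups G2 and G3 above, indicate poorer temporal resolution.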
Gutschalk, Alexander; Uppenkamp, Stefan; Riedel, Bernhard; Bartsch, Andreas; Brandt, Tobias; Vogt-Schaden, Marlies
Based on results from functional imaging, cortex along the superior temporal sulcus (STS) has been suggested to subserve phoneme and pre-lexical speech perception. For vowel classification, both superior temporal plane (STP) and STS areas have been suggested relevant. Lesion of bilateral STS may conversely be expected to cause pure word deafness and possibly also impaired vowel classification. Here we studied a patient with bilateral STS lesions caused by ischemic strokes and relatively intact medial STPs to characterize the behavioral consequences of STS loss. The patient showed severe deficits in auditory speech perception, whereas his speech production was fluent and communication by written speech was grossly intact. Auditory-evoked fields in the STP were within normal limits on both sides, suggesting that major parts of the auditory cortex were functionally intact. Further studies showed that the patient had normal hearing thresholds and only mild disability in tests for telencephalic hearing disorder. Prominent deficits were discovered in an auditory-object classification task, where the patient performed four standard deviations below the control group. In marked contrast, performance in a vowel-classification task was intact. Auditory evoked fields showed enhanced responses for vowels compared to matched non-vowels within normal limits. Our results are consistent with the notion that cortex along STS is important for auditory speech perception, although it does not appear to be entirely speech specific. Formant analysis and single vowel classification, however, appear to be already implemented in auditory cortex on the STP. PMID:26343343
Favrot, Sylvain Emmanuel
A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic, versatile research environment for investigating auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows full control of the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been…
Miller, Carol A.
Purpose: The purpose of this article is to provide information that will assist readers in understanding and interpreting research literature on the role of auditory processing in communication disorders. Method: A narrative review was used to summarize and synthesize the literature on auditory processing deficits in children with auditory…
We study limits for the detection and estimation of weak sinusoidal signals in the primary part of the mammalian auditory system using a stochastic Fitzhugh-Nagumo (FHN) model and an action-reaction model for synaptic plasticity. Our overall model covers the chain from a hair cell to a point just after the synaptic connection with a cell in the cochlear nucleus. The information processing performance of the system is evaluated using so-called phi-divergences from statistics, which quantify the dissimilarity between probability measures and are intimately related to a number of fundamental limits in statistics and information theory (IT). We show that there exists a set of parameters that can optimize several important phi-divergences simultaneously and that this set corresponds to a constant quiescent firing rate (QFR) of the spiral ganglion neuron. The optimal value of the QFR is frequency dependent but is essentially independent of the amplitude of the signal (for small amplitudes). Consequently, optimal proce…
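For discrete distributions, the phi-divergence family mentioned above takes the form D_phi(P||Q) = sum_x q(x) phi(p(x)/q(x)) for a convex phi with phi(1) = 0. A minimal sketch with two standard members of the family; the example distributions are made up:

```python
import numpy as np

def phi_divergence(p, q, phi):
    """D_phi(P||Q) = sum_x q(x) * phi(p(x)/q(x)) for discrete P, Q with q > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * phi(p / q)))

# Two members of the phi-divergence family:
kl = lambda t: t * np.log(t)                 # Kullback-Leibler divergence
hellinger = lambda t: (np.sqrt(t) - 1) ** 2  # twice the squared Hellinger distance

p = [0.5, 0.3, 0.2]   # e.g., spike-count distribution with a signal present
q = [0.4, 0.4, 0.2]   # e.g., distribution at the quiescent firing rate

print(phi_divergence(p, q, kl))
print(phi_divergence(p, q, hellinger))
```

Both quantities are nonnegative and vanish exactly when P = Q, which is what makes them usable as detection-performance criteria.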
Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten
The ability to perceptually separate acoustic sources and focus one’s attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422…] … activity across frequency. Using this approach, the described model is able to quantitatively account for classical streaming phenomena relying on frequency separation and tone presentation rate, such as the temporal coherence boundary and the fission boundary [L. P. A. S. van Noorden, doctoral dissertation, Institute for Perception Research, Eindhoven, NL (1975)]. The same model also accounts for the perceptual grouping of distant spectral components in the case of synchronous presentation. The most essential components of the front-end and back-end processing in the framework of the presented…
Murphy, Cristina F B; Rabelo, Camila M; Silagi, Marcela L; Mansur, Letícia L; Schochat, Eliane
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. The factor "years of schooling" was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was also correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than auditory sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills. PMID:27013958
Atcherson, Samuel R; Nagaraj, Naveen K; Kennett, Sarah E W; Levisee, Meredith
Although there are many reported age-related declines in the human body, the notion that a central auditory processing deficit exists in older adults has not always been clear. Hearing loss and both structural and functional central nervous system changes with advancing age contribute to how we listen, hear, and process auditory information. Even older adults with normal or near-normal hearing sensitivity may exhibit age-related central auditory processing deficits as measured behaviorally and/or electrophysiologically. The purpose of this article is to provide an overview of assessment and rehabilitative approaches for central auditory processing deficits in older adults. It is hoped that the information presented here will help clinicians with older adult patients who do not exhibit the typical auditory processing behaviors of others at the same age and with comparable hearing sensitivity, all in the absence of other health-related conditions. PMID:27516715
Full Text Available The neural response to a stimulus is influenced by endogenous factors such as expectation and attention. Current research suggests that expectation and attention exert their effects in opposite directions: expectation decreases neural activity in sensory areas, while attention increases it. However, expectation and attention are usually studied either in isolation or confounded with each other. A recent study suggests that expectation and attention may act jointly on sensory processing, by increasing the neural response to expected events when they are attended, but decreasing it when they are unattended. Here we test this hypothesis in an auditory temporal cueing paradigm using magnetoencephalography in humans. In our study participants attended to, or away from, tones that could arrive at expected or unexpected moments. We found a decrease in auditory beta band synchrony to expected (versus unexpected) tones if they were unattended, but no difference if they were attended. Modulations in beta power were already evident prior to the expected onset times of the tones. These findings suggest that expectation and attention jointly modulate sensory processing.
Bailey, Frank S.; Yocum, Russell G.
The purpose of this personal experience as a narrative investigation is to describe how an auditory processing learning disability exacerbated--and how spirituality and religiosity relieved--suicidal ideation, through the lived experiences of an individual born and raised in the United States. The study addresses: (a) how an auditory processing…
Talita Fortunato-Tavares; Caroline Nunes Rocha; Claudia Regina Furquim de Andrade; Débora Maria Befi-Lopes; Eliane Schochat; Arild Hestvik; Schwartz, Richard G.
TOPIC: Several studies suggest an association between specific language impairment (SLI) and deficits in auditory processing. Research provides evidence that the discrimination of brief stimuli is compromised in children with SLI. This deficit would lead to difficulties in developing the phonological skills necessary to map phonemes and to decode and encode words and sentences effectively and automatically. However, the correlation between temporal processing (TP) and language diso…
Koravand, Amineh; Jutras, Benoit
Purpose: The objective was to assess auditory sequential organization (ASO) ability in children with and without hearing loss. Method: Forty children 9 to 12 years old participated in the study: 12 with sensory hearing loss (HL), 12 with central auditory processing disorder (CAPD), and 16 with normal hearing. They performed an ASO task in which…
Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael
Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations. PMID:27310812
Anegawa, T; Hara, K; Yamamoto, K; Matsuda, M
… The Wechsler adult intelligence scale revealed a verbal IQ of 91 and a performance IQ of 100. Pure tone audiometry revealed bilateral, mild peripheral sensorineural hearing loss. Brainstem auditory evoked potentials were unrevealing. The EEG showed slow activities in the left temporoparietal region. Magnetic resonance imaging of the brain failed to reveal any relevant abnormalities except for an old hemorrhagic parietal infarct. The SPECT with Tc99m-HMPAO, however, showed reduced blood flow in the left temporal lobe including the first temporal convolution as well as in the left parietal lobe. Based on the SPECT findings, unilateral auditory hallucinations in our patient are considered to have resulted from the left temporal lobe ischemia. Our case indicates that unilateral auditory hallucinations may have a clinicoanatomical correlation with contralateral temporal lobe lesions. PMID:8821499
Recanzone, Gregg H.
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
Berns, Gregory S; Cook, Peter F; Foxley, Sean; Jbabdi, Saad; Miller, Karla L; Marino, Lori
The brains of odontocetes (toothed whales) look grossly different from those of their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this position induced by expansion of 'associative' regions in lateral and caudal directions. However, the precise location of the auditory cortex and its connections are still unknown. Here, we used a novel diffusion tensor imaging (DTI) sequence in archival post-mortem brains of a common dolphin (Delphinus delphis) and a pantropical dolphin (Stenella attenuata) to map their sensory and motor systems. Using thalamic parcellation based on traditionally defined regions for the primary visual (V1) and auditory cortex (A1), we found distinct regions of the thalamus connected to V1 and A1. But in addition to suprasylvian-A1, we report here, for the first time, that auditory cortex also exists in the temporal lobe, in a region near cetacean-A2 and possibly analogous to the primary auditory cortex in related terrestrial mammals (Artiodactyla). Using probabilistic tract tracing, we found a direct pathway from the inferior colliculus to the medial geniculate nucleus to the temporal lobe near the sylvian fissure. Our results demonstrate the feasibility of post-mortem DTI in archival specimens to answer basic questions in comparative neurobiology in a way that has not previously been possible and show a link between the cetacean auditory system and those of terrestrial mammals. Given that fresh cetacean specimens are relatively rare, the ability to measure connectivity in archival specimens opens up a plethora of possibilities for investigating neuroanatomy in cetaceans and other species.
Full Text Available BACKGROUND: We ordinarily perceive our voice sound as occurring simultaneously with vocal production, but the sense of simultaneity in vocalization can be easily interrupted by delayed auditory feedback (DAF). DAF causes normal people to have difficulty speaking fluently but helps people with stuttering to improve speech fluency. However, the underlying temporal mechanism for integrating the motor production of voice and the auditory perception of vocal sound remains unclear. In this study, we investigated the temporal tuning mechanism integrating vocal motor sensation and voice sounds under DAF with an adaptation technique. METHODS AND FINDINGS: Participants produced a single voice sound repeatedly with specific delay times of DAF (0, 66, or 133 ms) for three minutes to induce 'lag adaptation'. They then judged the simultaneity between motor sensation and the vocal sound given as feedback. We found that lag adaptation induced a shift in simultaneity responses toward the adapted auditory delays. This indicates that the temporal tuning mechanism in vocalization can be temporally recalibrated after prolonged exposure to delayed vocal sounds. Furthermore, we found that the temporal recalibration in vocalization can be affected by averaging delay times in the adaptation phase. CONCLUSIONS: These findings suggest vocalization is finely tuned by a temporal recalibration mechanism, which acutely monitors the integration of temporal delays between motor sensation and vocal sound.
Michael Gregory Heinz
Full Text Available While changes in cochlear frequency tuning are thought to play an important role in the perceptual difficulties of people with sensorineural hearing loss (SNHL), the possible role of temporal processing deficits remains less clear. Our knowledge of temporal envelope coding in the impaired cochlea is limited to two studies that examined auditory-nerve fiber responses to narrowband amplitude-modulated stimuli. In the present study, we used Wiener-kernel analyses of auditory-nerve fiber responses to broadband Gaussian noise in anesthetized chinchillas to quantify changes in temporal envelope coding with noise-induced SNHL. Temporal modulation transfer functions (TMTFs) and temporal windows of sensitivity to acoustic stimulation were computed from 2nd-order Wiener kernels and analyzed to estimate the temporal precision, amplitude, and latency of envelope coding. Noise overexposure was associated with slower (less negative) TMTF roll-off with increasing modulation frequency and reduced temporal window duration. The results show that at equal stimulus sensation level, SNHL increases the temporal precision of envelope coding by 20-30%. Furthermore, SNHL increased the amplitude of envelope coding by 50% in fibers with CFs from 1-2 kHz and decreased mean response latency by 0.4 ms. While a previous study of envelope coding demonstrated a similar increase in response amplitude, the present study is the first to show enhanced temporal precision. This new finding may relate to the use of a more complex stimulus with broad frequency bandwidth and a dynamic temporal envelope. Exaggerated neural coding of fast envelope modulations may contribute to perceptual difficulties in people with SNHL by acting as a distraction from more relevant acoustic cues, especially in fluctuating background noise. Finally, the results underscore the value of studying sensory systems with more natural, real-world stimuli.
Full Text Available Every sensation begins with the conversion of a sensory stimulus into the response of a receptor neuron. Typically, this involves a sequence of multiple biophysical processes that cannot all be monitored directly. In this work, we present an approach that is based on analyzing different stimuli that cause the same final output, here defined as the probability of the receptor neuron to fire a single action potential. Comparing such iso-response stimuli within the framework of nonlinear cascade models allows us to extract the characteristics of individual signal-processing steps with a temporal resolution much finer than the trial-to-trial variability of the measured output spike times. Applied to insect auditory receptor cells, the technique reveals the sub-millisecond dynamics of the eardrum vibration and of the electrical potential and yields a quantitative four-step cascade model. The model accounts for the tuning properties of this class of neurons and explains their high temporal resolution under natural stimulation. Owing to its simplicity and generality, the presented method is readily applicable to other nonlinear cascades and a large variety of signal-processing systems.
Full Text Available In this research, anatomical descriptions of the structure of the temporal bone and auditory ossicles were made based on dissection of ten guinea pigs. The results showed that the guinea pig temporal bone was similar to that of other animals and had three parts: squamous, tympanic, and petrous. The tympanic part was much better developed and consisted of an oval-shaped tympanic bulla with many recesses in the tympanic cavity. The auditory ossicles of the guinea pig consisted of three small bones: malleus, incus, and stapes; however, the head of the malleus and the body of the incus were fused, forming a malleoincudal complex. The average morphometric parameters showed that the malleus was 3.53 ± 0.22 mm in total length. In addition to the head and handle, the malleus had two distinct processes: lateral and muscular. The incus had a total length of 1.23 ± 0.02 mm. It had long and short crura, although the long crus was better developed than the short crus. The lenticular bone was a round bone that articulated with the long crus of the incus. The stapes had a total length of 1.38 ± 0.04 mm. The anterior crus (0.86 ± 0.08 mm) was larger than the posterior crus (0.76 ± 0.08 mm). It is concluded that, in the guinea pig, the malleus and the incus are fused, forming a malleoincudal complex, whereas in other animals these are separate bones. The stapes is larger and has a triangular shape, and the anterior and posterior crura are thicker than in other rodents. Therefore, for otological studies, the guinea pig is a good laboratory animal.
Rybalko, Natalia; Šuta, Daniel; Popelář, Jiří; Syka, Josef
Vol. 209, No. 1 (2010), pp. 123-130. ISSN 0166-4328. R&D Projects: GA ČR GA309/07/1336; GA MŠk(CZ) LC554. Institutional research plan: CEZ:AV0Z50390512. Keywords: auditory cortex * temporal discrimination * hemispheric lateralization. Subject RIV: FH - Neurology. Impact factor: 3.393, year: 2010
Zarchi, Omer; Avni, Chen; Attias, Josef; Frisch, Amos; Carmel, Miri; Michaelovsky, Elena; Green, Tamar; Weizman, Abraham; Gothelf, Doron
The neurophysiologic aberrations underlying the auditory hypersensitivity in Williams syndrome (WS) are not well defined. The P1-N1-P2 obligatory complex and mismatch negativity (MMN) response were investigated in 18 participants with WS, and the results were compared with those of 18 age- and gender-matched typically developing (TD) controls. Results revealed significantly higher amplitudes of both the P1-N1-P2 obligatory complex and the MMN response in the WS participants than in the TD controls. The P1-N1-P2 complex showed an age-dependent reduction in the TD but not in the WS participants. Moreover, a high P1-N1-P2 complex amplitude was associated with low verbal comprehension scores in WS. This investigation demonstrates that central auditory processing is hyperactive in WS. The increase in auditory brain responses of both the obligatory complex and the MMN response suggests aberrant processes of auditory encoding and discrimination in WS. Results also imply that auditory processing may undergo delayed or divergent maturation and may affect the development of high cognitive functioning in WS. PMID:25603839
Christensen-Dalsgaard, Jakob; Tang, Ye Zhong; Carr, Catherine E
Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (around 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted...... from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al., 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions...
Hertz, Uri; Amedi, Amir
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal 2 dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning from visual attenuations of the auditory cortex to auditory attenuations of the visual cortex. Secondly, associative areas changed their sensory response profile from strongest response for visual to that for auditory. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance in sensory areas and audiovisual convergence in associative area Middle Temporal Gyrus. These 2 factors allow for both stability and a fast, dynamic tuning of the system when required. PMID:24518756
Caroline Nunes Rocha-Muniz
Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All underwent the following tests: speech-in-noise test, Dichotic Digit test, and Pitch Pattern Sequencing test. RESULTS: The effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis showed that in all tests children from the APD and SLI groups had significantly poorer performance compared to the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study showed, in children with SLI, inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that the neural processes (required for auditory processing) are different between auditory processing and speech disorders.
Imaizumi, Kazuo; Priebe, Nicholas J.; Sharpee, Tatyana O.; Cheung, Steven W.; Schreiner, Christoph E.
A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: 1) the event-locked spike-timing precision, 2) the mean firing rate, and 3) the interspike interval (...
Fey, Marc E.; Richard, Gail J.; Geffner, Donna; Kamhi, Alan G.; Medwetsky, Larry; Paul, Diane; Ross-Swain, Deborah; Wallach, Geraldine P.; Frymark, Tobi; Schooling, Tracy
Purpose: In this systematic review, the peer-reviewed literature on the efficacy of interventions for school-age children with auditory processing disorder (APD) is critically evaluated. Method: Searches of 28 electronic databases yielded 25 studies for analysis. These studies were categorized by research phase (e.g., exploratory, efficacy) and…
Niels Chr. Hansen; Marcus T. Pearce
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners’ prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic e...
Full Text Available Humans and a few select insect and reptile species synchronise inter-individual behaviour without any time lag by predicting the time of future events rather than reacting to them. This is evident in music performance, dance, and drill. Although repetition of equal time intervals (i.e. isochrony) is the central principle for such prediction, this simple information is used in a flexible and complex way that accommodates multiples, subdivisions, and gradual changes of intervals. The scope of this flexibility remains largely uncharted, and the underlying mechanisms are a matter for speculation. Here I report an auditory illusion that highlights some aspects of this behaviour and that provides a powerful tool for its future study. A sound pattern is described that affords multiple alternative and concurrent rates of recurrence (temporal levels). An algorithm that systematically controls time intervals and the relative loudness among these levels creates an illusion that the perceived rate speeds up or slows down infinitely. Human participants synchronised hand movements with their perceived rate of events, and exhibited a change in their movement rate that was several times larger than the physical change in the sound pattern. The illusion demonstrates the duality between the external signal and the internal predictive process, such that people's tendency to follow their own subjective pulse overrides the overall properties of the stimulus pattern. Furthermore, accurate synchronisation with sounds separated by more than 8 s demonstrates that multiple temporal levels are employed for facilitating temporal organisation and integration by the human brain. A number of applications of the illusion and the stimulus pattern are suggested.
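The loudness-weighting idea behind such an illusion can be sketched in a few lines. This is an illustrative toy, not the study's algorithm: the function names, the octave spacing of the temporal levels, and the Gaussian window width are all our own assumptions. A loudness window in log-rate space is centred on a "focus" rate; sliding the focus upward makes faster levels dominate perception while the physical pulse rates stay fixed.

```python
import math

def level_weights(rates_hz, focus_hz, sigma_oct=1.0):
    """Loudness weight for each temporal level: a Gaussian in log2-rate
    space peaking at focus_hz, so one level dominates perception."""
    return [math.exp(-(math.log2(r / focus_hz) ** 2) / (2 * sigma_oct ** 2))
            for r in rates_hz]

# Four concurrent, octave-spaced pulse rates (Hz) - hypothetical values
rates = [0.5, 1.0, 2.0, 4.0]
w_slow = level_weights(rates, focus_hz=1.0)  # 1 Hz level is loudest
w_fast = level_weights(rates, focus_hz=2.0)  # 2 Hz level is loudest
# Drifting the focus upward over time shifts dominance to ever-faster
# levels, so the perceived tempo can "accelerate" indefinitely.
```

Because the weights wrap around octave-related levels, the focus can be cycled endlessly, which is what makes the perceived speeding-up unbounded.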
Sidiropoulos, Kyriakos; Ackermann, Hermann; Wannke, Michael; Hertrich, Ingo
This study investigates the temporal resolution capacities of the central-auditory system in a subject (NP) suffering from repetition conduction aphasia. More specifically, the patient was asked to detect brief gaps between two stretches of broadband noise (gap detection task) and to evaluate the duration of two biphasic (WN-3) continuous noise…
O'Sullivan, James A; Shamma, Shihab A; Lalor, Edmund C
The human brain has evolved to operate effectively in highly complex acoustic environments, segregating multiple sound sources into perceptually distinct auditory objects. A recent theory seeks to explain this ability by arguing that stream segregation occurs primarily due to the temporal coherence of the neural populations that encode the various features of an individual acoustic source. This theory has received support from both psychoacoustic and functional magnetic resonance imaging (fMRI) studies that use stimuli which model complex acoustic environments. Termed stochastic figure-ground (SFG) stimuli, they are composed of a "figure" and background that overlap in spectrotemporal space, such that the only way to segregate the figure is by computing the coherence of its frequency components over time. Here, we extend these psychoacoustic and fMRI findings by using the greater temporal resolution of electroencephalography to investigate the neural computation of temporal coherence. We present subjects with modified SFG stimuli wherein the temporal coherence of the figure is modulated stochastically over time, which allows us to use linear regression methods to extract a signature of the neural processing of this temporal coherence. We do this under both active and passive listening conditions. Our findings show an early effect of coherence during passive listening, lasting from ∼115 to 185 ms post-stimulus. When subjects are actively listening to the stimuli, these responses are larger and last longer, up to ∼265 ms. These findings provide evidence for early and preattentive neural computations of temporal coherence that are enhanced by active analysis of an auditory scene. PMID:25948273
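The regression approach described above can be illustrated with a toy simulation. Everything here is an invented stand-in for the authors' pipeline: the sampling rate, the exponential response kernel, and the noise level are assumptions chosen only to show how a lagged least-squares fit recovers a response signature from a stochastically modulated coherence signal.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100                                     # toy sampling rate (Hz)
coh = rng.standard_normal(fs * 60)           # stochastic coherence time course
true_trf = np.exp(-np.arange(30) / 8.0)      # invented 300-ms response kernel
eeg = np.convolve(coh, true_trf)[:coh.size]  # simulated neural response
eeg += 0.5 * rng.standard_normal(eeg.size)   # sensor noise

# Lagged design matrix: column k holds the coherence signal delayed by k
# samples, so the least-squares weights form an estimated response kernel.
lags = 30
X = np.column_stack([np.concatenate([np.zeros(k), coh[:coh.size - k]])
                     for k in range(lags)])
beta, *_ = np.linalg.lstsq(X, eeg, rcond=None)
# beta now approximates true_trf: the neural signature of coherence.
```

The same linear-regression logic applies when the regressor is the stochastic temporal-coherence level of an SFG figure and the response is EEG.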
Balcı, Fuat; Simen, Patrick
The processing dynamics underlying temporal decisions and the response times they generate have received little attention in the study of interval timing. In contrast, models of other simple forms of decision making have been extensively investigated using response times, leading to a substantial disconnect between temporal and non-temporal decision theories. An overarching decision-theoretic framework that encompasses existing, non-temporal decision models may, however, account both for interval timing itself and for time-based decision-making. We sought evidence for this framework in the temporal discrimination performance of humans tested on the temporal bisection task. In this task, participants retrospectively categorized experienced stimulus durations as short or long based on their perceived similarity to two remembered reference durations and were rewarded only for correct categorization of these references. Our analysis of choice proportions and response times suggests that a two-stage, sequential diffusion process, parameterized to maximize earned rewards, can account for salient patterns of bisection performance. The first diffusion stage times intervals by accumulating an endogenously noisy clock signal; the second stage makes decisions about the first-stage temporal representation by accumulating first-stage evidence corrupted by endogenous noise. Reward-maximization requires that the second-stage accumulation rate and starting point be based on the state of the first-stage timer at the end of the stimulus duration, and that estimates of non-decision-related delays should decrease as a function of stimulus duration. Results are in accord with these predictions and thus support an extension of the drift-diffusion model of static decision making to the domain of interval timing and temporal decisions. PMID:24726447
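The two-stage sequential diffusion account can be sketched with a minimal simulation. All parameter values below (the bisection criterion, noise levels, bound, and step size) are invented for demonstration, not fitted to the study's data; the point is only the structure: a noisy clock feeds its final state into a drift-diffusion decision.

```python
import random

def bisection_trial(duration, rng, criterion=0.6, threshold=1.0,
                    clock_noise=0.05, decision_noise=0.3, dt=0.01):
    """Toy two-stage diffusion model of temporal bisection.
    Stage 1 times the stimulus with an endogenously noisy clock;
    stage 2 runs a drift-diffusion decision whose drift is the
    stage-1 reading minus the bisection criterion."""
    # Stage 1: noisy clock accumulates while the stimulus is on
    timer, t = 0.0, 0.0
    while t < duration:
        timer += dt + rng.gauss(0.0, clock_noise) * dt ** 0.5
        t += dt
    # Stage 2: accumulate first-stage evidence, corrupted by endogenous
    # noise, to a symmetric bound; the bound's sign gives the category
    drift = timer - criterion
    evidence, rt = 0.0, 0.0
    while abs(evidence) < threshold:
        evidence += drift * dt + rng.gauss(0.0, decision_noise) * dt ** 0.5
        rt += dt
    return ("long" if evidence > 0 else "short", rt)

rng = random.Random(1)
short_trials = [bisection_trial(0.3, rng)[0] for _ in range(200)]
long_trials = [bisection_trial(0.9, rng)[0] for _ in range(200)]
# Durations well below the criterion are mostly judged "short",
# durations well above it mostly "long".
```

In this sketch, making the stage-2 drift depend on the stage-1 timer state is exactly the coupling the abstract describes; reward-maximizing starting points and duration-dependent non-decision time are omitted for brevity.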
Full Text Available The objective of the work was to assess the occurrence of central auditory processing disorders in children with dyslalia. Material and method. The material included 30 children aged 7-8 years under long-term speech therapy care due to articulation disorders. All the children underwent phoniatric and speech examination, including tonal and impedance audiometry, a speech therapist's consultation, and a psychologist's consultation. Electrophysiological recordings (N1, P1, N2, P2, P300) and a psychoacoustic test of central auditory functions (Frequency Pattern Test) were performed. Results. Analysis of the results revealed disorders in the process of sound frequency analysis and P300 wave latency prolongation in children with dyslalia. Conclusions. Auditory processing disorders may be significant in the development of correct articulation in children; they may also explain unsatisfactory results of long-term speech therapy.
Full Text Available Assemblies of vertically connected neurons in the cerebral cortex form information processing units (columns) that participate in the distribution and segregation of sensory signals. Despite well-accepted models of columnar architecture, functional mechanisms of inter-laminar communication remain poorly understood. Hence, the purpose of the present investigation was to examine the effects of sensory information features on columnar response properties. Using acute recording techniques, extracellular response activity was collected from the right hemisphere of eight mature cats (Felis catus). Recordings were conducted with multichannel electrodes that permitted the simultaneous acquisition of neuronal activity within primary auditory cortex columns. Neuronal responses to simple (pure tones), complex (noise bursts and frequency-modulated sweeps), and ecologically relevant (conspecific vocalizations) acoustic signals were measured. Collectively, the present investigation demonstrates that despite consistencies in neuronal tuning (characteristic frequency), irregularities in discharge activity between neurons of individual A1 columns increase as a function of spectral (signal complexity) and temporal (duration) acoustic variations.
Georgiou, George K.; Papadopoulos, Timothy C.; Zarouna, Elena; Parrila, Rauno
The purpose of this study was to examine if children with dyslexia learning to read a consistent orthography (Greek) experience auditory and visual processing deficits and if these deficits are associated with phonological awareness, rapid naming speed and orthographic processing. We administered measures of general cognitive ability, phonological…
LaPointe, Leonard L.; Heald, Gary R.; Stierwalt, Julie A. G.; Kemker, Brett E.; Maurice, Trisha
Objective: The effects of interference, competition, and distraction on cognitive processing are unclearly understood, particularly regarding type and intensity of auditory distraction across a variety of cognitive processing tasks. Method: The purpose of this investigation was to report two experiments that sought to explore the effects of types…
Heimrath, Kai; Fiene, Marina; Rufener, Katharina S; Zaehle, Tino
Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation, transcranial alternating current stimulation, and transcranial random noise stimulation has emerged. While the wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain focussing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory related disorders. PMID:27013969
Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc
Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets. PMID:26253323
Bicak, Mehmet M. A.
Detailed acoustic engineering models that explore noise propagation mechanisms associated with noise attenuation and transmission paths created when using hearing protectors such as earplugs and headsets in high noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data which provides explicit external ear, ear canal, middle ear ossicular bones and cochlea geometry. Results from these studies have enabled a greater understanding of hearing protector to flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast related impulse noise on human auditory mechanisms and brain tissue.
Niels Chr. Hansen
Full Text Available Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty - a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
Hansen, Niels Chr; Pearce, Marcus T
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty-a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music. PMID:25295018
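Predictive entropy of the kind used in this line of work can be illustrated with a toy bigram model. The corpus and function names below are invented for illustration; the study's actual model is a richer variable-order Markov model, but the entropy computation over the predictive distribution is the same idea.

```python
import math
from collections import Counter, defaultdict

def bigram_counts(melodies):
    """Count note-to-note transitions in a toy corpus."""
    counts = defaultdict(Counter)
    for melody in melodies:
        for prev, nxt in zip(melody, melody[1:]):
            counts[prev][nxt] += 1
    return counts

def predictive_entropy(counts, context):
    """Shannon entropy (bits) of the next-note distribution given the
    context: the model's uncertainty *before* the next event arrives."""
    dist = counts[context]
    total = sum(dist.values())
    return 0.0 + -sum((n / total) * math.log2(n / total)
                      for n in dist.values())

corpus = [["C", "D", "E", "C"], ["C", "D", "E", "G"], ["C", "E", "G", "C"]]
model = bigram_counts(corpus)
# "D" is always followed by "E" in this corpus: zero predictive uncertainty.
# "C" has two observed continuations (D twice, E once): higher entropy.
```

High-entropy and low-entropy melodic contexts in the study play the role of the "C" and "D" contexts here, just estimated over real repertoires rather than a three-melody toy corpus.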
Carroll, Christine A.; Boggs, Jennifer; O'Donnell, Brian F.; Shekhar, Anantha; Hetrick, William P.
Schizophrenia may be associated with a fundamental disturbance in the temporal coordination of information processing in the brain, leading to classic symptoms of schizophrenia such as thought disorder and disorganized and contextually inappropriate behavior. Despite the growing interest and centrality of time-dependent conceptualizations of the…
Angenstein, Nicole; Stadler, Jörg; Brechmann, André
Studies on active auditory intensity discrimination in humans showed equivocal results regarding the lateralization of processing. Whereas experiments with a moderate background found evidence for right lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left lateralized processing. With the present fMRI study, we compared the task dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with a loud background scanner noise and a fast low-angle shot (FLASH) sequence with a soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurement, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on the activity in the left auditory cortex. This means a massive reduction in background scanner noise still leads to a significant left lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise in contrast to studies with softer background cannot be fully explained by the MRI background noise. PMID:26778471
Barghathi, Hatem; Vojta, Thomas; Hoyos, José A.
We investigate the influence of time-varying environmental noise, i.e., temporal disorder, on the nonequilibrium phase transition of the contact process. Combining a real-time renormalization group, scaling theory, and large scale Monte-Carlo simulations in one and two dimensions, we show that the temporal disorder gives rise to an exotic critical point. At criticality, the effective noise amplitude diverges with increasing time scale, and the probability distribution of the density becomes infinitely broad, even on a logarithmic scale. Moreover, the average density and survival probability decay only logarithmically with time. This infinite-noise critical behavior can be understood as the temporal counterpart of infinite-randomness critical behavior in spatially disordered systems, but with exchanged roles of space and time. We also analyze the generality of our results, and we discuss potential experiments.
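As a rough intuition for how temporal disorder enters such models, here is a crude, discrete-time toy variant of a contact process in which the infection rate is redrawn once per sweep (a hedged sketch only; the paper's renormalization-group and large-scale Monte Carlo methods are far more careful, and all parameter values below are illustrative).

```python
import random

def contact_process(n_sites=200, steps=400, lam_mean=2.0, noise=1.0, seed=1):
    """Toy synchronous-update 1D contact process with temporal disorder:
    the infection rate lambda(t) is redrawn every sweep, modeling
    time-varying environmental noise; the recovery rate is fixed at 1.
    Returns the density of active sites after each sweep."""
    random.seed(seed)
    state = [1] * n_sites                        # start fully active
    densities = []
    for _ in range(steps):
        lam = max(0.0, lam_mean + random.uniform(-noise, noise))  # temporal disorder
        p_infect = lam / (lam + 1.0)             # infect rather than recover
        new = state[:]
        for i in range(n_sites):
            if state[i]:
                if random.random() < p_infect:
                    new[(i + random.choice((-1, 1))) % n_sites] = 1  # infect a neighbour
                else:
                    new[i] = 0                   # recover
        state = new
        densities.append(sum(state) / n_sites)
    return densities

# Subcritical rates drive the system into the absorbing (all-inactive) state.
print(contact_process(n_sites=100, steps=300, lam_mean=0.3, noise=0.2, seed=2)[-1])
```

The all-inactive state is absorbing: once the density hits zero it stays there, which is the nonequilibrium transition the abstract refers to.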
Samson, Fabienne; Mottron, Laurent; Jemel, Boutheina; Belin, Pascal; Ciocca, Valter
To test the hypothesis that the level of neural complexity explains the relative level of performance and brain activity in autistic individuals, available behavioural, ERP, and imaging findings related to the perception of increasingly complex auditory material under various processing tasks in autism were reviewed. Tasks involving simple material…
Grube, Manon; Bruffaerts, Rose; Schaeverbeke, Jolien; Neyens, Veerle; De Weer, An-Sofie; Seghers, Alexandra; Bergmans, Bruno; Dries, Eva; Griffiths, Timothy D; Vandenberghe, Rik
The extent to which non-linguistic auditory processing deficits may contribute to the phenomenology of primary progressive aphasia is not established. Using non-linguistic stimuli devoid of meaning we assessed three key domains of auditory processing (pitch, timing and timbre) in a consecutive series of 18 patients with primary progressive aphasia (eight with semantic variant, six with non-fluent/agrammatic variant, and four with logopenic variant), as well as 28 age-matched healthy controls. We further examined whether performance on the psychoacoustic tasks in the three domains related to the patients' speech and language and neuropsychological profile. At the group level, patients were significantly impaired in the three domains. Patients had the most marked deficits within the rhythm domain for the processing of short sequences of up to seven tones. Patients with the non-fluent variant showed the most pronounced deficits at the group and the individual level. A subset of patients with the semantic variant were also impaired, though less severely. The patients with the logopenic variant did not show any significant impairments. Significant deficits in the non-fluent and the semantic variant remained after partialling out effects of executive dysfunction. Performance on a subset of the psychoacoustic tests correlated with conventional verbal repetition tests. In sum, a core central auditory impairment exists in primary progressive aphasia for non-linguistic stimuli. While the non-fluent variant is clinically characterized by a motor speech deficit (output problem), perceptual processing of tone sequences is clearly deficient. This may indicate the co-occurrence in the non-fluent variant of a deficit in working memory for auditory objects. Parsimoniously we propose that auditory timing pathways are altered, which are used in common for processing acoustic sequence structure in both speech output and acoustic input. PMID:27060523
Miller, Michael L; Gallup, Andrew C; Vogel, Andrea R; Clark, Anne B
Yawning may serve both social and nonsocial functions. When budgerigars (Melopsittacus undulatus) are briefly held, simulating capture by a predator, the temporal pattern of yawning changes. When this species is observed in a naturalistic setting (undisturbed flock), yawning and also stretching, a related behavior, are mildly contagious. On the basis of these findings, we hypothesized that a stressful event would be followed by the clustering of these behaviors in a group of birds, which may be facilitated both by a standard pattern of responding to a startling stressor and also contagion. In this study, we measured yawning and stretching in 4-bird groups following a nonspecific stressor (loud white noise) for a period of 1 hr, determining whether auditory disturbances alter the timing and frequency of these behaviors. Our results show that stretching, and to a lesser degree yawning, were nonrandomly clumped in time following the auditory disturbances, indicating that the temporal clustering is sensitive to, and enhanced by, environmental stressors while in small groups. No decrease in yawning such as found after handling stress was observed immediately after the loud noise but a similar increase in yawning 20 min after was observed. Future research is required to tease apart the roles of behavioral contagion and a time-setting effect following a startle in this species. This research is of interest because of the potential role that temporal clumping of yawning and stretching could play in both the collective detection of, and response to, local disturbances or predation threats. PMID:22268553
Fernanda Acaui Ribeiro Burguetti
Full Text Available The processing of sound information depends on the integrity of the afferent and efferent auditory pathways. The efferent auditory system may be assessed in humans by two non-invasive and objective methods: the acoustic reflex and otoacoustic emission suppression. AIM: To analyze efferent auditory system activity by means of otoacoustic emission (OAE) suppression and acoustic reflex sensitization in auditory processing disorder. METHOD: Prospective study: fifty children with auditory processing disorders (study group) and thirty-eight children without auditory processing disorders (control group) were evaluated using OAEs in the absence and presence of contralateral noise and acoustic reflex thresholds in the absence and presence of a contralateral facilitating stimulus. RESULTS: The mean OAE suppression value was up to 1.50 dB for the control group and up to 1.26 dB for the study group. The mean reflex sensitization value was up to 14.60 dB for the study group and up to 15.21 dB for the control group. There was no statistically significant difference between the responses of the control and study groups in either procedure. CONCLUSION: The study group showed reduced OAE suppression values and increased acoustic reflex sensitization values relative to the control group.
Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.
Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.
Favrot, Sylvain Emmanuel; Buchholz, Jörg
…reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different-order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution… Throughout the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid "artifacts" such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques…
Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique. PMID:26500519
Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a...
Wlodarczyk, Ł.; Szkielkowska, A.; Ratynska, J.; Skarzynski, P. H.; Skarzynski, H.; Ganc, M.; Pilka, A.; Obrycka, A.
The objective of this work was to assess the occurrence of central auditory processing disorders in children with dyslalia. Material and method: the material included 30 children aged 7–9.8 years who had been under long-term speech therapy care due to articulation disorders. All the children underwent phoniatric and speech examination, including tonal and impedance audiometry, a speech therapist's consultation, and a psychologist's consultation. Electrophysiological (N2, P2, N2,…
Liberalesso, Paulo Breno; D'Andrea, Karlin Fabianne; Cordeiro, Mara L.; Zeigelboim, Bianca; Marques, Jair; Jurkiewicz, Ari
Background: Sleep deprivation is extremely common in contemporary society and is considered a frequent cause of disturbances in behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse information…
Ellis, Robert J.; Duan, Zhiyan; Wang, Ye
“Moving to the beat” is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical “energy”) in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). Although several such algorithms return simple point estimates of an audio file’s temporal structure (e.g., “average tempo”, “time signature”), none has sought to quantify the temporal stability of a series of detected beats. Such a method, a “Balanced Evaluation of Auditory Temporal Stability” (BEATS), is proposed here, and is illustrated using the Million Song Dataset (a collection of audio features and music metadata for nearly one million audio files). A publicly accessible web interface is also presented, which combines the thresholdable statistics of BEATS with queryable metadata terms, fostering potential avenues of research and facilitating the creation of highly personalized music playlists for clinical or recreational applications. PMID:25469636
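The exact statistics computed by BEATS are not reproduced here, but the general idea of quantifying the temporal stability of a detected beat series can be sketched with a simple coefficient-of-variation measure over inter-beat intervals (an illustrative stand-in, not the BEATS algorithm itself; the beat times below are synthetic).

```python
def interbeat_stability(beat_times):
    """Coefficient of variation (CV) of inter-beat intervals:
    0.0 for a perfectly steady pulse, larger for unstable beat tracking."""
    ibis = [b - a for a, b in zip(beat_times, beat_times[1:])]
    mean = sum(ibis) / len(ibis)
    var = sum((x - mean) ** 2 for x in ibis) / len(ibis)
    return (var ** 0.5) / mean

steady = [0.5 * k for k in range(1, 21)]                      # metronomic 120 BPM
jittery = [0.5 * k + 0.05 * (-1) ** k for k in range(1, 21)]  # alternating timing error
print(interbeat_stability(steady))   # 0.0
print(interbeat_stability(jittery))  # clearly above zero
```

A point estimate such as "average tempo" would be identical for both sequences; a stability statistic like this one is what separates them.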
Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information, indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
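The mutual-information comparison of candidate codes can be illustrated with a simple plug-in estimator over discrete stimulus/response labels (a toy sketch with synthetic labels, not the cat AAF recordings; real analyses must also correct for limited-sampling bias).

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Plug-in mutual information (bits) between two discrete label
    sequences given as (stimulus, response) pairs."""
    n = len(pairs)
    pxy = Counter(pairs)                       # joint counts
    px = Counter(x for x, _ in pairs)          # stimulus marginals
    py = Counter(y for _, y in pairs)          # response marginals
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

# A response code that perfectly tracks four repetition rates carries
# exactly log2(4) = 2 bits about the stimulus.
perfect = [(rate, rate) for rate in (1, 2, 4, 8) for _ in range(25)]
print(mutual_information(perfect))  # 2.0
```

Feeding the same estimator binned ISIs, spike counts, or latency-jitter values in place of the response label is what lets the different candidate codes be compared on a common information scale.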
Tang, Yong; Tang, Na
Presenting a systematic introduction to temporal model and time calculation, this volume explores temporal information processing technology and its applications. Topics include the time model in terms of calculus and logic, temporal data models and database concepts, temporal query language, and more.
Song, Zhe; Kusiak, Andrew
This paper presents a dynamic predictive-optimization framework of a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with the data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be either transformed into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework. PMID:19900853
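A preference-aggregation approach of the kind mentioned can be sketched as a weighted-sum scalarization of the two objectives (the weights and candidate operating settings below are hypothetical, not the plant data or the paper's actual aggregation scheme).

```python
def aggregate(efficiency, limestone, w_eff=0.7, w_lime=0.3):
    """Weighted-sum scalarization: reward boiler efficiency and penalize
    limestone consumption, turning two objectives into a single score."""
    return w_eff * efficiency - w_lime * limestone

# Hypothetical candidate operating settings (efficiency fraction, limestone t/h).
candidates = [
    {"efficiency": 0.91, "limestone": 1.8},
    {"efficiency": 0.88, "limestone": 1.1},
    {"efficiency": 0.93, "limestone": 2.6},
]
best = max(candidates, key=lambda c: aggregate(c["efficiency"], c["limestone"]))
print(best["efficiency"])  # the middle candidate wins: 0.88
```

The alternative mentioned in the abstract, Pareto optimization, keeps every candidate that no other candidate beats on both objectives instead of collapsing them into one score.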
Behroozmand, Roozbeh; Sangtian, Stacey; Korzyukov, Oleg; Larson, Charles R
The predictive coding model suggests that voice motor control is regulated by a process in which the mismatch (error) between feedforward predictions and sensory feedback is detected and used to correct vocal motor behavior. In this study, we investigated how predictions about the timing of pitch perturbations in voice auditory feedback would modulate ERP and behavioral responses during vocal production. We designed six counterbalanced blocks in which a +100 cents pitch-shift stimulus perturbed voice auditory feedback during vowel sound vocalizations. In three blocks, there was a fixed delay (500, 750 or 1000 ms) between voice and pitch-shift stimulus onset (predictable), whereas in the other three blocks, stimulus onset delay was randomized between 500, 750 and 1000 ms (unpredictable). We found that subjects produced compensatory (opposing) vocal responses that started 80 ms after the onset of the unpredictable stimuli. However, for predictable stimuli, subjects initiated vocal responses 20 ms before stimulus onset and followed the direction of pitch shifts in voice feedback. Analysis of ERPs showed that the amplitudes of the N1 and P2 components were significantly reduced in response to predictable compared with unpredictable stimuli. These findings indicate that predictions about temporal features of sensory feedback can modulate vocal motor behavior. In the context of the predictive coding model, temporally predictable stimuli are learned and reinforced by the internal feedforward system, and, as indexed by the ERP suppression, the sensory feedback contribution is reduced for their processing. These findings provide new insights into the neural mechanisms of vocal production and motor control. PMID:26835556
Caroline Nunes Rocha-Muniz; Débora Maria Befi-Lopes; Eliane Schochat
INTRODUCTION: Mismatch negativity, an electrophysiological measure, evaluates the brain's capacity to discriminate sounds, regardless of attentional and behavioral capacity. Thus, this auditory event-related potential is promising in the study of the neurophysiological basis underlying auditory processing.OBJECTIVE: To investigate complex acoustic signals (speech) encoded in the auditory nervous system of children with specific language impairment and compare with children with auditory proce...
Iliadou, V; Iakovides, S
Background Psychoacoustics is a fascinating developing field concerned with the evaluation of the hearing sensation as an outcome of a sound or speech stimulus. Neuroaudiology with electrophysiologic testing, records the electrical activity of the auditory pathways, extending from the 8th cranial nerve up to the cortical auditory centers as a result of external auditory stimuli. Central Auditory Processing Disorders may co-exist with mental disorders and complicate diagnosis and outcome. Desi...
Daassi, Chaouki; Nigay, Laurence; Fauvet, Marie-Christine
Temporal data are abundantly present in many application domains such as banking, financial, clinical, geographical applications and so on. Temporal data have been extensively studied from data mining and database perspectives. Complementary to these studies, our work focuses on the visualization techniques of temporal data: a wide range of visualization techniques have been designed to assist the users to visually analyze and manipulate temporal data. All the techni...
Hames, Elizabeth C.; Murphy, Brandi; Rajmohan, Ravi; Anderson, Ronald C.; Baker, Mary; Zupancic, Stephen; O'Boyle, Michael; Richman, David
Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when they were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block vs. the second presentation of a visual stimulus in an all-visual block (AA2-VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs. PMID:27148020
Terry, J; Stevens, C J; Weidemann, G; Tillmann, B
Implicit learning of temporal structure has primarily been reported when events within a sequence (e.g., visual-spatial locations, tones) are systematically ordered and correlated with the temporal structure. An auditory serial reaction time task was used to investigate implicit learning of temporal intervals between pseudorandomly ordered syllables. Over exposure, participants identified syllables presented in sequences with weakly metrical temporal structures. In a test block, the temporal structure differed from exposure only in the duration of the interonset intervals (IOIs) between groups. It was hypothesized that reaction time (RT) to syllables following between-group IOIs would decrease with exposure and increase at test. In Experiments 1 and 2, the sequences presented over exposure and test were counterbalanced across participants (Pattern 1 and Pattern 2 conditions). An RT increase at test to syllables following between-group IOIs was only evident in the condition that presented an exposure structure with a slightly stronger meter (Pattern 1 condition). The Pattern 1 condition also elicited a global expectancy effect: Test block RT slowed to earlier-than-expected syllables (i.e., syllables shifted to an earlier beat) but not to later-than-expected syllables. Learning of between-group IOIs and the global expectancy effect extended to the Pattern 2 condition when meter was strengthened with an external pulse (Experiment 2). Experiment 3 further demonstrated implicit learning of a new weakly metrical structure with only earlier-than-expected violations at test. Overall findings demonstrate learning of weakly metrical rhythms without correlated event structures (i.e., sequential syllable orders). They further suggest the presence of a global expectancy effect mediated by metrical strength. PMID:27301354
Stephen B.R.E. Brown
This dissertation explores the involvement of the locus coeruleus-noradrenaline (LC-NE) system in both temporal attention and uncertainty processing. To this end, a number of cognitive tasks are used (Stroop, passive viewing, attentional blink, accessory stimulus, auditory oddball) and a number of techniques are utilized (electroencephalography [EEG], pupillometry, psychopharmacology).
Benasich, April A.; Thomas, Jennifer J.; Choudhury, Naseem; Leppänen, Paavo H. T.
The ability to process two or more rapidly presented, successive, auditory stimuli is believed to underlie successful language acquisition. Likewise, deficits in rapid auditory processing of both verbal and nonverbal stimuli are characteristic of individuals with developmental language disorders such as Specific Language Impairment. Auditory processing abilities are well developed in infancy, and thus such deficits should be detectable in infants. In the studies presented here, converging met...
Mirian Aratangy Arnaut
Contemporary cross-sectional cohort study. There is evidence that auditory perception influences the development of oral and written language, as well as the self-perception of vocal conditions. Maturation of the auditory system can affect this process. OBJECTIVE: To characterize the auditory skills of temporal ordering and localization in dysphonic children. MATERIALS AND METHODS: We assessed 42 children (4 to 8 years). Study group: 31 dysphonic children; comparison group: 11 children without vocal change complaints. All had normal auditory thresholds and normal cochleo-eyelid reflexes. They underwent a simplified assessment of auditory processing (Pereira, 1993). To compare the groups, we used the Mann-Whitney and Kruskal-Wallis statistical tests, with a significance level of 0.05 (5%). RESULTS: On the simplified assessment, 100% of the comparison group and 61.29% of the study group had normal results. The groups were similar on the localization and verbal sequential memory tests. The nonverbal sequential memory test showed worse results in dysphonic children; within this group, performance was worst among four- to six-year-olds. CONCLUSION: The dysphonic children showed changes in localization or temporal ordering skills; the skill of non-verbal temporal ordering differentiated the dysphonic group, in which sound localization improved with age.
Ouimet, Tialee; Balaban, Evan
Reading impairments have previously been associated with auditory processing differences. We examined "auditory stream biasing", a global aspect of auditory temporal processing. Children with reading impairments, control children and adults heard a 10 s long stream-bias-inducing sound sequence (a repeating 1000 Hz tone) and a test sequence (eight…
Anthony J. Rissling
Although sensory processing abnormalities contribute to widespread cognitive and psychosocial impairments in schizophrenia (SZ) patients, scalp-channel measures of averaged event-related potentials (ERPs) mix contributions from distinct cortical source-area generators, diluting the functional relevance of channel-based ERP measures. SZ patients (n = 42) and non-psychiatric comparison subjects (n = 47) participated in a passive auditory duration oddball paradigm, eliciting a triphasic (Deviant - Standard) tone ERP difference complex, here termed the auditory deviance response (ADR), comprised of a mid-frontal mismatch negativity (MMN), P3a positivity, and re-orienting negativity (RON) peak sequence. To identify its cortical sources and to assess possible relationships between their response contributions and clinical SZ measures, we applied independent component analysis to the continuous 68-channel EEG data and clustered the resulting independent components (ICs) across subjects on spectral, ERP, and topographic similarities. Six IC clusters centered in right superior temporal, right inferior frontal, ventral mid-cingulate, anterior cingulate, medial orbitofrontal, and dorsal mid-cingulate cortex each made triphasic response contributions. Although correlations between measures of SZ clinical, cognitive, and psychosocial functioning and standard (Fz) scalp-channel ADR peak measures were weak or absent, for at least four IC clusters one or more significant correlations emerged. In particular, differences in MMN peak amplitude in the right superior temporal IC cluster accounted for 48% of the variance in SZ-subject performance on tasks necessary for real-world functioning, and medial orbitofrontal cluster P3a amplitude accounted for 40%/54% of SZ-subject variance in positive/negative symptoms. Thus, source-resolved auditory deviance response measures including MMN may be highly sensitive to SZ clinical, cognitive, and functional characteristics.
Marshall, Catherine M.; Snowling, Margaret J.; Bailey, Peter J.
Two studies explored the relationship between rapid auditory processing and phonological processing in 82 typical children and compared 17 children with dyslexia to controls. Children with dyslexia performed at a level similar to reading-age controls on auditory processing but obtained scores that were significantly below those of the…
Jones, S J; Vaz Pato, M; Sprague, L; Stokes, M; Munday, R; Haque, N
In order to assess higher auditory processing capabilities, long-latency auditory evoked potentials (AEPs) were recorded to synthesized musical instrument tones in 22 post-comatose patients with severe brain injury causing variably attenuated behavioural responsiveness. On the basis of normative studies, three different types of spectro-temporal modulation were employed. When a continuous 'clarinet' tone changes pitch once every few seconds, N1/P2 potentials are evoked at latencies of approximately 90 and 180 ms, respectively. Their distribution in the fronto-central region is consistent with generators in the supratemporal cortex of both hemispheres. When the pitch is modulated at a much faster rate (approximately 16 changes/s), responses to each change are virtually abolished, but potentials with similar distribution are still elicited by changing the timbre (e.g. 'clarinet' to 'oboe') every few seconds. These responses appear to represent the cortical processes concerned with spectral pattern analysis and the grouping of frequency components to form sound 'objects'. Following a period of 16/s oscillation between two pitches, a more anteriorly distributed negativity is evoked on resumption of a steady pitch. Various lines of evidence suggest that this is probably equivalent to the 'mismatch negativity' (MMN), reflecting a pre-perceptual, memory-based process for detection of change in spectro-temporal sound patterns. This method requires no off-line subtraction of AEPs evoked by the onset of a tone, and the MMN is produced rapidly and robustly with considerably larger amplitude (usually >5 microV) than that to discontinuous pure tones. In the brain-injured patients, the presence of AEPs to two or more complex tone stimuli (in the combined assessment of two authors who were 'blind' to the clinical and behavioural data) was significantly associated with the demonstrable possession of discriminative hearing (the ability to respond differentially to verbal commands
of two such cues on speech intelligibility was studied. First, the benefit from early reflections (ERs) in a room was determined using a virtual auditory environment. ERs were found to be useful for speech intelligibility, but to a smaller extent than the direct sound (DS). The benefit was quantified with an intelligibility-weighted "efficiency factor", which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not … intelligibility, the exact ILD information is not crucial. The results from an additional experiment demonstrated that the ER benefit was maintained with independent as well as with linked hearing-aid compression. Overall, this work contributes to the understanding of ER processing in listeners with normal and …
Bakos, Sarolta; Töllner, Thomas; Trinkl, Monika; Landes, Iris; Bartling, Jürgen; Grossheinrich, Nicola; Schulte-Körne, Gerd; Greimel, Ellen
To date, little is known about sex differences in the neurophysiological correlates underlying auditory information processing. In the present study, auditory evoked potentials were evoked in typically developing male (n = 15) and female (n = 14) adolescents (13-18 years) during an auditory oddball task. Girls compared to boys displayed lower N100 and P300 amplitudes to targets. Larger N100 amplitudes in adolescent boys might indicate higher neural sensitivity to changes of incoming auditory information. The P300 findings point toward sex differences in auditory working memory and might suggest that adolescent boys might allocate more attentional resources when processing relevant auditory stimuli than adolescent girls. PMID:27379950
Buchholz, Jörg; Kerketsos, P
When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA …]. The … filterbank was designed to approximate auditory filter shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward-masking data. The results of the present study demonstrate that a "purely" spectrum-based model approach can successfully describe auditory coloration detection even at high …
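The spectral modulation described above is a comb-filter effect: a direct sound plus one delayed, attenuated copy has a power spectrum with ripple peaks spaced at the inverse of the reflection delay. A minimal numpy sketch of that relationship (the function name `reflection_comb` and the 0.8 amplitude / 2 ms delay are illustrative assumptions, not values from the study):

```python
import numpy as np

def reflection_comb(a, delay_s, f):
    """Power response |1 + a*exp(-j*2*pi*f*tau)|**2 of a direct sound
    plus a single reflection with relative amplitude a and delay tau (s)."""
    return np.abs(1.0 + a * np.exp(-2j * np.pi * f * delay_s)) ** 2

# A 2 ms reflection delay produces spectral peaks every 1/0.002 = 500 Hz.
f = np.linspace(0.0, 5000.0, 50001)              # 0.1 Hz frequency grid
h = reflection_comb(0.8, 0.002, f)
# locate interior local maxima of the rippled power spectrum
interior = (h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])
peak_freqs = f[1:-1][interior]                   # 500, 1000, ..., 4500 Hz
```

The peak spacing (here 500 Hz) depends only on the delay, while the peak-to-notch depth depends on the reflection amplitude, which is why coloration detection thresholds vary with reflection delay.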
Maeda, Yukihide; Nakagawa, Atsuko; Nagayasu, Rie; Sugaya, Akiko; Omichi, Ryotaro; Kariya, Shin; Fukushima, Kunihiro; Nishizaki, Kazunori
Central auditory processing disorder (CAPD) is a condition in which dysfunction in the central auditory system causes difficulty in listening to conversations, particularly under noisy conditions, despite normal peripheral auditory function. Central auditory testing is generally performed in patients with normal hearing on the pure tone audiogram (PTA). This report shows that diagnosis of CAPD is possible even in the presence of an elevated threshold on the PTA, provided that the normal function of the peripheral auditory pathway was verified by distortion product otoacoustic emission (DPOAE), auditory brainstem response (ABR), and auditory steady state response (ASSR). Three pediatric cases (9- and 10-year-old girls and an 8-year-old boy) of CAPD with elevated thresholds on PTAs are presented. The chief complaint was difficulty in listening to conversations. PTA showed elevated thresholds, but the responses and thresholds for DPOAE, ABR, and ASSR were normal, showing that peripheral auditory function was normal. Significant findings of central auditory testing such as dichotic speech tests, time compression of speech signals, and binaural interaction tests confirmed the diagnosis of CAPD. These threshold shifts in PTA may provide a new concept of a clinical symptom due to central auditory dysfunction in CAPD. PMID:26922127
Berns, Gregory S.; Cook, Peter F.; Foxley, Sean; Jbabdi, Saad; Miller, Karla L.; Marino, Lori
The brains of odontocetes (toothed whales) look grossly different from their terrestrial relatives. Because of their adaptation to the aquatic environment and their reliance on echolocation, the odontocetes' auditory system is both unique and crucial to their survival. Yet, scant data exist about the functional organization of the cetacean auditory system. A predominant hypothesis is that the primary auditory cortex lies in the suprasylvian gyrus along the vertex of the hemispheres, with this...
Speech perception (SP), verbal working memory (WM) and auditory temporal resolution (ATR) have been studied in children with attention deficit hyperactivity disorder (ADHD) and language impairment (LI), as well as in reference groups of typically developed children. A computerised method was developed, in which discrimination of same or different pairs of stimuli was tested. In a functional Magnetic Resonance Imaging (fMRI) study a similar test was used to explore the neural...
Neijenhuis, C.A.M.; Beynon, A.J.; Snik, A.F.M.; Engelen, B.G.M. van; Broek, P. van den
HYPOTHESIS: It is unclear whether Charcot-Marie-Tooth (CMT) disease, type 1A, causes auditory processing disorders. Therefore, auditory processing abilities were investigated in five CMT1A patients with normal hearing. BACKGROUND: Previous studies have failed to separate peripheral from central auditory…
Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)
Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series that could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)
Moran, Landhing M.; Booze, Rosemarie M.; Mactutus, Charles F.
HIV-1-associated neurocognitive disorders (HAND) afflict up to 50% of HIV-1-positive individuals, despite the effectiveness of combination antiretroviral therapy (CART) in reducing the prevalence of more severe neurocognitive impairment. Alterations in brainstem auditory evoked potentials (BAEP), a measure of temporal processing, are one of the earliest neurological abnormalities of HIV-1-positive individuals. Prepulse inhibition (PPI) of the auditory startle response (ASR), a measure of sens...
Cristina F.B. Murphy
Research has demonstrated that a higher level of education is associated with better performance on cognitive tests among middle-aged and elderly people. However, the effects of education on auditory processing skills have not yet been evaluated. Previous demonstrations of sensory-cognitive interactions in the aging process indicate the potential importance of this topic. Therefore, the primary purpose of this study was to investigate the performance of middle-aged and elderly people with different levels of formal education on auditory processing tests. A total of 177 adults with no evidence of cognitive, psychological or neurological conditions took part in the research. The participants completed a series of auditory assessments, including dichotic digit, frequency pattern and speech-in-noise tests. A working memory test was also performed to investigate the extent to which auditory processing and cognitive performance were associated. The results demonstrated positive but weak correlations between years of schooling and performance on all of the tests applied. Years of schooling was also one of the best predictors of frequency pattern and speech-in-noise test performance. Additionally, performance on the working memory, frequency pattern and dichotic digit tests was correlated, suggesting that the influence of educational level on auditory processing performance might be associated with the cognitive demand of the auditory processing tests rather than with auditory-sensory aspects themselves. Longitudinal research is required to investigate the causal relationship between educational level and auditory processing skills.
Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.
Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.
Rinaldi, Luca; Lega, Carlotta; Cattaneo, Zaira; Girelli, Luisa; Bernardi, Nicolò Francesco
Growing evidence shows that individuals consistently match auditory pitch with visual size. For instance, high-pitched sounds are perceptually associated with smaller visual stimuli, whereas low-pitched sounds with larger ones. The present study explores whether this crossmodal correspondence, reported so far for perceptual processing, also modulates motor planning. To address this issue, we carried out a series of kinematic experiments to verify whether actions implying size processing are affected by auditory pitch. Experiment 1 showed that grasping movements toward small/large objects were initiated faster in response to high/low pitches, respectively, thus extending previous findings in the literature to more complex motor behavior. Importantly, auditory pitch influenced the relative scaling of the hand preshaping, with high pitches associated with smaller grip aperture compared with low pitches. Notably, no effect of auditory pitch was found in case of pointing movements (no grasp implied, Experiment 2), as well as when auditory pitch was irrelevant to the programming of the grip aperture, that is, in case of grasping an object of uniform size (Experiment 3). Finally, auditory pitch influenced also symbolic manual gestures expressing "small" and "large" concepts (Experiment 4). In sum, our results are novel in revealing the impact of auditory pitch on motor planning when size processing is required, and shed light on the role of auditory information in driving actions. (PsycINFO Database Record) PMID:26280267
Hayrynen, Lauren K; Hamm, Jordan P; Sponheim, Scott R; Clementz, Brett A
Individuals with schizophrenia exhibit abnormalities in evoked brain responses in oddball paradigms. These could result from (a) insufficient salience-related cortical signaling (P300), (b) insufficient suppression of irrelevant aspects of the auditory environment, or (c) excessive neural noise. We tested whether disruption of ongoing auditory steady-state responses at predetermined frequencies informed which of these issues contribute to auditory stimulus relevance processing abnormalities in schizophrenia. Magnetoencephalography data were collected for 15 schizophrenia and 15 healthy subjects during an auditory oddball paradigm (25% targets; 1-s interstimulus interval). Auditory stimuli (pure tones: 1 kHz standards, 2 kHz targets) were administered during four continuous background (auditory steady-state) stimulation conditions: (1) no stimulation, (2) 24 Hz, (3) 40 Hz, and (4) 88 Hz. The modulation of the auditory steady-state response (aSSR) and the evoked responses to the transient stimuli were quantified and compared across groups. In comparison to healthy participants, the schizophrenia group showed greater disruption of the ongoing aSSR by targets regardless of steady-state frequency, and reduced amplitude of both M100 and M300 event-related field components. During the no-stimulation condition, schizophrenia patients showed accentuation of left hemisphere 40 Hz response to both standard and target stimuli, indicating an effort to enhance local stimulus processing. Together, these findings suggest abnormalities in auditory stimulus relevance processing in schizophrenia patients stem from insufficient amplification of salient stimuli. PMID:26933842
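Quantifying an auditory steady-state response at a known stimulation frequency, as in the paradigm above, typically reduces to measuring spectral amplitude at that frequency relative to neighbouring bins. A minimal numpy sketch on simulated data (all signal parameters here are illustrative, not values from the study):

```python
import numpy as np

fs, dur, tag = 1000, 2.0, 40.0           # sample rate (Hz), duration (s), aSSR frequency
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(7)
# simulated sensor trace: a 40 Hz steady-state response buried in broadband noise
x = 0.5 * np.sin(2 * np.pi * tag * t) + rng.standard_normal(t.size)

spec = np.abs(np.fft.rfft(x)) / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = int(np.argmin(np.abs(freqs - tag)))  # bin at the tagging frequency
# aSSR signal-to-noise ratio: tagged bin vs. mean of nearby non-tagged bins
neighbours = np.r_[spec[k - 10:k - 2], spec[k + 3:k + 11]]
snr = spec[k] / neighbours.mean()
```

A disruption of the ongoing aSSR by a target stimulus would then show up as a transient drop in this tagged-frequency amplitude when computed over a sliding window around target onset.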
Charles F. Mactutus
One clue regarding the basis of cocaine-induced deficits in attentional processing is provided by the clinical findings of changes in infants' startle response, observations buttressed by neurophysiological evidence of alterations in brainstem transmission time. Using the IV route of administration and doses that mimic the peak arterial levels of cocaine use in humans, the present study examined the effects of prenatal cocaine on auditory information processing via tests of the acoustic startle response (ASR), habituation, and prepulse inhibition (PPI) in the offspring. Nulliparous Long-Evans female rats, implanted with an IV access port prior to breeding, were administered saline or 0.5, 1.0, or 3.0 mg/kg/injection of cocaine HCl (COC) from gestation day (GD) 8 to 20 (1x/day GD8-14, 2x/day GD15-20). COC had no significant effects on maternal/litter parameters or growth of the offspring. At 18-20 days of age, one male and one female randomly selected from each litter displayed an increased ASR (>30% for males at 1.0 mg/kg and >30% for females at 3.0 mg/kg). When reassessed in adulthood (D90-100), a linear dose-response increase was noted in response amplitude. At both test ages, within-session habituation was retarded by prenatal cocaine treatment. Testing the females in diestrus vs. estrus did not alter the results. Prenatal cocaine altered the PPI response function across interstimulus intervals (ISI) and induced significant sex-dependent changes in response latency. Idazoxan, an alpha2-adrenergic receptor antagonist, significantly enhanced the ASR, but less enhancement was noted with increasing doses of prenatal cocaine. Thus, in utero exposure to cocaine, when delivered via a protocol designed to capture prominent features of recreational usage, causes persistent, if not permanent, alterations in auditory information processing, and suggests dysfunction of the central noradrenergic circuitry modulating, if not mediating, these responses.
Venkataraman, Yamini; Bartlett, Edward L.
The development of auditory temporal processing is important for processing complex sounds as well as for acquiring reading and language skills. Neuronal properties and sound processing change dramatically in auditory cortex neurons after the onset of hearing. However, the development of the auditory thalamus or medial geniculate body (MGB) has not been well studied over this critical time window. Since synaptic inhibition has been shown to be crucial for auditory temporal processing, this st...
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorising computation over frequency channels, which are implemented in Brian Hears, a library for the spiking neural network simulator package Brian. This approach allows us to use high-level programming languages such as Python, as the cost of interpretation becomes negligible. This makes it possible to define and simulate complex models in a simple way, whereas all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelised using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible implementations.
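The vectorisation idea, computing all frequency channels at once instead of looping over them, can be illustrated outside Brian Hears with a plain numpy sketch. This simplified gammatone bank (the function `gammatone_bank`, the ERB bandwidth constants, and the channel layout are illustrative assumptions, not the Brian Hears API) builds the impulse responses of every channel in one array and reuses a single FFT of the input for all channels:

```python
import numpy as np

def gammatone_bank(signal, fs, cfs, dur=0.025, order=4):
    """Filter `signal` (1-D) through a bank of gammatone filters centred at
    `cfs` (Hz), vectorised over channels: one FFT of the input serves all."""
    t = np.arange(int(dur * fs)) / fs
    # ERB bandwidths (Glasberg & Moore approximation), one per channel
    b = 1.019 * 24.7 * (4.37 * cfs / 1000.0 + 1.0)
    # impulse responses for ALL channels at once: shape (n_channels, n_taps)
    ir = (t ** (order - 1)
          * np.exp(-2 * np.pi * b[:, None] * t)
          * np.cos(2 * np.pi * cfs[:, None] * t))
    ir /= np.abs(ir).sum(axis=1, keepdims=True)   # rough gain normalisation
    n = signal.size + ir.shape[1] - 1             # linear-convolution length
    out = np.fft.irfft(np.fft.rfft(ir, n) * np.fft.rfft(signal, n), n)
    return out[:, :signal.size]

# sanity check: a 1 kHz tone should excite the channel nearest 1 kHz most
fs = 16000
tone = np.sin(2 * np.pi * 1000.0 * np.arange(fs) / fs)
cfs = np.geomspace(100.0, 8000.0, 64)
energy = (gammatone_bank(tone, fs, cfs) ** 2).sum(axis=1)
```

The per-channel loop disappears entirely: the broadcasting in `ir` and the batched FFT are the "vectorisation over frequency channels" the abstract describes, and the same structure maps directly onto GPU array operations.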
Impey, Danielle; de la Salle, Sara; Knott, Verner
Transcranial direct current stimulation (tDCS) is a non-invasive form of brain stimulation which uses a very weak constant current to temporarily excite (anodal stimulation) or inhibit (cathodal stimulation) activity in the brain area of interest via small electrodes placed on the scalp. Currently, tDCS of the frontal cortex is being used as a tool to investigate cognition in healthy controls and to improve symptoms in neurological and psychiatric patients. tDCS has been found to facilitate cognitive performance on measures of attention, memory, and frontal-executive functions. Recently, a short session of anodal tDCS over the temporal lobe has been shown to increase auditory sensory processing as indexed by the Mismatch Negativity (MMN) event-related potential (ERP). This preliminary pilot study examined the separate and interacting effects of both anodal and cathodal tDCS on MMN-indexed auditory pitch discrimination. In a randomized, double blind design, the MMN was assessed before (baseline) and after tDCS (2 mA, 20 min) in 2 separate sessions, one involving 'sham' stimulation (the device is turned off) followed by anodal stimulation (to temporarily excite cortical activity locally), and one involving cathodal stimulation (to temporarily decrease cortical activity locally) followed by anodal stimulation. Results demonstrated that anodal tDCS over the temporal cortex increased MMN-indexed auditory detection of pitch deviance, and while cathodal tDCS decreased auditory discrimination in baseline-stratified groups, subsequent anodal stimulation did not significantly alter MMN amplitudes. These findings strengthen the position that tDCS effects on cognition extend to the neural processing of sensory input and raise the possibility that this neuromodulatory technique may be useful for investigating sensory processing deficits in clinical populations. PMID:27054908
Christison-Lagay, Kate L.; Cohen, Yale E.
Perceptual representations of auditory stimuli (i.e., sounds) are derived from the auditory system’s ability to segregate and group the spectral, temporal, and spatial features of auditory stimuli—a process called “auditory scene analysis”. Psychophysical studies have identified several of the principles and mechanisms that underlie a listener’s ability to segregate and group acoustic stimuli. One important psychophysical task that has illuminated many of these principles and mechanisms is th...
Booth, J R; MacWhinney, B; Harasaki, Y
Children aged 8 through 11 (N = 250) were given a word-by-word sentence task in both the visual and auditory modes. The sentences included an object relative clause, a subject relative clause, or a conjoined verb phrase. Each sentence was followed by a true-false question, testing the subject of either the first or second verb. Participants were also given two memory span measures: digit span and reading span. High digit span children slowed down more at the transition from the main to the relative clause than did the low digit span children. The findings suggest the presence of a U-shaped learning pattern for on-line processing of restrictive relative clauses. Off-line accuracy scores showed different patterns for good comprehenders and poor comprehenders. Poor comprehenders answered the second verb questions at levels that were consistently below chance. Their answers were based on an incorrect local attachment strategy that treated the second noun as the subject of the second verb. For example, they often answered yes to the question "The girl chases the policeman" after the object relative sentence "The boy that the girl sees chases the policeman." Interestingly, low memory span poor comprehenders used the local attachment strategy less consistently than high memory span poor comprehenders, and all poor comprehenders used this strategy less consistently for harder than for easier sentences. PMID:11016560
Two challenges that face popular self-monitoring theories (SMTs) of auditory verbal hallucination (AVH) are that they cannot account for the auditory phenomenology of AVHs and that they cannot account for their variety. In this paper I show that both challenges can be met by adopting a predictive processing framework (PPF), and by viewing AVHs as arising from abnormalities in predictive processing. I show how, within the PPF, both the auditory phenomenology of AVHs, and three subtypes of AVH, can be accounted for. PMID:25286243
McArthur, G M; Bishop, D V M
An influential theory attributes developmental disorders of language and literacy to low-level auditory perceptual difficulties. However, evidence to date has been inconsistent and contradictory. We investigated whether this mixed picture could be explained in terms of heterogeneity in the language-impaired population. In Experiment 1, the behavioural responses of 16 people with specific language impairment (SLI) and 16 control listeners (aged 10 to 19 years) to auditory backward recognition masking (ABRM) stimuli and unmasked tones indicated that a subgroup of people with SLI are less able to discriminate between the frequencies of sounds regardless of their rate of presentation. Further, these people tended to be the younger participants, and were characterised by relatively poor nonword reading. In Experiment 2, the auditory event-related potentials (ERPs) of the same groups to unmasked tones were measured. Listeners with SLI tended to have age-inappropriate waveforms in the N1-P2-N2 region, regardless of their auditory discrimination scores in Experiment 1. Together, these results suggest that SLI may be characterised by immature development of auditory cortex, such that adult-level frequency discrimination performance is attained several years later than normal. PMID:21038192
Dawes, P; Bishop, D V M
OBJECTIVE: The aim was to address the controversy that exists over the extent to which auditory processing disorder (APD) is a separate diagnostic category with a distinctive psychometric profile, rather than a reflection of a more general learning disability. METHODS: Children with an APD diagnosis (N=25) were compared with children with dyslexia (N=19) on a battery of standardised auditory processing, language, literacy and non-verbal intelligence quotient measures as well as parental repor...
Ferguson, Melanie A.
The aims of this research were to identify and compare auditory processing, speech intelligibility, cognitive, listening, language and communication abilities in (i) typically developing, mainstream school (MS) children (n = 122) for direct comparison with (ii) children presenting to clinical services with auditory processing disorder (APD) (n = 19) or specific language impairment (SLI) (n = 22), and in (iii) a large population sample (n = 1469) who were categorised by their functional listen...
Yokota, Ryo; Aihara, Kazuyuki; Kanzaki, Ryohei; Takahashi, Hirokazu
Temporal coherence among neural populations may contribute importantly to signal encoding, specifically by providing an optimal tradeoff between encoding reliability and efficiency. Here, we considered the possibility that learning modulates the temporal coherence among neural populations in association with well-characterized map plasticity. We previously demonstrated that, in appetitive operant conditioning tasks, the tone-responsive area globally expanded during the early stage of learning, but shrank during the late stage. The present study further showed that phase locking of the first spike to band-specific oscillations of local field potentials (LFPs) significantly increased during the early stage of learning but decreased during the late stage, suggesting that neurons in A1 were more synchronously activated during early learning, whereas they were more asynchronously activated once learning was completed. Furthermore, LFP amplitudes increased during early learning but decreased during later learning. These results suggest that, compared to naïve encoding, early-stage encoding is more reliable but energy-consumptive, whereas late-stage encoding is more energetically efficient. Such a learning-stage-dependent encoding strategy may underlie learning-induced, non-monotonic map plasticity. Accumulating evidence indicates that the cholinergic system is likely to be a shared neural substrate of the processes for perceptual learning and attention, both of which modulate neural encoding in an adaptive manner. Thus, a better understanding of the links between map plasticity and modulation of temporal coherence will likely lead to a more integrated view of learning and attention. PMID:24615394
Wayland, Ratree; Lombardino, Linda
It has been estimated that approximately 5%-9% of school-aged children in the United States are diagnosed with some kind of learning disorders. Moreover, previous research has established that many of these children exhibited perceptual deficits in response to auditory stimuli, suggesting that an auditory perceptual deficit may underlie their learning disabilities. The goal of this research is to examine the ability to auditorily process speech and nonspeech stimuli among language-learning disabled (LLD) children and adults. The two questions that will be addressed in this study are: (a) Are there subtypes of LLD children/adults based on their auditory processing deficit, and (b) Is there any relationship between types of auditory processing deficits and types of language deficits as measured by a battery of psychoeducational tests.
Liberalesso, Paulo Breno
Full Text Available Abstract Background: Sleep deprivation is extremely common in contemporary society and is considered a frequent cause of disorders of behavior, mood, alertness, and cognitive performance. Although the impacts of sleep deprivation have been studied extensively in various experimental paradigms, very few studies have addressed the impact of sleep deprivation on central auditory processing (CAP). Therefore, we examined the impact of sleep deprivation on CAP, for which there is sparse information. In the present study, thirty healthy adult volunteers (17 females and 13 males, aged 30.75 ± 7.14 years) were subjected to a pure tone audiometry test, a speech recognition threshold test, a speech recognition task, the Staggered Spondaic Word Test (SSWT), and the Random Gap Detection Test (RGDT). Baseline (BSL) performance was compared to performance after 24 hours of sleep deprivation (24hSD) using the Student's t test. Results: Mean RGDT score was elevated in the 24hSD condition (8.0 ± 2.9 ms) relative to the BSL condition for the whole cohort (6.4 ± 2.8 ms; p = 0.0005), for males (p = 0.0066), and for females (p = 0.0208). Sleep deprivation reduced SSWT scores for the whole cohort in both ears [right: BSL, 98.4% ± 1.8% vs. SD, 94.2% ± 6.3%, p = 0.0005; left: BSL, 96.7% ± 3.1% vs. SD, 92.1% ± 6.1%, p …]. Conclusion: Sleep deprivation impairs RGDT and SSWT performance. These findings confirm that sleep deprivation has central effects that may impair performance in other areas of life.
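The baseline-versus-deprived comparison in the abstract above is a paired (repeated-measures) Student's t test on per-subject scores. A minimal sketch, using made-up gap-detection thresholds rather than the study's data:

```python
import math

# Made-up per-subject RGDT thresholds (ms); purely illustrative,
# not the study's measurements.
bsl  = [5.1, 6.0, 7.2, 4.8, 6.9, 5.5, 8.0, 6.1, 5.9, 7.4]   # baseline
sd24 = [6.8, 7.9, 9.0, 6.2, 8.5, 7.1, 9.6, 8.0, 7.2, 9.1]   # after 24 h awake

# Paired design: analyze the within-subject differences.
diffs = [a - b for a, b in zip(sd24, bsl)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)   # sample variance
t = mean / math.sqrt(var / n)                          # paired t statistic
print(f"t({n - 1}) = {t:.2f}")
```

With n - 1 degrees of freedom, the statistic is then compared against the t distribution to obtain the p values the abstract reports.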
Ji, Yoon Ha; Youn, Eun Kyung; Kim, Seung Chul [Sungkyunkwan Univ., School of Medicine, Seoul (Korea, Republic of)]
To identify and evaluate the normal anatomy of nerve canals in the fundus of the internal auditory canal which can be visualized on high-resolution temporal bone CT. We retrospectively reviewed high-resolution (1 mm thickness and interval, contiguous scan) temporal bone CT images of 253 ears in 150 patients who had not suffered trauma or undergone surgery. Those with a history of uncomplicated inflammatory disease were included, but those with symptoms of vertigo, sensorineural hearing loss, or facial nerve palsy were excluded. Three radiologists determined the detectability and location of canals for the labyrinthine segment of the facial, superior vestibular and cochlear nerves, and the saccular branch and posterior ampullary nerve of the inferior vestibular nerve. Five bony canals in the fundus of the internal auditory canal were identified as nerve canals. Four canals were identified on axial CT images in 100% of cases; the so-called singular canal was identified in only 68%. On coronal CT images, canals for the labyrinthine segment of the facial and superior vestibular nerves were seen in 100% of cases, but those for the cochlear nerve, the saccular branch of the inferior vestibular nerve, and the singular canal were seen in 90.1%, 87.4% and 78% of cases, respectively. In all detectable cases, the canal for the labyrinthine segment of the facial nerve was revealed as one which traversed anterolaterally from the anterosuperior portion of the fundus of the internal auditory canal. The canal for the cochlear nerve was located just below that for the labyrinthine segment of the facial nerve, while the canal for the superior vestibular nerve was seen at the posterior aspect of these two canals. The canal for the saccular branch of the inferior vestibular nerve was located just below the canal for the superior vestibular nerve, and that for the posterior ampullary nerve, the so-called singular canal, ran laterally or posterolaterally from the posteroinferior aspect of
Laing, Erika J C; Liu, Ran; Lotto, Andrew J; Holt, Lori L
Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker's speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by non-speech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization. PMID:22737140
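The long-term average spectrum (LTAS) at the center of this account is simply the average of short-time magnitude spectra over a stretch of signal. A hedged sketch on a synthetic two-tone signal standing in for a talker's speech (not the study's stimuli):

```python
import numpy as np

fs = 16000
t = np.arange(fs * 2) / fs                         # 2 s of synthetic "speech"
x = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)

# Window the signal into overlapping frames and average their spectra.
frame, hop = 512, 256
frames = [x[i:i + frame] * np.hanning(frame)
          for i in range(0, len(x) - frame, hop)]
spectra = np.abs(np.fft.rfft(frames, axis=1))      # magnitude per frame
ltas = spectra.mean(axis=0)                        # long-term average spectrum

freqs = np.fft.rfftfreq(frame, 1 / fs)
print(f"LTAS peak at {freqs[np.argmax(ltas)]:.0f} Hz")
```

In the study's logic, two contexts whose LTAS differ in the frequency region of a phonemic cue should shift categorization of a following target; an LTAS like this one summarizes exactly that spectral distribution.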
Elizabeth C Hames
Full Text Available Electroencephalography (EEG) and blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) assessed the neural correlates of sensory processing of visual and auditory stimuli in 11 adults with autism (ASD) and 10 neurotypical (NT) controls between the ages of 20 and 28. We hypothesized that ASD performance on combined audiovisual trials would be less accurate, with observable decreased EEG power across frontal, temporal, and occipital channels and decreased BOLD fMRI activity in these same regions, reflecting deficits in key sensory processing areas. Analysis focused on EEG power, BOLD fMRI, and accuracy. Lower EEG beta power and lower left auditory cortex fMRI activity were seen in ASD compared to NT when participants were presented with auditory stimuli, as demonstrated by contrasting the activity from the second presentation of an auditory stimulus in an all-auditory block versus the second presentation of a visual stimulus in an all-visual block (AA2VV2). We conclude that in ASD, combined audiovisual processing is more similar than unimodal processing to NTs.
Full Text Available Previous imaging studies on the brain mechanisms of spatial hearing have mainly focused on sounds varying in the horizontal plane. In this study, we compared activations in human auditory cortex (AC) and the adjacent inferior parietal lobule (IPL) to sounds varying in horizontal location, distance, or space (i.e., different rooms). In order to investigate both stimulus-dependent and task-dependent activations, these sounds were presented during visual discrimination, auditory discrimination, and auditory 2-back memory tasks. Consistent with previous studies, activations in AC were modulated by the auditory tasks. During both auditory and visual tasks, activations in AC were stronger to sounds varying in horizontal location than along other feature dimensions. However, in IPL, this enhancement was detected only during auditory tasks. Based on these results, we argue that IPL is not primarily involved in stimulus-level spatial analysis but that it may represent such information for more general processing when relevant to an active auditory task.
Macías, Silvio; Hechavarría, Julio C; Kössl, Manfred
During echolocation, bats estimate distance to avoid obstacles and capture moving prey. The primary distance cue is the delay between the bat's emitted echolocation pulse and the return of an echo. In the bat's auditory system, echo delay-tuned neurons that only respond to pulse-echo pairs having a specific echo delay serve target distance calculation. Accurate prey localization should benefit from the spike precision in such neurons. Here we show that delay-tuned neurons in the inferior colliculus of the mustached bat respond with higher temporal precision, shorter latency and shorter response duration than those of the auditory cortex. Based on these characteristics, we suggest that collicular neurons are best suited for a fast and accurate response that could lead to fast behavioral reactions while cortical neurons, with coarser temporal precision and longer latencies and response durations could be more appropriate for integrating acoustic information over time. The latter could be important for the formation of biosonar images. PMID:26785850
Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A
The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that...
Juarez-Salinas, Dina L.; Engle, James R.; Navarro, Xochi O.; Recanzone, Gregg H.
The compromised abilities to localize sounds and to understand speech are two hallmark deficits in aged individuals. The auditory cortex is necessary for these processes, yet we know little about how normal aging affects these early cortical fields. In this study, we recorded the spatial tuning of single neurons in primary (area A1) and secondary (area CL) auditory cortical areas in young and aged alert rhesus macaques. We found that the neurons of aged animals had greater spontaneous and dri...
Sabisch, Beate; Weiss, Benjamin; Barry, Johanna G.
Efficient auditory processing is hypothesized to support language and literacy development. However, behavioral tasks used to assess this hypothesis need to be robust to non-auditory specific individual differences. This study compared frequency discrimination abilities in a heterogeneous sample of adults using two different psychoacoustic task designs, referred to here as: 2I_6A_X and 3I_2AFC designs. The role of individual differences in nonverbal IQ (NVIQ), socioeconomic status (SES) and m...
Tânia Tochetto; Luciane da Costa Pacheco; Celina Rech Maggi; Fleming Salvador Pedroso
Objective: To check the existence of an association between the presence/absence of the blink reflex habituation in the neonatal period and auditory processing development. Methods: The occurrence of blink reflex habituation was studied in 33 neurologically normal neonates, aged between 9 and 25 months, who had their behavioral responses analyzed and classified according to Azevedo (1993). Habituation of the blink reflex was verified using 90-dB sound stimulus. The stage of auditory processin...
Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.;
… human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during … vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across … presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple …
Marsh, J. E.; Hughes, R.; Jones, D. M.
Distraction by irrelevant background sound of visually-based cognitive tasks illustrates the vulnerability of attentional selectivity across modalities. Four experiments centred on auditory distraction during tests of memory for visually-presented semantic information. Meaningful irrelevant speech disrupted the free recall of semantic category-exemplars more than meaningless irrelevant sound (Experiment 1). This effect was exacerbated when the irrelevant speech was semantically related to the...
Fritz, Jonathan B; Malloy, Megan; Mishkin, Mortimer; Saunders, Richard C
While monkeys easily acquire the rules for performing visual and tactile delayed matching-to-sample, a method for testing recognition memory, they have extraordinary difficulty acquiring a similar rule in audition. Another striking difference between the modalities is that whereas bilateral ablation of the rhinal cortex (RhC) leads to profound impairment in visual and tactile recognition, the same lesion has no detectable effect on auditory recognition memory (Fritz et al., 2005). In our previous study, a mild impairment in auditory memory was obtained following bilateral ablation of the entire medial temporal lobe (MTL), including the RhC, and an equally mild effect was observed after bilateral ablation of the auditory cortical areas in the rostral superior temporal gyrus (rSTG). In order to test the hypothesis that each of these mild impairments was due to partial disconnection of acoustic input to a common target (e.g., the ventromedial prefrontal cortex), in the current study we examined the effects of a more complete auditory disconnection of this common target by combining the removals of both the rSTG and the MTL. We found that the combined lesion led to forgetting thresholds (performance at 75% accuracy) that fell precipitously from the normal retention duration of ~30 to 40s to a duration of ~1 to 2s, thus nearly abolishing auditory recognition memory, and leaving behind only a residual echoic memory. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26707975
Yanaga, Ryuichiro; Kawahara, Hideki
A new parameter extraction procedure based on logarithmic transformation of the temporal axis was applied to investigate auditory effects on voice F0 control, to overcome artifacts due to natural fluctuations and nonlinearities in speech production mechanisms. The proposed method may add complementary information to recent findings reported using the frequency shift feedback method [Burnett and Larson, J. Acoust. Soc. Am. 112 (2002)], in terms of dynamic aspects of F0 control. In a series of experiments, dependencies of system parameters in F0 control on subjects, F0, and style (musical expression and speaking) were tested using six participants. They were three male and three female students specializing in musical education. They were asked to sustain the Japanese vowel /a/ for about 10 s repeatedly, up to 2 min in total, while hearing F0-modulated feedback speech that was modulated using an M-sequence. The results qualitatively replicated the previous finding [Kawahara and Williams, Vocal Fold Physiology (1995)] and provided more accurate estimates. Relations to designing an artificial singer will also be discussed. [Work partly supported by a Grant-in-Aid for Scientific Research (B) 14380165 and Wakayama University.]
YANG Li-jun; CAO Ke-li; WEI Chao-gang; LIU Yong-zhi
Background: Chinese tones are considered important in Chinese discrimination. However, the relevant reports on auditory central mechanisms concerning Chinese tones are limited. In this study, mismatch negativity (MMN), one of the event-related potentials (ERPs), was used to investigate pre-attentive processing of Chinese tones, and the differences between the function of oddball MMN and that of control MMN are discussed. Methods: Ten subjects (six men and four women) with normal hearing participated in the study. A sequence was presented to these subjects through a loudspeaker; the sequence included four blocks, a control block and three oddball blocks. The control block was made up of five components (one pure tone and four Chinese tones) with equiprobability. The oddball blocks were made up of two components: one was a standard stimulus (tone 1) and the other was a deviant stimulus (tone 2, tone 3, or tone 4). Electroencephalogram (EEG) data were recorded while the sequence was presented, and MMNs were obtained from the analysis of the EEG data. Results: Two kinds of MMNs were obtained, oddball MMN and control MMN. Oddball MMN was obtained by subtracting the ERP elicited by the standard stimulus (tone 1) from that elicited by the deviant stimulus (tone 2, tone 3, or tone 4) in the oddball block; control MMN was obtained by subtracting the ERP elicited by the tone in the control block, which was the same tone as the deviant stimulus in the oddball block, from the ERP elicited by the deviant stimulus (tone 2, tone 3, or tone 4) in the oddball block. There were two negative waves in oddball MMN: one appeared around 150 ms (oddball MMN 1), the other around 300 ms (oddball MMN 2). Only one negative wave appeared around 300 ms in control MMN, corresponding to oddball MMN 2. We performed statistical analyses in each paradigm for the latencies and amplitudes of oddball MMN 2 in discriminating the three Chinese tones and found no significant differences. But the latencies and amplitudes
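The two subtraction procedures this abstract describes (oddball MMN and control MMN) are simple difference waves between averaged ERPs. A sketch with synthetic waveforms, purely to illustrate the derivation, not the study's EEG data:

```python
import numpy as np

t = np.arange(0, 500, 2)                       # 0-500 ms, 2 ms samples

def erp(latency, amp):
    # Toy negative deflection (Gaussian) centered at `latency` ms.
    return -amp * np.exp(-((t - latency) ** 2) / (2 * 30 ** 2))

erp_standard = erp(100, 1.0)                   # tone 1 in the oddball block
erp_deviant  = erp(100, 1.0) + erp(150, 1.5) + erp(300, 2.0)
erp_control  = erp(100, 1.0) + erp(150, 1.5)   # same tone, equiprobable block

# Oddball MMN: deviant minus standard (same block) -> two negative waves.
oddball_mmn = erp_deviant - erp_standard
# Control MMN: deviant minus the same tone from the control block
# -> isolates only the later (~300 ms) component.
control_mmn = erp_deviant - erp_control

peak_ms = t[np.argmin(control_mmn)]
print(f"control MMN peak near {peak_ms} ms")
```

The point of the control subtraction is that acoustic differences between standard and deviant cancel out, leaving only the deviance-related response; the toy waveforms mimic that logic.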
Ronconi, Luca; Pincham, Hannah L; Cristoforetti, Giulia; Facoetti, Andrea; Szűcs, Dénes
Human attention fluctuates across time, and even when stimuli have identical physical characteristics and the task demands are the same, relevant information is sometimes consciously perceived and at other times not. A typical example of this phenomenon is the attentional blink, where participants show a robust deficit in reporting the second of two targets (T2) in a rapid serial visual presentation (RSVP) stream. Previous electroencephalographical (EEG) studies showed that neural correlates of correct T2 report are not limited to the RSVP period, but extend before visual stimulation begins. In particular, reduced oscillatory neural activity in the alpha band (8-12 Hz) before the onset of the RSVP has been linked to lower T2 accuracy. We therefore examined whether auditory rhythmic stimuli presented at a rate of 10 Hz (within the alpha band) could increase oscillatory alpha-band activity and improve T2 performance in the attentional blink time window. Behaviourally, the auditory rhythmic stimulation worked to enhance T2 accuracy. This enhanced perception was associated with increases in the posterior T2-evoked N2 component of the event-related potentials and this effect was observed selectively at lag 3. Frontal and posterior oscillatory alpha-band activity was also enhanced during auditory stimulation in the pre-RSVP period and positively correlated with T2 accuracy. These findings suggest that ongoing fluctuations can be shaped by sensorial events to improve the allocation of attention in time. PMID:26986506
Rauschecker, Josef P; Tian, Biao
The functional specialization and hierarchical organization of multiple areas in rhesus monkey auditory cortex were examined with various types of complex sounds. Neurons in the lateral belt areas of the superior temporal gyrus were tuned to the best center frequency and bandwidth of band-passed noise bursts. They were also selective for the rate and direction of linear frequency modulated sweeps. Many neurons showed a preference for a limited number of species-specifi...
Kermani, Hamed; Dehghani, Nima; Aghdashi, Farzad; Esmaeelinejad, Mohammad
Introduction: Fracture of the styloid process (SP) of the temporal bone is a rare traumatic injury in normal individuals who are not suffering from Eagle’s syndrome. Diagnosis and management of this problem requires comprehensive knowledge about its signs and symptoms. This study aimed to present an isolated styloid process fracture in a nonsyndromic patient. Case Presentation: A 50-year-old male patient was referred to our department with a complaint of sore throat. However, presentation of the problem resembled the symptoms of temporomandibular joint disorder (TMD). Fracture of the SP of the temporal bone was detected on the radiographs. Conservative treatment was undertaken for the patient. The symptoms diminished after about four months. Conclusions: Physicians should be aware of the signs and symptoms of different pain sources to prevent misdiagnosis and maltreatment.
Full Text Available Background and Aim: Specific language impairment (SLI), one variety of developmental language disorder, has attracted much interest in recent decades. Much research has been conducted to discover why some children have a specific language impairment, but so far it has failed to identify a cause for this linguistic deficit. Some researchers believe the language disorder causes defects in phonological working memory and affects auditory processing speed. Therefore, this study reviews the results of research investigating these two factors in children with specific language impairment. Recent Findings: Studies have shown that children with specific language impairment face constraints in phonological working memory capacity. Memory deficit is one possible cause of linguistic disorder in children with specific language impairment. However, in these children, a disorder in information processing speed is also observed, especially in the auditory domain. Conclusion: Much more research is required to adequately explain the relationship of phonological working memory and auditory processing speed with language. However, given the role of phonological working memory and auditory processing speed in language acquisition, a focus should be placed on phonological working memory capacity and auditory processing speed in the assessment and treatment of children with a specific language impairment.
Viola Andresen; Peter Kobelt; Claus Zimmer; Bertram Wiedenmann; Burghard F Klapp; Hubert Monnikes; Alexander Poellinger; Chedwa Tsrouya; Dominik Bach; Albrecht Stroh; Annette Foerschler; Petra Georgiewa; Marco Schmidtmann; Ivo R van der Voort
AIM: To determine by brain functional magnetic resonance imaging (fMRI) whether cerebral processing of non-visceral stimuli is altered in irritable bowel syndrome (IBS) patients compared with healthy subjects. To circumvent spinal viscerosomatic convergence mechanisms, we used auditory stimulation, and to identify a possible influence of psychological factors, the stimuli differed in their emotional quality. METHODS: In 8 IBS patients and 8 controls, fMRI measurements were performed using a block design of 4 auditory stimuli of different emotional quality (pleasant sounds of chimes, an unpleasant 2000 Hz peep, neutral words, and emotional words). A gradient echo T2*-weighted sequence was used for the functional scans. Statistical maps were constructed using the general linear model. RESULTS: To emotional auditory stimuli, IBS patients relative to controls responded with stronger deactivations in a greater variety of emotional processing regions, while their response patterns, unlike those of controls, did not differentiate between distressing and pleasant sounds. To neutral auditory stimuli, by contrast, only IBS patients responded with large significant activations. CONCLUSION: Altered cerebral response patterns to auditory stimuli in emotional stimulus-processing regions suggest that altered sensory processing in IBS may not be specific to visceral sensation, but might reflect generalized changes in emotional sensitivity and affective reactivity, possibly associated with the psychological comorbidity often found in IBS patients.
Full Text Available Temporal-order judgment (TOJ) tasks are an important paradigm for investigating the processing times of information in different modalities. Many studies have examined how temporal-order decisions can be influenced by stimulus characteristics. However, it has not yet been investigated whether the addition of a choice reaction time task influences temporal-order judgment. Moreover, it is not known at what point during processing the decision about the temporal order of two stimuli is made. We investigated the first of these two questions by comparing a regular TOJ task with a dual task. In both tasks, we manipulated different processing stages to investigate whether the manipulations influence temporal-order judgment and thereby to determine the point in processing at which the decision about temporal order is made. The results show that the addition of a choice reaction time task does influence temporal-order judgment, but the influence seems to be linked to the kind of manipulation of the processing stages that is used. The results of the manipulations indicate that the temporal-order decision in the dual-task paradigm is made after perceptual processing of the stimuli.
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
Both the middle temporal gyrus and the ventral anterior temporal area are crucial for multimodal semantic processing: Distortion-corrected fMRI evidence for a double gradient of information convergence in the temporal lobes.
M. Visser, E. Jefferies, K. Embleton, & M.A. Lambon Ralph
Most contemporary theories of semantic memory assume that concepts are formed from the distillation of information arising in distinct sensory and verbal modalities. The neural basis of this distillation or convergence of information was the focus of this study. Specifically, we explored two commonly posed hypotheses: (a) that the human middle temporal gyrus (MTG) provides a crucial semantic interface, given that it is interposed between the auditory and visual processing streams, and (b) that the an...
Thomson, Jennifer M.; Leong, Victoria; Goswami, Usha
The purpose of this study was to compare the efficacy of two auditory processing interventions for developmental dyslexia, one based on rhythm and one based on phonetic training. Thirty-three children with dyslexia participated and were assigned to one of three groups: (a) a novel rhythmic processing intervention designed to highlight auditory…
Jepsen, Morten Løve
A better understanding of how the human auditory system represents and analyzes sounds, and how hearing impairment affects such processing, is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument… It was shown that an accurate simulation of cochlear input-output functions, in addition to the audiogram, played a major role in accounting both for sensitivity and supra-threshold processing. Finally, the model was used as a front-end in a framework developed to predict consonant discrimination in a…
Wang, Hsiao-Lan S; Chen, I-Chen; Chiang, Chun-Han; Lai, Ying-Hui; Tsao, Yu
The current study examined the associations between basic auditory perception, speech prosodic processing, and vocabulary development in Chinese kindergartners, specifically, whether early basic auditory perception may be related to linguistic prosodic processing in Chinese Mandarin vocabulary acquisition. A series of language, auditory, and linguistic prosodic tests were given to 100 preschool children who had not yet learned how to read Chinese characters. The results suggested that lexical tone sensitivity and intonation production were significantly correlated with children's general vocabulary abilities. In particular, tone awareness was associated with comprehensive language development, whereas intonation production was associated with both comprehensive and expressive language development. Regression analyses revealed that tone sensitivity accounted for 36% of the unique variance in vocabulary development, whereas intonation production accounted for 6% of the variance in vocabulary development. Moreover, auditory frequency discrimination was significantly correlated with lexical tone sensitivity, syllable duration discrimination, and intonation production in Mandarin Chinese. It also contributed significantly to tone sensitivity and intonation production. Auditory frequency discrimination may indirectly affect early vocabulary development through Chinese speech prosody. PMID:27519239
Prather, Jonathan F.
Learning and maintaining the sounds we use in vocal communication require accurate perception of the sounds we hear performed by others and feedback-dependent imitation of those sounds to produce our own vocalizations. Understanding how the central nervous system integrates auditory and vocal-motor information to enable communication is a fundamental goal of systems neuroscience, and insights into the mechanisms of those processes will profoundly enhance clinical therapies for communication disorders. Gaining the high-resolution insight necessary to define the circuits and cellular mechanisms underlying human vocal communication is presently impractical. Songbirds are the best animal model of human speech, and this review highlights recent insights into the neural basis of auditory perception and feedback-dependent imitation in those animals. Neural correlates of song perception are present in auditory areas, and those correlates are preserved in the auditory responses of downstream neurons that are also active when the bird sings. Initial tests indicate that singing-related activity in those downstream neurons is associated with vocal-motor performance as opposed to the bird simply hearing itself sing. Therefore, action potentials related to auditory perception and action potentials related to vocal performance are co-localized in individual neurons. Conceptual models of song learning involve comparison of vocal commands and the associated auditory feedback to compute an error signal that is used to guide refinement of subsequent song performances, yet the sites of that comparison remain unknown. Convergence of sensory and motor activity onto individual neurons points to a possible mechanism through which auditory and vocal-motor signals may be linked to enable learning and maintenance of the sounds used in vocal communication. PMID:23827717
Gavin M. Bidelman
Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.
Mann, David A; Colbert, Debborah E; Gaspard, Joseph C; Casper, Brandon M; Cook, Mandy L H; Reep, Roger L; Bauer, Gordon B
Auditory evoked potentials (AEPs) of two Florida manatees (Trichechus manatus latirostris) were measured in response to amplitude-modulated tones. The AEP measurements showed weak responses to test stimuli from 4 kHz to 40 kHz. The manatee modulation rate transfer function (MRTF) is maximally sensitive to 150 and 600 Hz amplitude modulation (AM) rates. The 600 Hz AM rate is midway between the AM sensitivities of terrestrial mammals (chinchillas, gerbils, and humans; 80-150 Hz) and dolphins (1,000-1,200 Hz). Audiograms estimated from the input-output functions of the evoked potentials greatly underestimate behavioral hearing thresholds measured in two other manatees. This underestimation is probably due to the electrodes being located several centimeters from the brain. PMID:16001184
Christiansen, Thomas Ulrich; Dau, Torsten; Greenberg, Steven
Hearing – From Sensory Processing to Perception presents the papers of the latest "International Symposium on Hearing," a meeting held every three years focusing on psychoacoustics and research on the physiological mechanisms underlying auditory perception. The proceedings provide an up… physiological mechanisms of binaural processing in mammals; integration of the different stimulus features into auditory scene analysis; physiological mechanisms related to the formation of auditory objects; speech perception; and limitations of auditory perception resulting from hearing disorders.
Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical, synchronous, or variously asynchronous audiovisual events. We hypothesized that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG, and two bilaterally located activations in IFG and STG in both groups. Musicians, in comparison to non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula, and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results strongly indicate that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.
Krueger Fister, Juliane; Stevenson, Ryan A; Nidiffer, Aaron R; Barnett, Zachary P; Wallace, Mark T
One of the more challenging feats that multisensory systems must perform is to determine which sensory signals originate from the same external event, and thus should be integrated or "bound" into a singular perceptual object or event, and which signals should be segregated. Two important stimulus properties impacting this process are the timing and effectiveness of the paired stimuli. It has been well established that the more temporally aligned two stimuli are, the greater the degree to which they influence one another's processing. In addition, the less effective the individual unisensory stimuli are in eliciting a response, the greater the benefit when they are combined. However, the interaction between stimulus timing and stimulus effectiveness in driving multisensory-mediated behaviors has never been explored - which was the purpose of the current study. Participants were presented with either high- or low-intensity audiovisual stimuli in which stimulus onset asynchronies (SOAs) were parametrically varied, and were asked to report on the perceived synchrony/asynchrony of the paired stimuli. Our results revealed an interaction between the temporal relationship (SOA) and intensity of the stimuli. Specifically, individuals were more tolerant of larger temporal offsets (i.e., more likely to call them synchronous) when the paired stimuli were less effective. This interaction was also seen in response time (RT) distributions. Behavioral gains in RTs were seen with synchronous relative to asynchronous presentations, but this effect was more pronounced with high-intensity stimuli. These data suggest that stimulus effectiveness plays an underappreciated role in the perception of the timing of multisensory events, and reinforces the interdependency of the principles of multisensory integration in determining behavior and shaping perception. PMID:26920937
Selective attention has traditionally been viewed as a sensory processing modulator that promotes cognitive processing efficiency by favoring relevant stimuli while inhibiting irrelevant stimuli. However, the cross-modal processing of irrelevant information during working memory (WM) has rarely been investigated. In this study, the brain's modulation of irrelevant auditory information during a visual WM task was investigated. The N100 auditory evoked potential (N100-AEP) following an auditory click was used to evaluate selective attention to the auditory stimulus during WM processing and at rest. N100-AEP amplitudes were found to be significantly affected in the left-prefrontal, mid-prefrontal, right-prefrontal, left-frontal, and mid-frontal regions while performing a high-WM-load task. In contrast, no significant differences were found between N100-AEP amplitudes in WM states and rest states under a low-WM-load task in any of the recorded brain regions. Furthermore, no differences were found between the latencies of N100-AEP troughs in WM states and rest states while performing either the high- or low-WM-load task. These findings suggest that the prefrontal cortex (PFC) may integrate information from different sensory channels to protect perceptual integrity during cognitive processing.
Kempe, Vera; Thoresen, John C; Kirk, Neil W; Schaeffler, Felix; Brooks, Patricia J
This study examined whether rapid temporal auditory processing, verbal working memory capacity, non-verbal intelligence, executive functioning, musical ability and prior foreign language experience predicted how well native English speakers (N=120) discriminated Norwegian tonal and vowel contrasts as well as a non-speech analogue of the tonal contrast and a native vowel contrast presented over noise. Results confirmed a male advantage for temporal and tonal processing, and also revealed that temporal processing was associated with both non-verbal intelligence and speech processing. In contrast, effects of musical ability on non-native speech-sound processing and of inhibitory control on vowel discrimination were not mediated by temporal processing. These results suggest that individual differences in non-native speech-sound processing are to some extent determined by temporal auditory processing ability, in which males perform better, but are also determined by a host of other abilities that are deployed flexibly depending on the characteristics of the target sounds. PMID:23139806
Robson, Holly; Grube, Manon; Lambon Ralph, Matthew; Griffiths, Timothy; Sage, Karen
Objective: This work investigates the nature of the comprehension impairment in Wernicke’s aphasia, by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. Wernicke’s aphasia, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated wit...
Wallach, Geraldine P.
Purpose: This article addresses auditory processing disorder (APD) from a language-based perspective. The author asks speech-language pathologists to evaluate the functionality (or not) of APD as a diagnostic category for children and adolescents with language-learning and academic difficulties. Suggestions are offered from a…
Miller, Carol A.; Wagstaff, David A.
Purpose: To describe and compare behavioral profiles associated with auditory processing disorder (APD) and specific language impairment (SLI) in school-age children. Method: The participants in this cross-sectional observational study were 64 children (mean age 10.1 years) recruited through clinician referrals. Thirty-five participants had a…
Loo, Jenny Hooi Yin; Bamiou, Doris-Eva; Rosen, Stuart
Purpose: To examine the impact of language background and language-related disorders (LRDs--dyslexia and/or language impairment) on performance in English speech and nonspeech tests of auditory processing (AP) commonly used in the clinic. Method: A clinical database concerning 133 multilingual children (mostly with English as an additional…
Auditory processing disorders (APDs) are of interest to educators and clinicians, as they impact school functioning. Little work has been completed to demonstrate how children with APDs perform on clinical tests. In a series of studies, standard clinical (psychometric) tests from the Wechsler Intelligence Scale for Children, Fourth Edition…
Background: Otitis media (OM) leads to a significant reduction in hearing sensitivity. The reduced auditory input, if it occurs in the early years of life while the auditory neural system is still maturing, may adversely influence the structural as well as functional development of the system. Past research has reported abnormalities in both the structure and function of brainstem nuclei following auditory deprivation, but it has not necessarily focused on children who had OM in their first year of life. Moreover, if auditory processing is affected at the brainstem level because of early-onset OM (reduced auditory input during crucial periods of neural development), then auditory processing may also be affected at the cortical level, because the cortex receives distorted input from the brainstem. Therefore, the purpose of this study was to document the effects of early-onset OM on auditory processing, if any, at the brainstem as well as cortical levels. A related purpose was to investigate the persistence of any such effects. Methods: A cross-sectional approach and a standard group-comparison design were used. Thirty children who had OM between 6 and 12 months of age and who were aged 3.1-5.6 years participated in the study. Children with OM were divided into three groups based on their age. Click-evoked auditory brainstem responses (ABRs) and late latency responses (LLRs) were recorded from these children, and the responses were compared with those from age- and gender-matched normal children without any history of OM. The data from the two groups were statistically analyzed using independent t tests. Pearson's product-moment correlation was computed to examine the relationship between ABR and LLR results in children with early-onset OM. Results: The mean central conduction time was significantly increased and the mean amplitude of wave I
Henkin, Yael; Yaar-Soffer, Yifat; Steinberg, Meidan; Muchnik, Chava
With the growing number of older adults receiving cochlear implants (CI), there is general agreement that substantial benefits can be gained. Nonetheless, variability in speech perception performance is high, and the relative contribution and interactions among peripheral, central-auditory, and cognitive factors are not fully understood. The goal of the present study was to compare auditory-cognitive processing in older-adult CI recipients with that of older normal-hearing (NH) listeners by means of behavioral and electrophysiologic manifestations of a high-load cognitive task. Auditory event-related potentials (AERPs) were recorded from 9 older postlingually deafened adults with CI (age at CI >60) and 10 age-matched listeners with NH, while performing an auditory Stroop task. Participants were required to classify the speaker's gender (male/female) that produced the words 'mother' or 'father' while ignoring the irrelevant congruent or incongruent word meaning. Older CI and NH listeners exhibited comparable reaction time, performance accuracy, and initial sensory-perceptual processing (i.e. N1 potential). Nonetheless, older CI recipients showed substantially prolonged and less efficient perceptual processing (i.e. P3 potential). Congruency effects manifested in longer reaction time (i.e. Stroop effect), execution time, and P3 latency to incongruent versus congruent stimuli in both groups in a similar fashion; however, markedly prolonged P3 and shortened execution time were evident in older CI recipients. Collectively, older adults (CI and NH) employed a combined perceptual and postperceptual conflict processing strategy; nonetheless, the relative allotment of perceptual resources was substantially enhanced to maintain adequate performance in CI recipients. In sum, the recording of AERPs together with the simultaneously obtained behavioral measures during a Stroop task exposed a differential time course of auditory-cognitive processing in older CI recipients that
Maria Luisa Lorusso
The nature of Rapid Auditory Processing (RAP) deficits in dyslexia remains debated, together with the specificity of the problem to certain types of stimuli and/or restricted subgroups of individuals. Following the hypothesis that the heterogeneity of the dyslexic population may have led to contrasting results, the aim of the study was to define the effect of age, dyslexia subtype, and comorbidity on the discrimination and reproduction of nonverbal tone sequences. Participants were 46 children aged 8-14 (26 with dyslexia), subdivided according to age, presence of a previous language delay, and type of dyslexia. Experimental tasks were a Temporal Order Judgment (TOJ) task (manipulating tone length, ISI, and sequence length) and a Pattern Discrimination task. Dyslexic children showed general RAP deficits. Tone length and ISI influenced dyslexic and control children’s performance in a similar way, but dyslexic children were more affected by an increase from 2 to 5 sounds. As to age, older dyslexic children’s difficulty in reproducing sequences of 4 and 5 tones was similar to that of normally reading younger (but not older) children. In the analysis of subgroup profiles, the crucial variable appears to be the advantage, or lack thereof, in processing long vs. short sounds. Dyslexic children with a previous language delay obtained the lowest scores in RAP measures, but they performed worse with shorter stimuli, similar to control children, while dyslexic-only children showed no advantage for longer stimuli. As to dyslexia subtype, only surface dyslexics improved their performance with longer stimuli, while phonological dyslexics did not. Differential scores for short vs. long tones and for long vs. short ISIs predict nonword and word reading, respectively, and the former correlate with phonemic awareness. In conclusion, the relationship between nonverbal RAP, phonemic skills, and reading abilities appears to be characterized by complex interactions with
Sanchez, Jason T.; Seidl, Armin H.; Rubel, Edwin W.; Barria, Andres
The chicken auditory brainstem is a well-established model system that has been widely used to study the anatomy and physiology of auditory processing at discrete periods of development [1-4], as well as mechanisms for temporal coding in the central nervous system [5-7].
Purdy, Suzanne C; Wanigasekara, Iruni; Cañete, Oscar M; Moore, Celia; McCann, Clare M
Aphasia is an acquired language impairment affecting speaking, listening, reading, and writing. Aphasia occurs in about a third of patients who have ischemic stroke and significantly affects functional recovery and return to work. Stroke is more common in older individuals but also occurs in young adults and children. Because people experiencing a stroke are typically aged between 65 and 84 years, hearing loss is common and can potentially interfere with rehabilitation. There is some evidence for increased risk and greater severity of sensorineural hearing loss in the stroke population and hence it has been recommended that all people surviving a stroke should have a hearing test. Auditory processing difficulties have also been reported poststroke. The International Classification of Functioning, Disability and Health (ICF) can be used as a basis for describing the effect of aphasia, hearing loss, and auditory processing difficulties on activities and participation. Effects include reduced participation in activities outside the home such as work and recreation and difficulty engaging in social interaction and communicating needs. A case example of a young man (M) in his 30s who experienced a left-hemisphere ischemic stroke is presented. M has normal hearing sensitivity but has aphasia and auditory processing difficulties based on behavioral and cortical evoked potential measures. His principal goal is to return to work. Although auditory processing difficulties (and hearing loss) are acknowledged in the literature, clinical protocols typically do not specify routine assessment. The literature and the case example presented here suggest a need for further research in this area and a possible change in practice toward more routine assessment of auditory function post-stroke. PMID:27489401
Kawamata, Masaru; Kirino, Eiji; Inoue, Reiichi; Arai, Heii
The goal of this study was to explore the generation mechanism of the frontal-midline theta rhythm (Fm theta) by employing event-related desynchronization/synchronization (ERD/ERS) analysis in relation to task-irrelevant external stimuli. A dual paradigm was employed: a videogame and the simultaneous presentation of passive auditory oddball stimuli. We analyzed the ERD/ERS data using both Fast Fourier Transformation (FFT) and the wavelet transform (WT). In the FFT data, during periods when Fm theta appeared, apparent ERD of the theta band was observed at Fz and Cz. ERD when Fm theta was present was much more prominent than when Fm theta was absent. In the WT data, as in the FFT data, ERD was seen again, but in this case the ERD was preceded by ERS during periods both with and without Fm theta. Furthermore, the WT analysis indicated that ERD was followed by ERS during periods without Fm theta. However, during Fm theta, no apparent ERS following ERD was seen. In our study, Fm theta was desynchronized by auditory stimuli that were independent of the videogame task used to evoke the Fm theta. The ERD of Fm theta might reflect a mechanism of "positive suppression" that processes external auditory stimuli automatically and prevents attentional resources from being unnecessarily allocated to those stimuli. Another possibility is that Fm theta induced by our dual paradigm may reflect information processing modeled by multi-item working memory requirements for playing the videogame and the simultaneous auditory processing using a memory trace. ERS in the WT data without Fm theta might indicate further processing of the auditory information free from the "positive suppression" control reflected by Fm theta. PMID:17993201
Ronconi, Luca; Pincham, Hannah L; Szűcs, Dénes; Facoetti, Andrea
Our ability to allocate attention at different moments in time can sometimes fail to select stimuli occurring in close succession, preventing visual information from reaching awareness. This so-called attentional blink (AB) occurs when the second of two targets (T2) is presented closely after the first (T1) in a rapid serial visual presentation (RSVP). We hypothesized that entrainment to a rhythmic stream of stimuli, presented before the visual targets appear, would reduce the AB. Experiment 1 tested the effect of auditory entrainment by presenting sounds with a regular or irregular interstimulus interval prior to a RSVP in which T1 and T2 were separated by three possible lags (1, 3 and 8). Experiment 2 examined visual entrainment by presenting visual stimuli in place of auditory stimuli. Results revealed that, irrespective of sensory modality, arrhythmic stimuli preceding the RSVP triggered an alerting effect that improved T2 identification at lag 1, but impaired recovery from the AB at lag 8. Importantly, only auditory rhythmic entrainment was effective in reducing the AB at lag 3. Our findings demonstrate that manipulating the pre-stimulus condition can reduce deficits in temporal attention characterizing the human cognitive architecture, suggesting innovative training approaches for acquired and neurodevelopmental disorders. PMID:26215434
Ellis, Robert J.; Zhiyan Duan; Ye Wang
"Moving to the beat" is both one of the most basic and one of the most profound means by which humans (and a few other species) interact with music. Computer algorithms that detect the precise temporal location of beats (i.e., pulses of musical "energy") in recorded music have important practical applications, such as the creation of playlists with a particular tempo for rehabilitation (e.g., rhythmic gait training), exercise (e.g., jogging), or entertainment (e.g., continuous dance mixes). A...
Paya, I.; Peel, D.
Nonlinear models of deviations from PPP have recently provided an important, theoretically well motivated, contribution to the PPP puzzle. Most of these studies use temporally aggregated data to empirically estimate the nonlinear models. As noted by Taylor (2001), if the true DGP is nonlinear, the temporally aggregated data could exhibit misleading properties regarding the adjustment speeds. We examine the effects of different levels of temporal aggregation on estimates of ESTAR models of rea...
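A rough sketch of the setup under discussion, assuming a simple ESTAR specification with illustrative parameter values: simulate at a fine ("monthly") frequency, aggregate by non-overlapping averaging, and compare crude AR(1) persistence estimates at the two frequencies:

```python
import math, random

def estar_step(y, rho=0.9, theta=5.0, sigma=0.1, rng=random):
    """One step of a simple ESTAR process:
    y_t = y_{t-1} + (rho - 1) * G(y_{t-1}) * y_{t-1} + eps_t,
    with transition G(y) = 1 - exp(-theta * y**2): almost no adjustment
    near equilibrium, near-linear mean reversion for large deviations."""
    g = 1.0 - math.exp(-theta * y * y)
    return y + (rho - 1.0) * g * y + rng.gauss(0.0, sigma)

def aggregate(series, k=3):
    """Temporal aggregation: non-overlapping k-period averages."""
    return [sum(series[i:i + k]) / k for i in range(0, len(series) - k + 1, k)]

def ar1(series):
    """No-intercept OLS slope of y_t on y_{t-1}: a crude persistence measure."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

rng = random.Random(1)
y, monthly = 0.0, []
for _ in range(6000):                 # 500 "years" of monthly observations
    y = estar_step(y, rng=rng)
    monthly.append(y)
quarterly = aggregate(monthly)        # 2000 quarterly averages
print(round(ar1(monthly), 3), round(ar1(quarterly), 3))
```

Comparing ar1(quarterly) against ar1(monthly)**3 gives a feel for how averaging can make the measured adjustment speed differ from that of the underlying DGP.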
Hertz, Uri; Amedi, Amir
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas, and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli...
Eggermont, Jos J.; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg
Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that ...
Klein, David J; Simon, Jonathan Z; Depireux, Didier A; Shamma, Shihab A
The spectrotemporal receptive field (STRF) provides a versatile and integrated spectral and temporal functional characterization of single cells in primary auditory cortex (AI). In this paper, we explore the origin of, and relationship between, different ways of measuring and analyzing an STRF. We demonstrate that STRFs measured using a spectrotemporally diverse array of broadband stimuli - such as dynamic ripples, spectrotemporally white noise, and temporally orthogonal ripple combinations (TORCs) - are very similar, confirming earlier findings that the STRF is a robust linear descriptor of the cell. We also present a new deterministic analysis framework that employs the Fourier series to describe the spectrotemporal modulations contained in the stimuli and responses. Additional insights into the STRF measurements, including the nature and interpretation of measurement errors, are presented using the Fourier transform, coupled to singular-value decomposition (SVD), and variability analyses including bootstrap. The results promote the utility of the STRF as a core functional descriptor of neurons in AI. PMID:16518572
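The logic of estimating an STRF from spectrotemporally rich stimuli can be sketched with a reverse-correlation toy example. This uses a white-noise stimulus and a noiseless linear cell rather than the paper's ripple stimuli, and the dimensions and "true" separable STRF are invented for illustration:

```python
import random

random.seed(7)
F, D, N = 8, 5, 20000                      # frequency channels, delays, samples
temporal = [0.0, 1.0, 0.5, -0.3, 0.0]      # invented temporal profile
spectral = [0.1, 0.4, 1.0, 0.4, 0.1, 0.0, 0.0, 0.0]  # invented spectral profile
true_strf = [[temporal[d] * spectral[f] for f in range(F)] for d in range(D)]

# spectrotemporally white stimulus and the cell's noiseless linear response
stim = [[random.uniform(-1.0, 1.0) for _ in range(F)] for _ in range(N)]
resp = [sum(true_strf[d][f] * stim[t - d][f]
            for d in range(D) for f in range(F)) if t >= D else 0.0
        for t in range(N)]

# reverse correlation: response-weighted stimulus average, scaled by 1/variance
var = 1.0 / 3.0                            # variance of uniform(-1, 1)
est = [[sum(resp[t] * stim[t - d][f] for t in range(D, N)) / ((N - D) * var)
        for f in range(F)] for d in range(D)]

def corr(a, b):
    """Pearson correlation between two flattened matrices."""
    xs = [v for row in a for v in row]
    ys = [v for row in b for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

print(round(corr(true_strf, est), 2))
```

Because the true STRF here is an outer product of a temporal and a spectral profile, an SVD of the estimate would recover one dominant singular value, which is the separability diagnostic the paper couples to its Fourier analysis.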
Jerger, James; Martin, Jeffrey; McColl, Roderick
In a previous publication (Jerger et al, 2002), we presented event-related potential (ERP) data on a pair of 10-year-old twin girls (Twins C and E), one of whom (Twin E) showed strong evidence of auditory processing disorder. For the present paper, we analyzed cross-correlation functions of ERP waveforms generated in response to the presentation of target stimuli to either the right or left ears in a dichotic paradigm. There were four conditions; three involved the processing of real words for either phonemic, semantic, or spectral targets; one involved the processing of a nonword acoustic signal. Marked differences in the cross-correlation functions were observed. In the case of Twin C, cross-correlation functions were uniformly normal across both hemispheres. The functions for Twin E, however, suggest poorly correlated neural activity over the left parietal region during the three word processing conditions, and over the right parietal area in the nonword acoustic condition. Differences between the twins' brains were evaluated using diffusion tensor magnetic resonance imaging (DTI). For Twin E, results showed reduced anisotropy over the length of the midline corpus callosum and adjacent lateral structures, implying reduced myelin integrity. Taken together, these findings suggest that failure to achieve appropriate temporally correlated bihemispheric brain activity in response to auditory stimulation, perhaps as a result of faulty interhemispheric communication via corpus callosum, may be a factor in at least some children with auditory processing disorder. PMID:15030103
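The kind of waveform cross-correlation analysis described here can be sketched as follows (synthetic damped-sine "ERPs" stand in for recorded epochs, and the lag range is an arbitrary toy choice):

```python
import math

def xcorr_peak(x, y, max_lag):
    """Normalized cross-correlation between two waveforms; returns
    (peak correlation, lag of the peak in samples)."""
    e = math.sqrt(sum(v * v for v in x) * sum(v * v for v in y))
    best = (-2.0, 0)
    for lag in range(-max_lag, max_lag + 1):
        c = sum(x[i] * y[i + lag] for i in range(len(x))
                if 0 <= i + lag < len(y))
        best = max(best, (c / e, lag))
    return best

n = 300  # samples per waveform (toy value)
erp1 = [math.sin(2 * math.pi * i / 100) * math.exp(-i / 150) for i in range(n)]
erp2 = [0.0] * 5 + [0.8 * v for v in erp1[:-5]]   # delayed, attenuated copy
r, lag = xcorr_peak(erp1, erp2, 20)
print(round(r, 2), lag)
```

A high peak correlation at a small lag indicates well-synchronized activity between the two recording sites; poorly correlated neural activity, as reported for Twin E, would show a flat, low cross-correlation function instead.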
Stevenson, Ryan A.; Wilson, Magdalena M.; Powers, Albert R.; Wallace, Mark T.
The importance of multisensory integration for human behavior and perception is well documented, as is the impact that temporal synchrony has on driving such integration: the more temporally coincident two sensory inputs from different modalities are, the more likely they are to be perceptually bound. This temporal integration process is captured by the construct of the temporal binding window, the range of temporal offsets within which an individual is able to perceptually bind inputs across sensory modalities. Recent work has shown that this window is malleable and can be narrowed via a multisensory perceptual feedback training process. In the current study, we sought to extend this work by examining the malleability of the multisensory temporal binding window through changes in unisensory experience. Specifically, we measured the ability of visual perceptual feedback training to induce changes in the multisensory temporal binding window. Visual perceptual training with feedback successfully improved temporal visual processing and, more importantly, increased temporal precision across modalities, which manifested as a narrowing of the multisensory temporal binding window. These results are the first to establish the ability of unisensory temporal training to modulate multisensory temporal processes, findings that can provide mechanistic insights into multisensory integration and may have a host of practical applications. PMID:23307155
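A minimal sketch of how a temporal binding window can be summarized, assuming a simulated Gaussian-shaped simultaneity-judgment curve (the SOA range and the 150 ms width are illustrative, not the study's data):

```python
import numpy as np

soas = np.arange(-400, 401, 50)               # ms; negative = auditory first
# Simulated proportion of "synchronous" reports per SOA (Gaussian-shaped).
p_sync = np.exp(-soas.astype(float) ** 2 / (2 * 150.0 ** 2))

# One common summary: the span of SOAs where "synchronous" reports exceed 50%.
above = soas[p_sync >= 0.5]
tbw_width = above.max() - above.min()         # width of the binding window, ms
print(tbw_width)
```

A narrowing of the window after training would appear as a smaller `tbw_width` on the post-training judgment curve.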
Lu, Xuejing; Ho, Hao T; Sun, Yanan; Johnson, Blake W; Thompson, William F
While most normal hearing individuals can readily use prosodic information in spoken language to interpret the moods and feelings of conversational partners, people with congenital amusia report that they often rely more on facial expressions and gestures, a strategy that may compensate for deficits in auditory processing. In this investigation, we used EEG to examine the extent to which individuals with congenital amusia draw upon visual information when making auditory or audio-visual judgments. Event-related potentials (ERPs) were elicited by a change in pitch (up or down) between two sequential tones paired with a change in spatial position (up or down) between two visually presented dots. The change in dot position was either congruent or incongruent with the change in pitch. Participants were asked to judge (1) the direction of pitch change while ignoring the visual information (AV implicit task), and (2) whether the auditory and visual changes were congruent (AV explicit task). In the AV implicit task, amusic participants performed significantly worse in the incongruent condition than control participants. ERPs showed an enhanced N2-P3 response to incongruent AV pairings for control participants, but not for amusic participants. However, when participants were explicitly directed to detect AV congruency, both groups exhibited enhanced N2-P3 responses to incongruent AV pairings. These findings indicate that amusics are capable of extracting information from both modalities in an AV task but are biased to rely on visual information when it is available, presumably because they have learned that auditory information is unreliable. We conclude that amusic individuals implicitly draw upon visual information when judging auditory information, even though they have the capacity to explicitly recognize conflicts between these two sensory channels. PMID:27132045
Stefan Koelsch; Daniela Sammler
BACKGROUND: Music-syntactic irregularities often co-occur with the processing of physical irregularities. In this study we constructed chord sequences such that perceived differences in the cognitive processing of regular and irregular chords could not be due to the sensory processing of acoustic factors like pitch repetition or pitch commonality (the major component of 'sensory dissonance'). METHODOLOGY/PRINCIPAL FINDINGS: Two groups of subjects (musicians and nonmusicians) were investigated with electroencephalography (EEG). Irregular chords elicited an early right anterior negativity (ERAN) in the event-related brain potentials (ERPs). The ERAN had a latency of around 180 ms after the onset of the music-syntactically irregular chords and maximum amplitude values over right anterior electrode sites. CONCLUSIONS/SIGNIFICANCE: Because irregular chords were hardly detectable on the basis of acoustical factors (such as pitch repetition and sensory dissonance), this ERAN effect reflects for the most part cognitive (not sensory) components of regularity-based, music-syntactic processing. Our study represents a methodological advance over previous ERP studies investigating the neural processing of music-syntactically irregular chords.
Zhao, T. Christina; Kuhl, Patricia K.
Individuals with music training in early childhood show enhanced processing of musical sounds, an effect that generalizes to speech processing. However, the conclusions drawn from previous studies are limited due to the possible confounds of predisposition and other factors affecting musicians and nonmusicians. We used a randomized design to test the effects of a laboratory-controlled music intervention on young infants’ neural processing of music and speech. Nine-month-old infants were randomly assigned to music (intervention) or play (control) activities for 12 sessions. The intervention targeted temporal structure learning using triple meter in music (e.g., waltz), which is difficult for infants, and it incorporated key characteristics of typical infant music classes to maximize learning (e.g., multimodal, social, and repetitive experiences). Controls had similar multimodal, social, repetitive play, but without music. Upon completion, infants’ neural processing of temporal structure was tested in both music (tones in triple meter) and speech (foreign syllable structure). Infants’ neural processing was quantified by the mismatch response (MMR) measured with a traditional oddball paradigm using magnetoencephalography (MEG). The intervention group exhibited significantly larger MMRs in response to music temporal structure violations in both auditory and prefrontal cortical regions. Identical results were obtained for temporal structure changes in speech. The intervention thus enhanced temporal structure processing not only in music, but also in speech, at 9 mo of age. We argue that the intervention enhanced infants’ ability to extract temporal structure information and to predict future events in time, a skill affecting both music and speech processing. PMID:27114512
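The oddball/mismatch-response logic described above can be illustrated with a sketch (simulated single-trial data, not the study's MEG recordings; the deviant proportion and latencies are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

n_trials, n_samples = 400, 120
is_deviant = rng.random(n_trials) < 0.15          # ~15% rare deviants
t = np.arange(n_samples)
standard_erp = np.exp(-((t - 40) / 10.0) ** 2)    # response to standards
deviant_erp = standard_erp + 0.8 * np.exp(-((t - 60) / 12.0) ** 2)  # extra MMR

trials = np.where(is_deviant[:, None], deviant_erp, standard_erp)
trials += 0.5 * rng.standard_normal(trials.shape)  # single-trial noise

# Mismatch response: deviant-minus-standard difference wave.
mmr = trials[is_deviant].mean(axis=0) - trials[~is_deviant].mean(axis=0)
peak_latency = int(np.argmax(mmr))
print(peak_latency)  # near sample 60, where the deviant response diverges
```

A larger MMR amplitude, as reported for the intervention group, corresponds to a larger peak in this difference wave.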
Thomson, Jennifer M; Goswami, Usha
Potential links between the language and motor systems in the brain have long attracted the interest of developmental psychologists. In this paper, we investigate a link often observed between motor tapping and written language skills (e.g., Wolff, P.H., 2002. Timing precision and rhythm in developmental dyslexia. Reading and Writing, 15(1), 179-206). We measure rhythmic finger tapping (paced by a metronome beat versus unpaced), motor dexterity, and phonological and auditory processing in 10-year-old children, some of whom had a diagnosis of developmental dyslexia. We report links between paced motor tapping, auditory rhythmic processing, and written language development. Motor dexterity does not explain these relationships. In regression analyses, paced finger tapping explained unique variance in reading and spelling. An interpretation based on the importance of rhythmic timing for both motor skills and language development is proposed. PMID:18448317
Fatemeh Haresabadi; Tahereh Sima Shirazi
Background and Aim: Specific language impairment (SLI), one variety of developmental language disorder, has attracted much interest in recent decades. Much research has been conducted to discover why some children have a specific language impairment. So far, research has failed to identify a reason for this linguistic deficiency. Some researchers believe language disorder causes defects in phonological working memory and affects auditory processing speed. Therefore, this study reviews the res...
Yoder, Kathleen M; Phan, Mimi L; Lu, Kai; Vicario, David S
Songbirds learn individually unique songs through vocal imitation and use them in courtship and territorial displays. Previous work has identified a forebrain auditory area, the caudomedial nidopallium (NCM), that appears specialized for discriminating and remembering conspecific vocalizations. In zebra finches (ZFs), only males produce learned vocalizations, but both sexes process these and other signals. This study assessed sex differences in auditory processing by recording extracellular multiunit activity at multiple sites within NCM. Juvenile female ZFs (n = 46) were reared in individual isolation and artificially tutored with song. In adulthood, songs were played back to assess auditory responses, stimulus-specific adaptation, neural bias for conspecific song, and memory for the tutor's song, as well as recently heard songs. In a subset of females (n = 36), estradiol (E2) levels were manipulated to test the contribution of E2, known to be synthesized in the brain, to auditory responses. Untreated females (n = 10) showed significant differences in response magnitude and stimulus-specific adaptation compared to males reared in the same paradigm (n = 9). In hormone-manipulated females, E2 augmentation facilitated the memory for recently heard songs in adulthood, but neither E2 augmentation (n = 15) nor E2 synthesis blockade (n = 9) affected tutor song memory or the neural bias for conspecific song. The results demonstrate subtle sex differences in processing communication signals, and show that E2 levels in female songbirds can affect the memory for songs of potential suitors, thus contributing to the process of mate selection. The results also have potential relevance to clinical interventions that manipulate E2 in human patients. PMID:25220950
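The stimulus-specific adaptation measure mentioned above can be illustrated with a simple sketch (simulated response magnitudes; the decline rate and trial count are hypothetical, not the study's recordings):

```python
import numpy as np

rng = np.random.default_rng(2)

trials = np.arange(1, 26)                       # 25 repetitions of one song
response = 1.0 - 0.015 * (trials - 1)           # gradual response decline
response += 0.02 * rng.standard_normal(trials.size)

# Adaptation rate: slope of normalized response magnitude vs. trial number
# (a negative slope indicates stimulus-specific adaptation).
slope, intercept = np.polyfit(trials, response, 1)
print(f"adaptation rate: {slope:.4f} per trial")
```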
Hendrik Santosa; Melissa Jiyoun Hong; Keum-Shik Hong
The present study aims to determine the effects of background noise on hemispheric lateralization in music processing by exposing fourteen subjects to four different auditory environments: music segments only, noise segments only, music + noise segments, and the entire music piece interfered with by noise segments. The hemodynamic responses in both hemispheres caused by the perception of music in 10 different conditions were measured using functional near-infrared spectroscopy. As a feature to distingui...
Paulina C. Murphy-Ruiz; Yolanda R. Penaloza-Lopez; Felipe Garcia-Pedroza; Adrian Poblano
Objective: We hypothesized that if right-hemisphere auditory processing abilities are altered in children with developmental dyslexia (DD), we can detect the dysfunction using specific tests. Method: We performed an analytical, comparative, cross-sectional study of 20 right-handed children with DD and 20 healthy right-handed control subjects (CS). Children in both groups were matched for age, gender, and school grade. Focusing on the right hemisphere’s contribution, we utilized tests to...
Woolley, Sarah M. N.; Moore, Jordan M.
Communication is a strong selective pressure on brain evolution because the exchange of information between individuals is crucial for fitness-related behaviors, such as mating. Given the importance of communication, the brains of signal senders and receivers are likely to be functionally coordinated. We study vocal behavior and auditory processing in multiple species of estrildid finches with the goal of understanding how species identity and early experience interact to shape the neural sys...
Tuomainen, O. T.
This thesis investigates auditory and speech processing in Specific Language Impairment (SLI) and dyslexia. One influential theory postulates that both SLI and dyslexia stem from a similar underlying sensory deficit that impacts speech perception and phonological development, leading to oral language and literacy deficits. Previous studies, however, have shown that these underlying sensory deficits exist in only a subgroup of language-impaired individuals, and ...
Kling, A S; Lloyd, R L; Perryman, K M
Radiotelemetric recordings of slow wave activity of the amygdala were made under a variety of conditions. Power, and the percentage of power in the delta band, increased in response to stimulation. Recordings of monkey vocalizations and slides of ethologically relevant, natural objects produced a greater increase in power than did control stimuli. The responses to auditory stimuli increased when these stimuli were presented in an unrestrained, group setting, yet the responses to the vocalizations remained greater than those following control stimuli. Both the natural auditory and visual stimuli produced a reliable hierarchy with regard to the magnitude of response. Following lesions of inferior temporal cortex, these two hierarchies were disrupted, especially in the auditory domain. Further, these same stimuli, when presented after the lesion, produced a decrease, rather than an increase, in power. Nevertheless, the power recorded from the natural stimuli was still greater than that recorded from control stimuli, in that the former produced less of a decrease in power following the lesion than did the latter. These data, in conjunction with a parallel report on evoked potentials in the amygdala before and after cortical lesions, lead us to conclude that the sensory information, particularly auditory, available to the amygdala following the lesion is substantially the same, and that it is the interpretation of this information by the amygdala that is altered by the cortical lesion. PMID:3566692
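The two spectral quantities tracked above, total power and the share of power in the delta band, can be computed from a recorded signal as follows (simulated signal; the sampling rate and band edges are assumptions, not the study's parameters):

```python
import numpy as np

fs = 250.0                                    # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)                  # 10 s of signal
# Simulated signal: strong 2 Hz (delta) component plus a weaker 20 Hz one.
sig = 2.0 * np.sin(2 * np.pi * 2 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

freqs = np.fft.rfftfreq(len(sig), 1 / fs)
psd = np.abs(np.fft.rfft(sig)) ** 2           # unnormalized power spectrum

total_power = psd[freqs > 0].sum()
delta = (freqs >= 0.5) & (freqs <= 4.0)       # delta band taken as 0.5-4 Hz
delta_fraction = psd[delta].sum() / total_power
print(f"{100 * delta_fraction:.1f}% of power in the delta band")
```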
Li, Linfeng; Gong, Qin
The aim of the present study was to investigate both the encoding mechanism and the process of deviance detection when deviant stimuli were presented in various patterns in an environment featuring repetitive sounds. In adults with normal hearing, middle latency responses were recorded within an oddball paradigm containing complex tones or speech sounds, wherein deviant stimuli featured different change patterns. For both complex tones and speech sounds, the Na and Pa components of middle latency responses showed an increase in the mean amplitude and a reduction in latency when comparing rare deviant stimuli with repetitive standard stimuli in a stimulation block. However, deviant stimuli with a rising frequency induced signals with smaller amplitudes than other deviant stimuli. The present findings indicate that deviant stimuli with different change patterns induce differing responses in the primary auditory cortex. In addition, the Pa components of speech sounds typically feature a longer latency and similar mean amplitude compared with complex tones, which suggests that the auditory system requires more complex processing for the analysis of speech sounds before processing in the auditory cortex. PMID:27203294
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.
Donohue, Sarah E.; Woldorff, Marty G.; Mitroff, Stephen R.
Recent research has demonstrated enhanced visual attention and visual perception in individuals with extensive experience playing action video games. These benefits manifest in several realms, but much remains unknown about the ways in which video game experience alters perception and cognition. The current study examined whether video game players’ benefits generalize beyond vision to multisensory processing by presenting video game players and non-video game players with auditory and visual stim...