Kozlov, Andrei S; Gentner, Timothy Q
High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, the European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes. PMID:26787894
We present a theory by which idealized models of auditory receptive fields can be derived in a principled axiomatic manner, from a set of structural properties to (i) enable invariance of receptive field responses under natural sound transformations and (ii) ensure internal consistency between spectro-temporal receptive fields at different temporal and spectral scales. For defining a time-frequency transformation of a purely temporal sound signal, it is shown that the framework allows for a new way of deriving the Gabor and Gammatone filters as well as a novel family of generalized Gammatone filters, with additional degrees of freedom to obtain different trade-offs between the spectral selectivity and the temporal delay of time-causal temporal window functions. When applied to the definition of a second layer of receptive fields from a spectrogram, it is shown that the framework leads to two canonical families of spectro-temporal receptive fields, in terms of spectro-temporal derivatives of either spectro-temporal Gaussian kernels for non-causal time or a cascade of time-causal first-order integrators over the temporal domain and a Gaussian filter over the log-spectral domain. For each filter family, the spectro-temporal receptive fields can be either separable over the time-frequency domain or adapted to local glissando transformations that represent variations in logarithmic frequencies over time. Within each domain of either non-causal or time-causal time, these receptive field families are derived by uniqueness from the assumptions. It is demonstrated how the presented framework allows for computation of basic auditory features for audio processing and that it leads to predictions about auditory receptive fields with good qualitative similarity to biological receptive fields measured in the inferior colliculus (ICC) and primary auditory cortex (A1) of mammals.
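As a concrete illustration of the Gammatone filter family discussed above, the following is a minimal sketch of an order-n Gammatone impulse response. The parameter values (order, bandwidth, sampling rate, duration) are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def gammatone_ir(f_c, n=4, b=100.0, fs=16000, dur=0.05):
    """Impulse response of an order-n Gammatone filter:
    g(t) = t**(n-1) * exp(-2*pi*b*t) * cos(2*pi*f_c*t),
    where f_c is the center frequency (Hz) and b a bandwidth
    parameter (Hz). Peak-normalized for plotting/comparison."""
    t = np.arange(int(dur * fs)) / fs
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f_c * t)
    return g / np.max(np.abs(g))

# Example: a 1 kHz channel; larger n or b trades spectral
# selectivity against temporal delay, as described above.
ir = gammatone_ir(1000.0)
```

Convolving a waveform with a bank of such filters at different center frequencies yields a simple time-frequency representation of the kind the axiomatic framework generalizes.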
Thorson, Ivar L; Liénard, Jean; David, Stephen V
Encoding properties of sensory neurons are commonly modeled using linear finite impulse response (FIR) filters. For the auditory system, the FIR filter is instantiated in the spectro-temporal receptive field (STRF), often in the framework of the generalized linear model. Despite widespread use of the FIR STRF, numerous formulations for linear filters are possible that require many fewer parameters, potentially permitting more efficient and accurate model estimates. To explore these alternative STRF architectures, we recorded single-unit neural activity from auditory cortex of awake ferrets during presentation of natural sound stimuli. We compared performance of > 1000 linear STRF architectures, evaluating their ability to predict neural responses to a novel natural stimulus. Many were able to outperform the FIR filter. Two basic constraints on the architecture led to the improved performance: (1) factorization of the STRF matrix into a small number of spectral and temporal filters and (2) low-dimensional parameterization of the factorized filters. The best parameterized model was able to outperform the full FIR filter in both primary and secondary auditory cortex, despite requiring fewer than 30 parameters, about 10% of the number required by the FIR filter. After accounting for noise from finite data sampling, these STRFs were able to explain an average of 40% of A1 response variance. The simpler models permitted more straightforward interpretation of sensory tuning properties. They also showed greater benefit from incorporating nonlinear terms, such as short term plasticity, that provide theoretical advances over the linear model. Architectures that minimize parameter count while maintaining maximum predictive power provide insight into the essential degrees of freedom governing auditory cortical function. They also maximize statistical power available for characterizing additional nonlinear properties that limit current auditory models. PMID:26683490
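The parameter savings from factorizing an STRF can be sketched numerically. The matrix dimensions and the use of a rank-2 SVD below are illustrative assumptions, not the authors' exact fitting procedure:

```python
import numpy as np

rng = np.random.default_rng(0)
n_freq, n_lag = 18, 15  # spectral channels x time lags (illustrative)

# Full FIR STRF: one free weight per frequency-lag bin.
strf_full = rng.standard_normal((n_freq, n_lag))
n_full = strf_full.size  # 18 * 15 = 270 parameters

# Factorized STRF: a small number of spectral and temporal filters.
# Here a rank-2 SVD stands in for fitting the factors directly.
rank = 2
U, s, Vt = np.linalg.svd(strf_full, full_matrices=False)
spectral = U[:, :rank] * s[:rank]  # (n_freq, rank) spectral filters
temporal = Vt[:rank, :]            # (rank, n_lag) temporal filters
strf_factored = spectral @ temporal  # low-rank approximation of the STRF
n_factored = spectral.size + temporal.size  # 36 + 30 = 66 parameters
```

Parameterizing each spectral and temporal filter with a smooth basis (e.g., a few Gaussian or damped-oscillator parameters) would reduce the count further, toward the < 30 parameters reported above.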
Schneider, David M.; Woolley, Sarah M. N.
The receptive fields of many sensory neurons are sensitive to statistical differences among classes of complex stimuli. For example, excitatory spectral bandwidths of midbrain auditory neurons and the spatial extent of cortical visual neurons differ during the processing of natural stimuli compared to the processing of artificial stimuli. Experimentally characterizing neuronal non-linearities that contribute to stimulus-dependent receptive fields is important for understanding how neurons res...
YANG Wenwei; GAO Lixia; SUN Xinde
Using conventional electrophysiological techniques, we investigated the plasticity of the frequency receptive fields (RF) of auditory cortex (AC) neurons in rats. In the AC, when the frequency difference between the conditioning stimulus frequency (CSF) and the best frequency (BF) was in the range of 1-4 kHz, the frequency RF of AC neurons shifted. The smaller the difference between CSF and BF, the higher the probability of the RF shift and the greater the degree of the RF shift. To some extent, the plasticity of the RF depended on the duration of the conditioning stimulus (CS) session: when the frequency difference between CSF and BF was larger, a longer CS session was needed to induce the plasticity. The recovery time course of the frequency RF showed opposite changes after CS cessation. The RF shift could be induced by frequencies either higher or lower than the control BF, demonstrating no clear directional preference. The frequency RF of some neurons showed a bidirectional shift, while that of other neurons showed a unidirectional shift. The results suggest that the frequency RF plasticity of AC neurons could serve as an ideal model for studying plasticity mechanisms. The present study also provides important evidence for further study of learning and memory in the auditory system.
Bizley, Jennifer K.; King, Andrew J
Neurons responsive to visual stimulation have now been described in the auditory cortex of various species, but their functions are largely unknown. Here we investigate the auditory and visual spatial sensitivity of neurons recorded in 5 different primary and non-primary auditory cortical areas of the ferret. We quantified the spatial tuning of neurons by measuring the responses to stimuli presented across a range of azimuthal positions and calculating the mutual information (MI) between the ...
Holt, Marla M.
Given the biological importance of sound for a variety of activities, pinnipeds must be able to obtain spatial information about their surroundings through acoustic input in the absence of other sensory cues. The three chapters of this dissertation address the spatial auditory processing capabilities of pinnipeds in air, given that these amphibious animals use acoustic signals for reproduction and survival on land. Two chapters are comparative lab-based studies that utilized psychophysical approaches conducted in an acoustic chamber. Chapter 1 addressed the frequency-dependent sound localization abilities at azimuth of three pinniped species (the harbor seal, Phoca vitulina; the California sea lion, Zalophus californianus; and the northern elephant seal, Mirounga angustirostris). While the performances of the sea lion and harbor seal were consistent with the duplex theory of sound localization, the elephant seal, a low-frequency hearing specialist, showed a decreased ability to localize the highest frequencies tested. In Chapter 2, spatial release from masking (SRM), which occurs when a signal and masker are spatially separated, resulting in improved signal detectability relative to conditions in which they are co-located, was determined in a harbor seal and a sea lion. Absolute and masked thresholds were measured at three frequencies and azimuths to determine the detection advantages afforded by this type of spatial auditory processing. Results showed that hearing sensitivity was enhanced by up to 19 and 12 dB in the harbor seal and sea lion, respectively, when the signal and masker were spatially separated. Chapter 3 was a field-based study that quantified both sender and receiver variables of the directional properties of male northern elephant seal calls produced within a communication system that serves to delineate dominance status. This included measuring call directivity patterns, observing male-male vocally mediated interactions, and conducting an acoustic playback study.
Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality. PMID:25368587
BRIMIJOIN, W. OWEN; O'Neill, William E.
Linear measures of auditory receptive fields do not always fully account for a neuron's response to spectrotemporally-complex signals such as frequency-modulated sweeps (FM) and communication sounds. A possible source of this discrepancy is cross-frequency interactions, common response properties which may be missed by linear receptive fields but captured using two-tone masking. Using a patterned tonal sequence that included a balanced set of all possible tone-to-tone transitions, we have her...
Whitehouse, Martha M.
The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
Britvina, T; Eggermont, J J
It has often been thought that synchronized rhythmic epochs of spindle waves disconnect the thalamo-cortical system from incoming sensory signals. The present study addresses this issue through simultaneous extracellular action potential and local field potential (LFP) recordings from primary auditory cortex of ketamine-anesthetized cats during spindling activity. We compared cortical spectro-temporal receptive fields (STRFs) obtained during spindling and non-spindling epochs. The basic spectro-temporal parameters of "spindling" and "non-spindling" STRFs were similar. However, the peak firing rate at the best frequency was significantly enhanced during spindling epochs. This enhancement was mainly caused by the increased probability of a stimulus evoking spikes (effectiveness of stimuli) during spindling as compared with non-spindling epochs. Augmented LFPs associated with effective stimuli and increased single-unit pair correlations during spindling epochs suggested higher synchrony of thalamo-cortical inputs during spindling, which resulted in increased effectiveness of stimuli presented during spindling activity. The neuronal firing rate, both stimulus-driven and spontaneous, was higher during spindling than during non-spindling epochs. Overall, our results suggest that thalamic cells during spindling respond to incoming stimulus-related inputs and, moreover, cause more powerful stimulus-related or spontaneous activation of the cortex. PMID:18515012
Roberts, Katherine L.; Summerfield, A. Quentin; Hall, Deborah A.
The spatial relevance hypothesis (J. J. McDonald & L. M. Ward, 1999) proposes that covert auditory spatial orienting can only be beneficial to auditory processing when task stimuli are encoded spatially. We present a series of experiments that evaluate 2 key aspects of the hypothesis: (a) that "reflexive activation of location-sensitive neurons is…
Cappagli, Giulia; Gori, Monica
For individuals with visual impairments, auditory spatial localization is one of the most important features to navigate in the environment. Many works suggest that blind adults show similar or even enhanced performance for localization of auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in children, and that this capacity recovers over time. PMID:27002960
David R Wozny
Recent research investigating the principles governing human perception has provided increasing evidence for probabilistic inference in human perception. For example, human auditory and visual localization judgments closely resemble those of a Bayesian causal inference observer, where the underlying causal structure of the stimuli is inferred based on both the available sensory evidence and prior knowledge. However, most previous studies have focused on characterizing perceptual inference within a static environment, and therefore little is known about how this inference process changes when observers are exposed to a new environment. In this study we aimed to computationally characterize the change in auditory spatial perception induced by repeated auditory-visual spatial conflict, known as the ventriloquist aftereffect. In theory, this change could reflect a shift in the auditory sensory representations (i.e., a shift in the auditory likelihood distribution), a decrease in the precision of the auditory estimates (i.e., an increase in the spread of the likelihood distribution), a shift in the auditory bias (i.e., a shift in the prior distribution), an increase or decrease in the strength of the auditory bias (i.e., the spread of the prior distribution), or a combination of these. By quantitatively estimating the parameters of the perceptual process for each individual observer using a Bayesian causal inference model, we found that the shift in the perceived locations after exposure was associated with a shift in the mean of the auditory likelihood functions in the direction of the experienced visual offset. The results suggest that repeated exposure to a fixed auditory-visual discrepancy is attributed by the nervous system to sensory representation error and, as a result, the sensory map of space is recalibrated to correct the error.
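Two ingredients of such a model can be sketched compactly: reliability-weighted fusion of auditory and visual cues (the common-cause branch of a causal-inference observer), and the aftereffect modeled as a shift of the auditory likelihood mean. The numerical values and the fixed recalibration gain are illustrative assumptions, not the fitted parameters of the study:

```python
def fused_estimate(x_a, sigma_a, x_v, sigma_v):
    """Reliability-weighted fusion of auditory (x_a) and visual (x_v)
    location cues: each cue is weighted by the inverse of its variance."""
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_a * x_a + (1 - w_a) * x_v

def recalibrated_auditory_mean(mu_a, visual_offset, gain=0.5):
    """Ventriloquist aftereffect as a shift of the auditory likelihood
    mean toward the experienced visual offset; 'gain' is a hypothetical
    recalibration rate, not an estimated value."""
    return mu_a + gain * visual_offset

# With a less reliable auditory cue (sigma_a=4) and a more reliable
# visual cue (sigma_v=2), the fused percept is pulled toward vision:
est = fused_estimate(10.0, 4.0, 0.0, 2.0)  # -> 2.0
```

Fitting the likelihood and prior parameters per observer, as the study does, then distinguishes which of these components actually moved after exposure.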
Recanzone, Gregg H.
The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel "what" and "where" processing by the primate visual cortex. If "where" information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs to the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear molds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of the motor state in auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear molds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear molds demonstrates that a relatively short period of training involving audio-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide spatial cues but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses. PMID:25147497
Papadopoulos, Konstantinos; Papadimitriou, Kimon; Koutsoklenis, Athanasios
The study presented here sought to explore the role of auditory cues in the spatial knowledge of blind individuals by examining the relation between the perceived auditory cues and the landscape of a given area and by investigating how blind individuals use auditory cues to create cognitive maps. The findings reveal that several auditory cues…
Parks, Anthony J.
How do listener head rotations affect auditory perception of elevation? This investigation addresses that question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli with fixed and free head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
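The binary logistic regression at the heart of such a psychophysical model can be sketched as a logistic psychometric function. The predictor and coefficient values here are purely illustrative, not estimates from the dissertation's data:

```python
import math

def p_detect(x, beta0, beta1):
    """Binary logistic psychometric model: probability of reporting an
    elevated auditory event as a function of a predictor x (e.g.,
    head-rotation extent). beta0 (intercept) and beta1 (slope) would
    be fit to response data by maximum likelihood."""
    z = beta0 + beta1 * x
    return 1.0 / (1.0 + math.exp(-z))

# At x where beta0 + beta1*x = 0, the model predicts chance (0.5);
# the slope beta1 controls how sharply detection rises with x.
p = p_detect(0.0, 0.0, 1.5)
```

A multinomial extension, as used for the above/below/behind judgments, replaces the single logistic with a softmax over one linear predictor per response category.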
Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard
The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. Although the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have only occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than a linear sequence. This arrangement is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the anterio-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly straight organization into a more complex morphology. The linear organization of dendrites is not a morphological criterion that allows hearing organs to be distinguished, in all species, from non-hearing sense organs serially homologous to ears. The crowded arrangement of both receptor somata and dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints of auditory morphogenesis in postembryonic development. PMID:22807283
Paltoglou, Aspasia Eleni
Selective attention is a crucial function that encompasses all perceptual modalities and which enables us to focus on the behaviorally relevant information and ignore the rest. The main goal of the thesis is to test well-established hypotheses about the mechanisms of visual selective attention in the auditory domain using behavioral and neuroimaging methods. Two fMRI studies (Experiments 1 and 2) test the hypothesis of feature-specific attentional enhancement. This hypothesis states that ...
Parise, Cesare V; Knorre, Katharina; Ernst, Marc O
Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveal a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing. PMID:24711409
Kean, Matthew; Crawford, Trevor J
We investigated exogenous and endogenous orienting of visual attention to the spatial location of an auditory cue. In Experiment 1, significantly faster saccades were observed to visual targets appearing ipsilateral, compared to contralateral, to the peripherally-presented cue. This advantage was greatest in an 80% target-at-cue (TAC) condition but equivalent in the 20% and 50% TAC conditions. In Experiment 2, participants maintained central fixation while making an elevation judgment of the pe...
Kanai, Kenichi; Ikeda, Kazuo; Tayama, Tadayuki
This study investigated the effect of exogenous spatial attention on auditory information processing. In Experiments 1, 2 and 3, temporal order judgment tasks were performed to examine the effect. In Experiments 1 and 2, a cue tone was presented to either the left or right ear, followed by sequential presentation of two target tones. The subjects judged the order of presentation of the target tones. The results showed that subjects heard both tones simultaneously when the target tone, which wa...
Razavi, Babak; O'Neill, William E; Paige, Gary D
Audition and vision both form spatial maps of the environment in the brain, and their congruency requires alignment and calibration. Because audition is referenced to the head and vision is referenced to movable eyes, the brain must accurately account for eye position to maintain alignment between the two modalities as well as perceptual space constancy. Changes in eye position are known to shift sound localization variably but inconsistently, suggesting subtle shortcomings in the accuracy or use of eye-position signals. We systematically and directly quantified sound localization across a broad spatial range and over time after changes in eye position. A sustained fixation task addressed the spatial (steady-state) attributes of eye position-dependent effects on sound localization. Subjects continuously fixated visual reference spots straight ahead (center), to the left (20 degrees), or to the right (20 degrees) of the midline in separate sessions while localizing auditory targets using a laser pointer guided by peripheral vision. An alternating fixation task focused on the temporal (dynamic) aspects of auditory spatial shifts after changes in eye position. Localization proceeded as in sustained fixation, except that eye position alternated between the three fixation references over multiple epochs, each lasting minutes. Auditory space shifted by approximately 40% toward the new eye position, and did so dynamically over several minutes. We propose that this spatial shift reflects an adaptation mechanism for aligning the "straight-ahead" of perceived sensory-motor maps, particularly during early childhood when normal ocular alignment is achieved, but also for resolving challenges to normal spatial perception throughout life. PMID:17881531
Wu, C-T; Weissman, D.H.; Roberts, K. C.; Woldorff, M.G.
Although a fronto-parietal network has consistently been implicated in the control of visual spatial attention, the network that guides spatial attention in the auditory domain is not yet clearly understood. To investigate this issue, we measured brain activity using functional magnetic resonance imaging while participants performed a cued auditory spatial attention task. We found that cued orienting of auditory spatial attention activated a medial-superior distributed fronto-parietal network...
Constance May Bainbridge
Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual
Full Text Available The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination task (Exp 1: left or right side), a pitch discrimination task (Exp 2: high, med, or low tone), and a spatial-plus-pitch discrimination task (Exp 3: left or right; high, med, or low). In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right-hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.
Butcher, Andrew; Govenlock, Stanley W; Tata, Matthew S
Scene analysis involves the process of segmenting a field of overlapping objects from each other and from the background. It is a fundamental stage of perception in both vision and hearing. The auditory system encodes complex cues that allow listeners to find boundaries between sequential objects, even when no gap of silence exists between them. In this sense, object perception in hearing is similar to perceiving visual objects defined by isoluminant color, motion or binocular disparity. Motion is one such cue: when a moving sound abruptly disappears from one location and instantly reappears somewhere else, the listener perceives two sequential auditory objects. Smooth reversals of motion direction do not produce this segmentation. We investigated the brain electrical responses evoked by this spatial segmentation cue and compared them to the familiar auditory evoked potential elicited by sound onsets. Segmentation events evoke a pattern of negative and positive deflections that are unlike those evoked by onsets. We identified a negative component in the waveform - the Lateralized Object-Related Negativity - generated by the hemisphere contralateral to the side on which the new sound appears. The relationship between this component and similar components found in related paradigms is considered. PMID:21056097
Michalka, Samantha W; Rosen, Maya L; Kong, Lingqiang; Shinn-Cunningham, Barbara G; Somers, David C
Audition and vision both convey spatial information about the environment, but much less is known about mechanisms of auditory spatial cognition than visual spatial cognition. Human cortex contains >20 visuospatial map representations but no reported auditory spatial maps. The intraparietal sulcus (IPS) contains several of these visuospatial maps, which support visuospatial attention and short-term memory (STM). Neuroimaging studies also demonstrate that parietal cortex is activated during auditory spatial attention and working memory tasks, but prior work has not demonstrated that auditory activation occurs within visual spatial maps in parietal cortex. Here, we report both cognitive and anatomical distinctions in the auditory recruitment of visuotopically mapped regions within the superior parietal lobule. An auditory spatial STM task recruited anterior visuotopic maps (IPS2-4, SPL1), but an auditory temporal STM task with equivalent stimuli failed to drive these regions significantly. Behavioral and eye-tracking measures rule out task difficulty and eye movement explanations. Neither auditory task recruited posterior regions IPS0 or IPS1, which appear to be exclusively visual. These findings support the hypothesis of multisensory spatial processing in the anterior, but not posterior, superior parietal lobule and demonstrate that recruitment of these maps depends on auditory task demands. PMID:26656996
Full Text Available The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
Kenet, T; Froemke, R. C.; Schreiner, C. E.; Pessah, I N; Merzenich, M. M.
Noncoplanar polychlorinated biphenyls (PCBs) are widely dispersed in the human environment and in human tissues. Here, an exemplar noncoplanar PCB was fed to rat dams during gestation and throughout three subsequent nursing weeks. Although the hearing sensitivity and brainstem auditory responses of pups were normal, exposure resulted in the abnormal development of the primary auditory cortex (A1). A1 was irregularly shaped and marked by internal nonresponsive zones, its topographic organization was grossl...
Chang, Moonjeong; Nishikawa, Nozomu; Cai, Zhenyu; Makino, Shoji; Rutkowski, Tomasz M.
The paper presents a pilot study conducted with spatial visual, audiovisual, and auditory brain-computer interface (BCI) based speller paradigms. The psychophysical experiments were conducted with healthy subjects in order to evaluate task difficulty and possible variability in response accuracy. We also present preliminary EEG results in offline BCI mode. The obtained results validate the thesis that the spatial auditory-only paradigm performs as well as the traditional visual and audiovisual speller B...
Liu, Lu; She, Liang; Chen, Ming; Liu, Tianyi; Lu, Haidong D; Dan, Yang; Poo, Mu-ming
Visual processing depends critically on the receptive field (RF) properties of visual neurons. However, comprehensive characterization of RFs beyond the primary visual cortex (V1) remains a challenge. Here we report fine RF structures in secondary visual cortex (V2) of awake macaque monkeys, identified through a projection pursuit regression analysis of neuronal responses to natural images. We found that V2 RFs could be broadly classified as V1-like (typical Gabor-shaped subunits), ultralong (subunits with high aspect ratios), or complex-shaped (subunits with multiple oriented components). Furthermore, single-unit recordings from functional domains identified by intrinsic optical imaging showed that neurons with ultralong RFs were primarily localized within pale stripes, whereas neurons with complex-shaped RFs were more concentrated in thin stripes. Thus, by combining single-unit recording with optical imaging and a computational approach, we identified RF subunits underlying spatial feature selectivity of V2 neurons and demonstrated the functional organization of these RF properties. PMID:26839410
Goldsworthy, Raymond L
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60=0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. PMID:25330772
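As a rough illustration of this kind of phase-based two-microphone processing, the sketch below attenuates time-frequency bins whose inter-microphone phase difference implies an off-axis arrival. It is not the authors' implementation; the function name, window settings, and threshold are assumptions:

```python
import numpy as np

def phase_filter(left, right, fs, win=256, hop=128,
                 mic_dist=0.01, c=343.0, thresh=0.5):
    """Attenuate time-frequency bins whose inter-microphone phase
    difference implies an off-axis (not straight-ahead) source.
    Illustrative sketch only; all parameters are assumptions."""
    window = np.hanning(win)
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    max_delay = mic_dist / c                  # largest physically possible delay
    out = np.zeros(len(left))
    n_frames = 1 + (len(left) - win) // hop
    for i in range(n_frames):
        s = i * hop
        L = np.fft.rfft(window * left[s:s + win])
        R = np.fft.rfft(window * right[s:s + win])
        dphi = np.angle(L * np.conj(R))       # inter-mic phase difference per bin
        # express the implied delay as a fraction of the maximum possible delay
        with np.errstate(divide='ignore', invalid='ignore'):
            frac = np.abs(dphi) / (2 * np.pi * freqs * max_delay)
        frac[0] = 0.0                         # DC bin carries no usable phase
        gain = np.where(frac <= thresh, 1.0, 0.1)  # keep near-frontal bins
        out[s:s + win] += np.fft.irfft(gain * L) * window
    return out
```

Note the constraint the abstract's 1 cm spacing implies: the maximum physical inter-mic delay is only about 29 µs, so usable phase differences demand fine spectral analysis; the hard 0.1 attenuation here stands in for whatever smooth gain function a real system would use.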
Cui, Qi N; Razavi, Babak; O'Neill, William E; Paige, Gary D
Vision and audition represent the outside world in spatial synergy that is crucial for guiding natural activities. Input conveying eye-in-head position is needed to maintain spatial congruence because the eyes move in the head while the ears remain head-fixed. Recently, we reported that the human perception of auditory space shifts with changes in eye position. In this study, we examined whether this phenomenon is 1) dependent on a visual fixation reference, 2) selective for frequency bands (high-pass and low-pass noise) related to specific auditory spatial channels, 3) matched by a shift in the perceived straight-ahead (PSA), and 4) accompanied by a spatial shift for visual and/or bimodal (visual and auditory) targets. Subjects were tested in a dark echo-attenuated chamber with their heads fixed facing a cylindrical screen, behind which a mobile speaker/LED presented targets across the frontal field. Subjects fixated alternating reference spots (0, +/-20 degrees ) horizontally or vertically while either localizing targets or indicating PSA using a laser pointer. Results showed that the spatial shift induced by ocular eccentricity is 1) preserved for auditory targets without a visual fixation reference, 2) generalized for all frequency bands, and thus all auditory spatial channels, 3) paralleled by a shift in PSA, and 4) restricted to auditory space. Findings are consistent with a set-point control strategy by which eye position governs multimodal spatial alignment. The phenomenon is robust for auditory space and egocentric perception, and highlights the importance of controlling for eye position in the examination of spatial perception and behavior. PMID:19846626
Full Text Available Previous imaging studies on the brain mechanisms of spatial hearing have mainly focused on sounds varying in the horizontal plane. In this study, we compared activations in human auditory cortex (AC) and the adjacent inferior parietal lobule (IPL) to sounds varying in horizontal location, distance, or space (i.e., different rooms). In order to investigate both stimulus-dependent and task-dependent activations, these sounds were presented during visual discrimination, auditory discrimination, and auditory 2-back memory tasks. Consistent with previous studies, activations in AC were modulated by the auditory tasks. During both auditory and visual tasks, activations in AC were stronger to sounds varying in horizontal location than along other feature dimensions. However, in IPL, this enhancement was detected only during auditory tasks. Based on these results, we argue that IPL is not primarily involved in stimulus-level spatial analysis but that it may represent such information for more general processing when relevant to an active auditory task.
Juarez-Salinas, Dina L.; Engle, James R.; Navarro, Xochi O.; Gregg H Recanzone
The compromised abilities to localize sounds and to understand speech are two hallmark deficits in aged individuals. The auditory cortex is necessary for these processes, yet we know little about how normal aging affects these early cortical fields. In this study, we recorded the spatial tuning of single neurons in primary (area A1) and secondary (area CL) auditory cortical areas in young and aged alert rhesus macaques. We found that the neurons of aged animals had greater spontaneous and dri...
Sparreboom, Marloes; Langereis, Margreet C; Snik, Ad F M; Mylanus, Emmanuel A M
Sequential bilateral cochlear implantation in profoundly deaf children often leads to primary advantages in spatial hearing and speech recognition. It is not yet known how these children develop in the long term and whether these primary advantages will also lead to secondary advantages, e.g. better language skills. The aim of the present longitudinal cohort study was to assess the long-term effects of sequential bilateral cochlear implantation in children on spatial hearing, speech recognition in quiet and in noise, and receptive vocabulary. Twenty-four children with bilateral cochlear implants (BiCIs) were tested 5-6 years after sequential bilateral cochlear implantation. These children received their second implant between 2.4 and 8.5 years of age. Speech and language data were also gathered in a matched reference group of 26 children with a unilateral cochlear implant (UCI). Spatial hearing was assessed with a minimum audible angle (MAA) task with different stimulus types to gain global insight into the effective use of interaural level difference (ILD) and interaural timing difference (ITD) cues. In the long term, children still showed improvements in spatial acuity. Spatial acuity was higher for ILD cues than for ITD cues. For speech recognition in quiet and noise, and for receptive vocabulary, children with BiCIs had significantly higher scores than children with a UCI. Results also indicate that attending a mainstream school has a significant positive effect on speech recognition and receptive vocabulary compared to attending a school for the deaf. Despite a period of unilateral deafness, children with BiCIs participating in mainstream education obtained age-appropriate language scores. PMID:25462493
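The ILD and ITD cues probed by the MAA task can be estimated from a stereo signal with standard textbook methods; the sketch below (cross-correlation for ITD, RMS level ratio for ILD) is purely illustrative and unrelated to the study's clinical procedure:

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate the interaural time difference (ITD, seconds; positive
    when the left channel leads) via cross-correlation, and the
    interaural level difference (ILD, dB) from RMS levels.
    Illustrative textbook sketch, not the study's method."""
    xcorr = np.correlate(left, right, mode='full')
    lag = np.argmax(xcorr) - (len(right) - 1)   # negative lag: left leads
    itd = -lag / fs
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    ild = 20 * np.log10(rms(left) / rms(right))
    return itd, ild
```

Cross-correlation resolves ITD only to the nearest sample, which is why real estimators interpolate around the peak; for an ILD-dominated MAA task, the level ratio alone is often the informative cue.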
Goldsworthy, Raymond L; Delhorne, Lorraine A; Desloge, Joseph G; Braida, Louis D
This article introduces and provides an assessment of a spatial-filtering algorithm based on two closely-spaced (∼1 cm) microphones in a behind-the-ear shell. The evaluated spatial-filtering algorithm used fast (∼10 ms) temporal-spectral analysis to determine the location of incoming sounds and to enhance sounds arriving from straight ahead of the listener. Speech reception thresholds (SRTs) were measured for eight cochlear implant (CI) users using consonant and vowel materials under three processing conditions: An omni-directional response, a dipole-directional response, and the spatial-filtering algorithm. The background noise condition used three simultaneous time-reversed speech signals as interferers located at 90°, 180°, and 270°. Results indicated that the spatial-filtering algorithm can provide speech reception benefits of 5.8 to 10.7 dB SRT compared to an omni-directional response in a reverberant room with multiple noise sources. Given the observed SRT benefits, coupled with an efficient design, the proposed algorithm is promising as a CI noise-reduction solution. PMID:25096120
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, the concept of virtually walking through an auditory environment did not exist. The applications of such an interface have numerous potential uses. Spatial audio has the potential to be used in various manners ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of: signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present
Sodnik, Jaka; Jakus, Grega; Tomazic, Saso
Introduction: This article reports on a study that explored the benefits and drawbacks of using spatially positioned synthesized speech in auditory interfaces for computer users who are visually impaired (that is, are blind or have low vision). The study was a practical application of such systems--an enhanced word processing application compared…
... of two such cues on speech intelligibility was studied. First, the benefit from early reflections (ERs) in a room was determined using a virtual auditory environment. ERs were found to be useful for speech intelligibility, but to a smaller extent than the direct sound (DS). The benefit was quantified with an intelligibility-weighted "efficiency factor", which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not ... intelligibility, the exact ILD information is not crucial. The results from an additional experiment demonstrated that the ER benefit was maintained with independent as well as with linked hearing-aid compression. Overall, this work contributes to the understanding of ER processing in listeners with normal and ...
Eggermont, Jos J; Munguia, Raymundo; Pienkowski, Martin; Shaw, Greg
Multi-electrode array recordings of spike and local field potential (LFP) activity were made from primary auditory cortex of 12 normal hearing, ketamine-anesthetized cats. We evaluated 259 spectro-temporal receptive fields (STRFs) and 492 frequency-tuning curves (FTCs) based on LFPs and spikes simultaneously recorded on the same electrode. We compared their characteristic frequency (CF) gradients and their cross-correlation distances. The CF gradient for spike-based FTCs was about twice that for 2-40 Hz-filtered LFP-based FTCs, indicating greatly reduced frequency selectivity for LFPs. We also present comparisons for LFPs band-pass filtered between 4-8 Hz, 8-16 Hz and 16-40 Hz, with spike-based STRFs, on the basis of their marginal frequency distributions. We find on average a significantly larger correlation between the spike based marginal frequency distributions and those based on the 16-40 Hz filtered LFP, compared to those based on the 4-8 Hz, 8-16 Hz and 2-40 Hz filtered LFP. This suggests greater frequency specificity for the 16-40 Hz LFPs compared to those of lower frequency content. For spontaneous LFP and spike activity we evaluated 1373 pair correlations for pairs with >200 spikes in 900 s per electrode. Peak correlation-coefficient space constants were similar for the 2-40 Hz filtered LFP (5.5 mm) and the 16-40 Hz LFP (7.4 mm), whereas for spike-pair correlations it was about half that, at 3.2 mm. Comparing spike-pairs with 2-40 Hz (and 16-40 Hz) LFP-pair correlations showed that about 16% (9%) of the variance in the spike-pair correlations could be explained from LFP-pair correlations recorded on the same electrodes within the same electrode array. This larger correlation distance combined with the reduced CF gradient and much broader frequency selectivity suggests that LFPs are not a substitute for spike activity in primary auditory cortex. PMID:21625385
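The "space constants" quoted above (e.g., 5.5 mm for 2-40 Hz LFP pairs versus 3.2 mm for spike pairs) summarize how peak pair correlations decay with electrode distance. Assuming a simple exponential decay model, such a constant can be estimated as in this hypothetical sketch (the data and function name are not from the study):

```python
import numpy as np

def space_constant(distances, peak_corrs):
    """Fit peak pair-correlation vs. electrode distance with
    r(d) = r0 * exp(-d / lam) by log-linear least squares and
    return the space constant lam (same units as `distances`).
    Illustrative sketch; assumes all correlations are positive."""
    d = np.asarray(distances, dtype=float)
    r = np.asarray(peak_corrs, dtype=float)
    slope, _ = np.polyfit(d, np.log(r), 1)  # log r is linear in d
    return -1.0 / slope
```

A log-linear fit weights small correlations heavily; with noisy data a direct nonlinear fit (e.g., `scipy.optimize.curve_fit`) would be the more robust choice.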
Cohen, Annabel J.; Lamothe, M. J. Reina; Toms, Ian D.; Fleming, Richard A. G.
Cohen, Lamothe, Fleming, MacIsaac, and Lamoureux [J. Acoust. Soc. Am. 109, 2460 (2001)] reported that proximity governed circular direction judgments (clockwise/counterclockwise) of two successive tones emanating from all pairs of 12 speakers located at 30-degree intervals around a listener's head (cranium). Many listeners appeared to experience systematic front-back confusion. Diametrically opposed locations (180 degrees: theoretically ambiguous direction) produced a direction bias pattern resembling Deutsch's tritone paradox [Deutsch, Kuyper, and Fisher, Music Percept. 5, 79-92 (1987)]. In Experiment 1 of the present study, the circular direction task was conducted in the tactile domain using 12 circumcranial points of vibration. For all 5 participants, proximity governed direction (without front-back confusion) and a simple clockwise bias was shown for 180-degree pairs. Experiment 2 tested 9 new participants in one unimodal auditory condition and two bimodal auditory-tactile conditions (spatially correlated/spatially uncorrelated). Correlated auditory-tactile information eliminated front-back confusion for 8 participants and replaced the "paradoxical" bias for 180-degree pairs with the clockwise bias. Thus, spatially correlated audio-tactile location information improves the veridical representation of 360-degree acoustic space, and modality-specific principles are implicated by the unique circular direction bias patterns for 180-degree pairs in the separate auditory and tactile modalities. [Work supported by NSERC.]
Selective attention is the mechanism that allows focusing on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore, they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
Michael J Boivin; Paul Bangirana; Rebecca C Shaffer
BACKGROUND: Using the Kaufman Assessment Battery for Children (K-ABC) Conant et al. (1999) observed that visual and auditory working memory (WM) span were independent in both younger and older children from DR Congo, but related in older American children and in Lao children. The present study evaluated whether visual and auditory WM span were independent in Ugandan and Senegalese children. METHOD: In a linear regression analysis we used visual (Spatial Memory, Hand Movements) and auditory (N...
Huang, Minqiang; Daly, Ian; Jin, Jing; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej
Visual brain-computer interfaces (BCIs) are not suitable for people who cannot reliably maintain their eye gaze. Considering that this group usually maintains audition, an auditory based BCI may be a good choice for them. In this paper, we explore two auditory patterns: (1) a pattern utilizing symmetrical spatial cues with multiple frequency beeps [called the high low medium (HLM) pattern], and (2) a pattern utilizing non-symmetrical spatial cues with six tones derived from the diatonic scale [called the diatonic scale (DS) pattern]. These two patterns are compared to each other in terms of accuracy to determine which auditory pattern is better. The HLM pattern uses three different frequency beeps and has a symmetrical spatial distribution. The DS pattern uses six spoken stimuli, which are six notes solmizated as "do", "re", "mi", "fa", "sol" and "la", and derived from the diatonic scale. These six sounds are distributed to six, spatially distributed, speakers. Thus, we compare a BCI paradigm using beeps with another BCI paradigm using tones on the diatonic scale, when the stimuli are spatially distributed. Although no significant differences are found between the ERPs, the HLM pattern performs better than the DS pattern: the online accuracy achieved with the HLM pattern is significantly higher than that achieved with the DS pattern (p = 0.0028). PMID:27275376
The present thesis set out to investigate how sensory modality and spatial presentation influence visual and auditory duration judgments in the millisecond range. The effects of modality and spatial location were explored by considering right and left side presentations of mixed or blocked visual and auditory stimuli. Several studies have shown that perceived duration of a stimulus can be affected by various extra-temporal factors such as modality and spatial position. Audit...
Witten, Ilana B.; Knudsen, Phyllis F.; Knudsen, Eric I.
BACKGROUND: Barn owls integrate spatial information across frequency channels to localize sounds in space. METHODOLOGY/PRINCIPAL FINDINGS: We presented barn owls with synchronous sounds that contained different bands of frequencies (3-5 kHz and 7-9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the locatio...
Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M
This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. Two auditory information processing tasks included a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p visual-spatial interference may occur in older subjects when vision is used to maintain posture. PMID:26410669
Grantham, D. Wesley; Hornsby, Benjamin W. Y.; Erpenbeck, Eric A.
Minimum audible angle (MAA) and minimum audible movement angle (MAMA) thresholds were measured for stimuli in horizontal, vertical, and diagonal (60°) planes. A pseudovirtual technique was employed in which signals were recorded through KEMAR's ears and played back to subjects through insert earphones. Thresholds were obtained for wideband, high-pass, and low-pass noises. Only 6 of 20 subjects obtained wideband vertical-plane MAAs less than 10°, and only these 6 subjects were retained for the complete study. For all three filter conditions thresholds were lowest in the horizontal plane, slightly (but significantly) higher in the diagonal plane, and highest for the vertical plane. These results were similar in magnitude and pattern to those reported by Perrott and Saberi [J. Acoust. Soc. Am. 87, 1728-1731 (1990)] and Saberi and Perrott [J. Acoust. Soc. Am. 88, 2639-2644 (1990)], except that these investigators generally found that thresholds for diagonal planes were as good as those for the horizontal plane. The present results are consistent with the hypothesis that diagonal-plane performance is based on independent contributions from a horizontal-plane system (sensitive to interaural differences) and a vertical-plane system (sensitive to pinna-based spectral changes). Measurements of the stimuli recorded through KEMAR indicated that sources presented from diagonal planes can produce larger interaural level differences (ILDs) in certain frequency regions than would be expected based on the horizontal projection of the trajectory. Such frequency-specific ILD cues may underlie the very good performance reported in previous studies for diagonal spatial resolution. Subjects in the present study could apparently not take advantage of these cues in the diagonal-plane condition, possibly because they did not externalize the images to their appropriate positions in space or possibly because of the absence of a patterned visual field.
The role of attention in multisensory processing is still poorly understood. In particular, it is unclear whether directing attention toward a sensory cue dynamically reweights cue reliability during integration of multiple sensory signals. In this study, we investigated the impact of attention on combining audio-tactile signals in an optimal fashion. We used the Maximum Likelihood Estimation (MLE) model to predict audio-tactile spatial localization on the body surface. We developed a new audio-tactile device composed of several small units, each consisting of a speaker and a tactile vibrator independently controllable by external software. We tested subjects in an attentional and a non-attentional condition. In the attention experiment participants performed a dual-task paradigm: they were required to evaluate the duration of a sound while performing an audio-tactile spatial task. Three unisensory or multisensory stimuli (conflicting or non-conflicting sounds and vibrations) arranged along the horizontal axis were presented sequentially. In the primary task subjects had to evaluate the position of the second stimulus (the probe) with respect to the others (a space bisection task). In the secondary task they had to occasionally report changes in duration of the second auditory stimulus. In the non-attentional task participants had only to perform the primary task (space bisection). Our results showed enhanced auditory precision (and auditory weights) in the auditory attentional condition with respect to the control non-attentional condition. Interestingly, in both conditions the multisensory results are well predicted by the MLE model. The results of this study support the idea that modality-specific attention modulates multisensory integration.
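The MLE model used in the abstract above has a standard closed form: each cue is weighted in inverse proportion to its variance, and the integrated estimate has lower variance than either cue alone. A minimal sketch (the function name and example variances are illustrative, not from the study):

```python
def mle_integrate(x_a, var_a, x_t, var_t):
    """Optimally combine an auditory estimate x_a and a tactile estimate x_t
    (with variances var_a, var_t) under the Maximum Likelihood Estimation
    model: cue weights are inversely proportional to cue variance."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_t)
    w_t = 1 - w_a
    x_hat = w_a * x_a + w_t * x_t
    var_hat = (var_a * var_t) / (var_a + var_t)  # always <= min(var_a, var_t)
    return x_hat, var_hat

# Example: a reliable tactile cue (variance 1) dominates a noisy
# auditory cue (variance 4); the combined variance drops below both.
x_hat, var_hat = mle_integrate(x_a=10.0, var_a=4.0, x_t=6.0, var_t=1.0)
print(x_hat, var_hat)  # → 6.8 0.8
```

In this framing, "enhanced auditory weights under attention" corresponds to a smaller effective var_a, pulling the combined estimate toward the auditory cue.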
Michael J Boivin
BACKGROUND: Using the Kaufman Assessment Battery for Children (K-ABC), Conant et al. (1999) observed that visual and auditory working memory (WM) span were independent in both younger and older children from DR Congo, but related in older American children and in Lao children. The present study evaluated whether visual and auditory WM span were independent in Ugandan and Senegalese children. METHOD: In a linear regression analysis we used visual (Spatial Memory, Hand Movements) and auditory (Number Recall) WM along with education and physical development (weight/height) as predictors. The predicted variable in this analysis was Word Order, which is a verbal memory task that has both visual and auditory memory components. RESULTS: Both the younger (8.5 yrs) and older Ugandan children had auditory memory span (Number Recall) that was strongly predictive of Word Order performance. For both the younger and older groups of Senegalese children, only visual WM span (Spatial Memory) was strongly predictive of Word Order. Number Recall was not significantly predictive of Word Order in either age group. CONCLUSIONS: It is possible that greater literacy from more schooling for the Ugandan age groups mediated their greater degree of interdependence between auditory and verbal WM. Our findings support those of Conant et al., who observed in their cross-cultural comparisons that stronger education seemed to enhance the dominance of the phonological-auditory processing loop for WM.
Dobreva, Marina S; O'Neill, William E; Paige, Gary D
A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to
This book argues that it is time to rethink reception as a traditional paradigm for understanding the relation between the ancient Greco-Roman traditions and early Judaism and Christianity. The concept of reception implies taking something from one fixed box into another, often a chronological...
Robinson, Philip W.
This thesis addresses the effect of reflections from diffusive architectural surfaces on the perception of echoes and on auditory spatial resolution. Diffusive architectural surfaces play an important role in performance venue design for architectural expression and proper sound distribution. Extensive research has been devoted to the prediction and measurement of the spatial dispersion. However, previous psychoacoustic research on perception of reflections and the precedence effect has focused on specular reflections. This study compares the echo threshold of specular reflections against those for reflections from realistic architectural surfaces, and against synthesized reflections that isolate individual qualities of reflections from diffusive surfaces, namely temporal dispersion and spectral coloration. In particular, the activation of the precedence effect, as indicated by the echo threshold, is measured. Perceptual tests are conducted with direct sound, and simulated or measured reflections with varying temporal dispersion. The threshold for reflections from diffusive architectural surfaces is found to be comparable to that of a specular reflection of similar energy rather than similar amplitude. This is surprising because the amplitude of the dispersed reflection is highly attenuated, and onset cues are reduced. This effect indicates that the auditory system is integrating reflection response energy dispersed over many milliseconds into a single stream. Studies on the effect of a single diffuse reflection are then extended to a full architectural enclosure with various surface properties. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. It is found that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat rather than diffusive surfaces. Additionally, subjective impressions are
Neil M McLachlan
Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.
Campbell, Robert A. A.; King, Andrew J; Nodal, Fernando R.; Schnupp, Jan W. H.; Carlile, Simon; Doubell, Timothy P.
Auditory neurons in the superior colliculus (SC) respond preferentially to sounds from restricted directions to form a map of auditory space. The development of this representation is shaped by sensory experience, but little is known about the relative contribution of peripheral and central factors to the emergence of adult responses. By recording from the SC of anesthetized ferrets at different age points, we show that the map matures gradually after birth; the spatial receptive fields (SRFs...
Auditory perceptual and visual-spatial characteristics of subjective tinnitus evoked by eye gaze were studied in two adult human subjects. This uncommon form of tinnitus occurred approximately 4-6 weeks following neurosurgery for gross total excision of space-occupying lesions of the cerebellopontine angle, and hearing was lost in the operated ear. In both cases, the gaze-evoked tinnitus was characterized as being tonal in nature, with pitch and loudness percepts remaining constant as long as the same horizontal or vertical eye directions were maintained. Tinnitus was absent when the eyes were in a neutral head-referenced position with subjects looking straight ahead. The results and implications of ophthalmological, standard and modified visual field assessment, pure-tone audiometric assessment, spontaneous otoacoustic emission testing, and detailed psychophysical assessment of pitch and loudness are discussed.
Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information, that is sufficient to influence auditory perception, is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio tas...
Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara
This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would have been expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence-implied pitch height and response location. The results are in line with grounded models of language comprehension proposing that sensory-motor experiences are being elicited when processing literal as well as non-literal sentences. PMID:25443988
James Engle; Gregg H Recanzone
Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficits are peripheral in origin, others are centrally mediated. One such deficit is the ability to localize sounds, a critical component for segregating different acoustic objects and events, which is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning between neurons in primary auditory cortex to the caudal latera...
Deneux, Thomas; Kempf, Alexandre; Daret, Aurélie; Ponsot, Emmanuel; Bathellier, Brice
Sound recognition relies not only on spectral cues, but also on temporal cues, as demonstrated by the profound impact of time reversals on perception of common sounds. To address the coding principles underlying such auditory asymmetries, we recorded a large sample of auditory cortex neurons using two-photon calcium imaging in awake mice, while playing sounds ramping up or down in intensity. We observed clear asymmetries in cortical population responses, including stronger cortical activity for up-ramping sounds, which matches perceptual saliency assessments in mice and previous measures in humans. Analysis of cortical activity patterns revealed that auditory cortex implements a map of spatially clustered neuronal ensembles, detecting specific combinations of spectral and intensity modulation features. Comparing different models, we show that cortical responses result from multi-layered nonlinearities, which, contrary to standard receptive field models of auditory cortex function, build divergent representations of sounds with similar spectral content, but different temporal structure. PMID:27580932
BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial
Robinson, Philip W; Pätynen, Jukka; Lokki, Tapio; Jang, Hyung Suk; Jeon, Jin Yong; Xiang, Ning
In musical or theatrical performance, some venues allow listeners to individually localize and segregate individual performers, while others produce a well blended ensemble sound. The room acoustic conditions that make this possible, and the psycho-acoustic effects at work are not fully understood. This research utilizes auralizations from measured and simulated performance venues to investigate spatial discrimination of multiple acoustic sources in rooms. Signals were generated from measurements taken in a small theater, and listeners in the audience area were asked to distinguish pairs of speech sources on stage with various spatial separations. This experiment was repeated with the proscenium splay walls treated to be flat, diffusive, or absorptive. Similar experiments were conducted in a simulated hall, utilizing 11 early reflections with various characteristics, and measured late reverberation. The experiments reveal that discriminating the lateral arrangement of two sources is possible at narrower separation angles when reflections come from flat or absorptive rather than diffusive surfaces. PMID:23742348
Bremner, J. Gavin; Slater, Alan M.; Scott P Johnson; Mason, Ursula; Spring, Joanne; Bremner, Maggie E.
From birth, infants detect associations between the locations of static visual objects and sounds they emit, but there is limited evidence regarding their sensitivity to the dynamic equivalent when a sound-emitting object moves. In 4 experiments involving thirty-six 2-month-olds, forty-eight 5-month-olds, and forty-eight 8-month-olds, we investigated infants' ability to process this form of spatial colocation. Whereas there was no evidence of spontaneous sensitivity, all age groups detected a...
Carlson, Nicole L.; Ming, Vivienne L.; Deweese, Michael Robert
Author Summary The receptive field of a neuron can be thought of as the stimulus that most strongly causes it to be active. Scientists have long been interested in discovering the underlying principles that determine the structure of receptive fields of cells in the auditory pathway to better understand how our brains process sound. One possible way of predicting these receptive fields is by using a theoretical model such as a sparse coding model. In such a model, each sound is represented by...
Auditory Processing Disorders. Auditory processing disorders (APDs) are referred to by many names: central auditory processing disorders, auditory perceptual disorders, and central auditory disorders. APDs ...
Age-related hearing deficits are a leading cause of disability among the aged. While some forms of hearing deficits are peripheral in origin, others are centrally mediated. One such deficit is the ability to localize sounds, a critical component for segregating different acoustic objects and events, which is dependent on the auditory cortex. Recent evidence indicates that in aged animals the normal sharpening of spatial tuning between neurons in primary auditory cortex to the caudal lateral field does not occur as it does in younger animals. As a decrease in inhibition with aging is common in the ascending auditory system, it is possible that this lack of spatial tuning sharpening is due to a decrease in inhibition at different periods within the response. It is also possible that spatial tuning was decreased as a consequence of reduced inhibition at non-best locations. In this report we found that aged animals did have greater activity throughout the response period, but primarily during the onset of the response. This was most prominent at non-best directions, consistent with the hypothesis that inhibition is a primary mechanism to sharpen spatial tuning curves. We also noted that in aged animals the latency of the response was much shorter than in younger animals, consistent with a decrease in pre-onset inhibition. These results can be interpreted in the context of a failure of the timing and efficiency of feed-forward thalamo-cortical and cortico-cortical circuits in aged animals. Such a mechanism, if generalized across cortical areas, could play a major role in age-related cognitive decline.
volume. The conference's topics include auditory exploration of data via sonification and audification; real time monitoring of multivariate date; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting...
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. PMID:25726291
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
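The interaural cross-correlation function in Ando's model has a standard definition: the normalized cross-correlation of the left- and right-ear signals over lags within about ±1 ms, with the IACC taken as its maximum. A minimal numpy sketch (the signal generation and function name are illustrative assumptions, not Ando's implementation):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Interaural cross-correlation coefficient: the maximum of the
    normalized cross-correlation of two ear signals over lags within
    +/- max_lag_ms (roughly the physiological interaural delay range)."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left**2) * np.sum(right**2))
    vals = []
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            v = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            v = np.sum(left[:lag] * right[-lag:])
        vals.append(v / norm)
    return max(vals)

# Identical ear signals are perfectly coherent, so IACC is 1.0;
# decorrelated signals (a diffuse field) give values near 0.
fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 500 * t)
print(round(iacc(sig, sig, fs), 2))  # → 1.0
```

Low IACC values correspond to the diffuse, enveloping sound fields that Ando's framework associates with preferred spatial impression.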
Lerner, Y.; Honey, C.J.; Silbert, L.J.; Hasson, U.
Real life activities, such as watching a movie or engaging in conversation, unfold over many minutes. In the course of such activities the brain has to integrate information over multiple time scales. We recently proposed that the brain uses similar strategies for integrating information across space and over time. Drawing a parallel with spatial receptive fields (SRF), we defined the temporal receptive window (TRW) of a cortical microcircuit as the length of time prior to a response during which sensory information may affect that response. Our previous findings in the visual system are consistent with the hypothesis that TRWs become larger when moving from low-level sensory to high-level perceptual and cognitive areas. In this study, we mapped TRWs in auditory and language areas by measuring fMRI activity in subjects listening to a real life story scrambled at the time scales of words, sentences and paragraphs. Our results revealed a hierarchical topography of TRWs. In early auditory cortices (A1+), brain responses were driven mainly by the momentary incoming input and were similarly reliable across all scrambling conditions. In areas with an intermediate TRW, coherent information at the sentence time scale or longer was necessary to evoke reliable responses. At the apex of the TRW hierarchy we found parietal and frontal areas which responded reliably only when intact paragraphs were heard in a meaningful sequence. These results suggest that the time scale of processing is a functional property that may provide a general organizing principle for the human cerebral cortex. PMID:21414912
Education and Technology Transfer Unit/ETT-EC
Friday 15.10.2004. CERN 50th Anniversary articles will be sold in the Main Building, ground floor, on Friday 15th October from 10h00 to 16h00: T-shirt (S, M, L, XL) 20.-; K-way (M, L, XL) 20.-; silk tie (2 models) 30.-; Einstein tie 45.-; umbrella 20.-; Caran d'Ache pen 5.-; 50th Anniversary pen 5.-; kit of cartoon album & crayons 10.-. All the articles are also available at the Reception Shop in Building 33, Monday to Saturday between 08.30 and 17.00 hrs. Education and Technology Transfer Unit/ETT-EC
Campbell, Robert A A; King, Andrew J; Nodal, Fernando R; Schnupp, Jan W H; Carlile, Simon; Doubell, Timothy P
Auditory neurons in the superior colliculus (SC) respond preferentially to sounds from restricted directions to form a map of auditory space. The development of this representation is shaped by sensory experience, but little is known about the relative contribution of peripheral and central factors to the emergence of adult responses. By recording from the SC of anesthetized ferrets at different age points, we show that the map matures gradually after birth; the spatial receptive fields (SRFs) become more sharply tuned and topographic order emerges by the end of the second postnatal month. Principal components analysis of the head-related transfer function revealed that the time course of map development is mirrored by the maturation of the spatial cues generated by the growing head and external ears. However, using virtual acoustic space stimuli, we show that these acoustical changes are not by themselves responsible for the emergence of SC map topography. Presenting stimuli to infant ferrets through virtual adult ears did not improve the order in the representation of sound azimuth in the SC. But by using linear discriminant analysis to compare different response properties across age, we found that the SRFs of infant neurons nevertheless became more adult-like when stimuli were delivered through virtual adult ears. Hence, although the emergence of auditory topography is likely to depend on refinements in neural circuitry, maturation of the structure of the SRFs (particularly their spatial extent) can be largely accounted for by changes in the acoustics associated with growth of the head and ears. PMID:18987192
Full Text Available To date, a number of studies have shown that the receptive field shapes of early sensory neurons can be reproduced by optimizing the coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient-coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient-coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.
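The core step described in this entry, an efficient-coding transform (ICA) extracting interaural structure from binaural signals, can be illustrated with a minimal sketch. The toy data below (sparse latent sources mixed with different "ear" gains) and all variable names are assumptions for illustration, not the authors' actual stimuli or pipeline.

```python
# Hedged sketch: ICA recovering latent sources from a toy binaural mixture.
# The Laplace sources and the 2x2 mixing matrix are illustrative assumptions.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

n_samples, n_sources = 2000, 2
sources = rng.laplace(size=(n_samples, n_sources))  # sparse, non-Gaussian
mixing = np.array([[1.0, 0.2],                      # crude "left ear" gains
                   [0.3, 0.9]])                     # crude "right ear" gains
observations = sources @ mixing.T

ica = FastICA(n_components=n_sources, random_state=0)
recovered = ica.fit_transform(observations)

# The unmixed components should correlate strongly with the true sources
# (up to permutation and sign), i.e. the mixture's spatial structure is
# recovered without supervision.
corr = np.abs(np.corrcoef(recovered.T, sources.T)[:n_sources, n_sources:])
print(corr.max(axis=1))  # each entry near 1.0
```

In the paper's setting the observations would be spectrogram features of the two ear signals rather than raw mixtures, but the unsupervised recovery principle is the same.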
Human implantation is a complex process requiring synchrony between a healthy embryo and a functionally competent, or receptive, endometrium. Diagnosis of endometrial receptivity (ER) has posed a challenge, and so far most available tests have been subjective and lack accuracy and predictive value. Microarray technology has allowed identification of the transcriptomic signature of the window of receptivity, the window of implantation (WOI). This technology has led to the development of a molecular ...
Lunceford, Blair E; Kubanek, Julia
Many organisms encounter noxious or unpalatable compounds in their diets. Thus, a robust reception system for aversive taste is necessary for an individual's survival; however, mechanisms for perceiving aversive taste vary among organisms. Possession of a system sensitive to aversive taste allows for recognition of a vast array of noxious molecules via membrane-bound receptors, co-receptors, and ion channels. These receptor-ligand interactions trigger signal transduction pathways resulting in activation of nerves and in neural processing, which in turn dictates behavior, including rejection of the noxious item. The impacts of these molecular processes on behavior differ among species, and these differences have impacts at the ecosystem level by driving feeding behavior, organization of communities, and, ultimately, speciation. For example, when comparing mammalian carnivores and herbivores, it is not surprising that herbivores, which encounter a variety of toxic plants in their diets, express a larger number of aversive taste receptors than carnivores. Comparing the molecular mechanisms and ecological consequences of aversive-taste reception among organisms in a variety of ecosystems and ecological niches will illuminate the role of taste in ecology and evolution. PMID:26025470
... field differ in their opinions about the potential benefits of hearing aids, cochlear implants, and other technologies for people with auditory neuropathy. Some professionals report that hearing aids and personal listening devices such as frequency modulation (FM) systems are ...
Bremner, J. Gavin; Slater, Alan M.; Scott P Johnson; Mason, Uschi C.; Spring, Jo; Bremner, Maggie E.
From birth, infants detect associations between the locations of static visual objects and sounds they emit, but there is limited evidence regarding their sensitivity to the dynamic equivalent when a sound-emitting object moves. In four experiments involving 36 2-month-olds, 48 5-month-olds and 48 8-month-olds, we investigated infants’ ability to process this form of spatial co-location. Whereas there was no evidence of spontaneous sensitivity, all age groups detected a dynamic co-location du...
Atencio, Craig A.; Sharpee, Tatyana O.; Schreiner, Christoph E.
Sensory cortical anatomy has identified a canonical microcircuit underlying computations between and within layers. This feed-forward circuit processes information serially from granular to supragranular and to infragranular layers. How this substrate correlates with an auditory cortical processing hierarchy is unclear. We recorded simultaneously from all layers in cat primary auditory cortex (AI) and estimated spectrotemporal receptive fields (STRFs) and associated nonlinearities. Spike-trig...
Roberts, Katherine Leonie
The auditory attention skills of alerting, orienting, and executive control were assessed using behavioural and neuroimaging techniques. Initially, an auditory analogue of the visual attention network test (ANT; Fan, McCandliss, Sommer, Raz, & Posner, 2002) was created and tested alongside the visual ANT in a group of 40 healthy subjects. The results from this study showed similarities between auditory and visual spatial orienting. An fMRI study was conducted to investigate whether the simil...
Cooper, Neil, fl 1983-2004, photographer
A slide showing Indira Gandhi, Indian Prime Minister and Chair of the Conference, Kenneth Kaunda, President of Zambia, and Robert and Sally Mugabe, President and First Lady of Zimbabwe, in discussion at the President's Reception.
At a reception on 28 January, the CERN management presented their best wishes for 2009 to politicians and representatives of the administrations in the local area, and diplomats representing CERN’s Member States, Observer States and other countries.
Britvina, T; Eggermont, J J
It is often implied that during the occurrence of spindle oscillations, thalamocortical neurons do not respond to signals from the outside world. Since recording of sound-evoked activity from cat auditory cortex is common during spindling this implies that sound stimulation changes the spindle-related brain state. Local field potentials and multi-unit activity recorded from cat primary auditory cortex under ketamine anesthesia during successive silence-stimulus-silence conditions were used to investigate the effect of sound on cortical spindle oscillations. Multi-frequency stimulation suppresses spindle waves, as shown by the decrease of spectral power within the spindle frequency range during stimulation as compared with the previous silent period. We show that the percentage suppression is independent of the power of the spindle waves during silence, and that the suppression of spindle power occurs very fast after stimulus onset. The global inter-spindle rhythm was not disturbed during stimulation. Spectrotemporal and correlation analysis revealed that beta waves (15-26 Hz), and to a lesser extent delta waves, were modulated by the same inter-spindle rhythm as spindle oscillations. The suppression of spindle power during stimulation had no effect on the spatial correlation of spindle waves. Firing rates increased under stimulation and spectro-temporal receptive fields could reliably be obtained. The possible mechanism of suppression of spindle waves is discussed and it is suggested that suppression likely occurs through activity of the specific auditory pathway. PMID:18164553
Christison-Lagay, Kate L.; Cohen, Yale E.
Perceptual representations of auditory stimuli (i.e., sounds) are derived from the auditory system’s ability to segregate and group the spectral, temporal, and spatial features of auditory stimuli—a process called “auditory scene analysis”. Psychophysical studies have identified several of the principles and mechanisms that underlie a listener’s ability to segregate and group acoustic stimuli. One important psychophysical task that has illuminated many of these principles and mechanisms is th...
Members of the personnel are invited to take note that only parcels corresponding to official orders or contracts will be handled at CERN. Individuals are not authorised to have private merchandise delivered to them at CERN and private deliveries will not be accepted by the Goods Reception services. Thank you for your understanding.
In this article I will argue for the benefits of receptive skills development (i.e. reading and listening) with children (seven to eleven) at beginner/elementary levels who are able to recognise words in print. I will then outline objectives and discuss text and task selection.
Eggermont, Jos J
The spectro-temporal receptive field (STRF) is frequently used to characterize the linear frequency-time filter properties of the auditory system up to the neuron recorded from. STRFs are strongly stimulus dependent, reflecting the strong non-linearities in the auditory system. Changes in the STRF with stimulus type (tonal, noise-like, vocalizations), sound level, and spectro-temporal sound density are reviewed here. Effects of task and attention on STRF shape are also briefly reviewed. Models to account for these changes, potential improvements to STRF analysis, and implications for neural coding are discussed. PMID:20123121
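The baseline STRF estimator that reviews like this one build on is the spike-triggered average (STA): average the spectrogram patch preceding each spike. The sketch below demonstrates the idea on synthetic data; the white-noise stimulus, the single-peak "ground-truth" STRF, and the threshold spiking rule are all illustrative assumptions.

```python
# Hedged sketch: STRF estimation by spike-triggered averaging (STA)
# on a synthetic white-noise spectrogram. Everything here is a toy model.
import numpy as np

rng = np.random.default_rng(1)

n_freq, n_lags, n_t = 8, 10, 50_000
stimulus = rng.normal(size=(n_freq, n_t))  # white-noise "spectrogram"

# Ground-truth STRF: a single excitatory point at frequency 3, lag 4.
true_strf = np.zeros((n_freq, n_lags))
true_strf[3, 4] = 1.0

# Simple linear-threshold spiking: filter the stimulus, spike when drive > 1.
drive = np.zeros(n_t)
for lag in range(n_lags):
    drive[n_lags:] += true_strf[:, lag] @ stimulus[:, n_lags - lag : n_t - lag]
spikes = drive > 1.0

# STA: average the reversed stimulus patch preceding each spike, so that
# column `lag` of the estimate corresponds to time t - lag.
sta = np.zeros((n_freq, n_lags))
spike_times = np.nonzero(spikes)[0]
for t in spike_times:
    sta += stimulus[:, t - n_lags + 1 : t + 1][:, ::-1]
sta /= len(spike_times)

print(np.unravel_index(np.abs(sta).argmax(), sta.shape))  # (3, 4)
```

With white-noise stimulation the STA is an unbiased estimate of the linear filter; the stimulus dependence discussed in the entry above arises precisely because real auditory neurons violate this linear model under natural or dense stimuli.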
Renata Coelho Marchezan
Full Text Available There are several receptions of the Bakhtinian work: those which situate it in a cultural and historical perspective, making it possible to understand the context inherent to it, the interchanges through which it was instituted, and its development paths; those which take one or another of its ideas separately; and those which seek to infer a more or less systematized framework from it in order to consider a specific object. Concentrating on these last, and on the field of language studies, we examine the receptions of Bakhtinian thought as a pragmatics, a sociolinguistics, a semiotics, a social theory, and a theory of discourse. It is on the perspective of this last discipline that we finally focus, in order to reflect upon some of its fundamental bases.
Grow, Laura L; Carr, James E; Kodak, Tiffany M; Jostad, Candice M; Kisamore, April N
Many early intervention curricular manuals recommend teaching auditory-visual conditional discriminations (i.e., receptive labeling) using the simple-conditional method in which component simple discriminations are taught in isolation and in the presence of a distracter stimulus before the learner is required to respond conditionally. Some have argued that this procedure might be susceptible to faulty stimulus control such as stimulus overselectivity (Green, 2001). Consequently, there has bee...
social media can help us better understand the participatory media culture that has established itself over the past decade. To properly address the question of meaning, however, reception research needs to be adapted to the current media landscape. Taking my point of departure in the multi… model for its potential to provide a portrait of the participatory media culture that stands in contrast to its understanding as exploitation of labor (Scholz, 2013) or as a business model (van Dijck, 2013) disguised as false consciousness. The paper will revisit the five dimensions of the model…, which appears increasingly complex, multi-formed and integrated to the audience. The original dimensions of Schrøder's model need to be looked at with reference to both reception and circulation (Jenkins et al., 2013), and to the network that binds participatory media culture. It appears that with media…
List, Alexandra; Justus, Timothy
Asymmetric distribution of function between the cerebral hemispheres has been widely investigated in the auditory modality. The current approach borrows heavily from visual local-global research in an attempt to determine whether, as in vision, local-global auditory processing is lateralized. In vision, lateralized local-global processing likely relies on spatial frequency information. Drawing analogies between visual spatial frequency and auditory dimensions, two sets of auditory stimuli wer...
Lotfi, Yones; Moosavi, Abdollah; Bakhshi, Enayatollah; Sadjedi, Hamed
Background and Objectives: Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect sound sources in space and can help to differentiate desired speech from other simultaneous sound sources. The aim of this research was to investigate the effects of auditory lateralization training on speech perception in the presence of noise/competing signals in children suspected of having (C)APD. Subjects and Methods: In this analytical interventional study, 60 children suspected of having (C)APD were selected based on multiple auditory processing assessment subtests. They were randomly divided into two groups: a control group (mean age 9.07) and a training group (mean age 9.00). The training program consisted of detecting and pointing to sound sources delivered with interaural time differences under headphones for 12 formal sessions (6 weeks). The spatial word recognition score (WRS) and the monaural selective auditory attention test (mSAAT) were used to follow the effects of the auditory lateralization training. Results: This study showed that in the training group, the mSAAT score and the spatial WRS in noise improved significantly after the auditory lateralization training (p value ≤ 0.001). Conclusions: We used auditory lateralization training for 6 weeks and showed that it can significantly improve speech understanding in noise. Generalization of these results requires further research.
Full Text Available The functional auditory system extends from the ears to the frontal lobes, with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections, as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information, with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli, or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.
Qiang, Lin; Clarke, Chris
Previously, most research on mammalian auditory systems concentrated on human sensory perception, which is limited to frequencies below 20 kHz, and implementations almost always used analog VLSI design. Due to the complexity of the model, it is difficult to implement these algorithms using current digital technology. This paper introduces a simplified model of the biosonar reception system in bats and its implementation in the ``Chiroptera Inspired Robotic CEphaloid'' (CIRCE) project. This model consists of bandpass filters, a half-wave rectifier, low-pass filters, automatic gain control, and spike generation with thresholds. Due to the real-time requirements of the system, it employs Butterworth filters and advanced field-programmable gate array (FPGA) architectures to provide a viable solution. The ultrasonic signal processing is implemented on a Xilinx FPGA Virtex II device in real time. In the system, 12-bit input echo signals from the receivers are sampled at 1 M samples per second for a signal frequency range from 20 to 200 kHz. The system runs a 704-channel-per-ear auditory pipeline in real time, and its output is a coded time series of threshold-crossing points. Comparing the hardware implementation with fixed-point software, the system shows significant performance gains with no loss of accuracy.
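One channel of the pipeline named in this entry (bandpass filter, half-wave rectifier, low-pass smoothing, threshold crossings) can be sketched in software with scipy Butterworth filters. The filter orders, cutoffs, threshold, and the test tone below are illustrative assumptions, not the CIRCE parameters; AGC is omitted for brevity.

```python
# Hedged sketch of one CIRCE-style auditory channel:
# bandpass -> half-wave rectify -> low-pass envelope -> threshold crossings.
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000  # 1 M samples per second, as in the paper

def channel_response(signal, f_lo, f_hi, threshold=0.05):
    """One channel: bandpass, rectify, smooth, return upward threshold crossings."""
    b, a = butter(2, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
    band = lfilter(b, a, signal)
    rectified = np.maximum(band, 0.0)            # half-wave rectifier
    b_lp, a_lp = butter(2, 5_000 / (fs / 2))     # envelope smoothing (assumed 5 kHz)
    envelope = lfilter(b_lp, a_lp, rectified)
    return np.nonzero(np.diff((envelope > threshold).astype(int)) == 1)[0]

# A 50 kHz tone burst (in the bat echolocation range) starting at t = 1 ms
# should drive the 40-60 kHz channel past threshold shortly after onset.
t = np.arange(0, 0.005, 1 / fs)
echo = np.sin(2 * np.pi * 50_000 * t) * (t > 0.001)

crossings = channel_response(echo, 40_000, 60_000)
print(len(crossings))  # at least one crossing, after the burst onset
```

The hardware version replaces these floating-point IIR filters with fixed-point FPGA filter banks, one such channel per frequency band, 704 per ear.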
Full Text Available The object of this case study is to analyze the quality of the logistics department, focusing on the audit process. The purpose of this paper is to present the advantages resulting from systematic audit processes and the methods for analyzing and improving the nonconformities found. The case study was realised at SC Miele Tehnica SRL Brasov, the twelfth production line and the fourth outside Germany. The specific objectives are: clarifying the concept of audit quality; emphasizing the requirements of ISO 19011:2003, "Guidelines for auditing quality management systems and/or environment", on audits; achieving a quality audit and performance analysis; improving the performance of the materials-reception process; and compliance with the legislation and auditing standards applicable in the EU and Romania.
Property owners living close to a proposed 500-kV transmission line route in Ontario expressed concerns that the line would affect their television reception. To give a reasonable evaluation of the impact of the transmission line, tests were conducted before and after installation of the line in which the possibility of active or passive interference to reception was assessed. Measurements were made of signal strength and ambient noise, and television reception was also recorded on videotape. Possible transmission line effects due to radiated noise, signal reduction, and ghosts are analyzed. The analysis of signal and noise conditions, and the assessment of videotaped reception, provide reasonable evidence that the line has had negligible impact on the television reception along the line route. 13 refs., 18 figs., 12 tabs
Hubbard, Timothy L
The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear). PMID:20192565
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition in receptive identification tasks under 2 conditions: when the sample was presented before the comparisons (sample first) and when the comparisons were presented before the sample (comparison first). Participants included 4 typically developing kindergarten-age boys. Stimuli, which included birds and flags, were presented on a computer screen. Acquisition in the 2 conditions was compared in an adapted alternating-treatments design combined with a multiple baseline design across stimulus sets. All participants took fewer trials to meet the mastery criterion in the sample-first condition than in the comparison-first condition. PMID:26511078
Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and
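The CbRF idea, training a linear large-margin classifier to separate spike-eliciting from non-spike-eliciting stimuli and reading the weight vector as the receptive field filter, can be illustrated with a toy simulation. The stimulus distribution, the ground-truth filter, and the threshold spike rule below are assumptions for illustration, not the authors' gerbil data or their exact estimator.

```python
# Hedged sketch of classification-based receptive field (CbRF) estimation:
# an SVM's weight vector recovers the direction of a simulated neuron's filter.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

dim = 20
true_filter = np.zeros(dim)
true_filter[5:8] = 1.0  # the simulated "receptive field" direction

stimuli = rng.normal(size=(5000, dim))
labels = (stimuli @ true_filter > 1.0).astype(int)  # spike vs. no spike

# Large-margin classifier: spike-eliciting vs. non-spike-eliciting stimuli.
clf = LinearSVC(C=1.0, max_iter=10_000).fit(stimuli, labels)
w = clf.coef_.ravel()

# Up to scale, the learned weights align with the true filter.
cosine = w @ true_filter / (np.linalg.norm(w) * np.linalg.norm(true_filter))
print(round(cosine, 2))  # close to 1.0
```

Because the classifier only needs the decision boundary, not the stimulus distribution, this estimator stays consistent under the non-Gaussian, higher-order-correlated stimuli described in the entry above, which is where STA-based methods fail.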
J Gordon Millichap
The clinical and familial characteristics of severe receptive specific language impairment (SLI) were studied in 58 affected children (ratio of boys to girls 2:1) at the Department of Child Life and Health, University of Edinburgh, Scotland.
Numerous analytical and experimental investigations related to SPS microwave power transmission and reception are reported. Aspects discussed include system performance, phase control, power amplifiers, radiating elements, rectenna, solid state configurations, and planned program activities.
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features. PMID:22271265
Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.
Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.
The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal is to study how individual differences in children's receptive language abilities are associated with such changes in cortical…
Ideas of Virgil's 'reception' and of his mythical 'biography' can both be illuminated by an exploration of Virgil's role as a constructed character in his own poetry. The consensus between some earlier Roman responses to Virgil and the traditions of commentary on the poet from later in antiquity informs the following discussion of the poet's individual presence in the performance and reception of his work. Earlier sources, which show an interest in the development of Virgil's work...
Dalton, Polly; Lavie, Nilli
Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when ...
We examine the efficacy of streamwise traveling waves generated by zero-net-mass-flux surface blowing and suction for controlling the onset of turbulence in a channel flow. For small-amplitude actuation, we utilize weakly nonlinear analysis to determine base flow modifications and to assess the resulting net power balance. Receptivity analysis of the velocity fluctuations around this base flow is then employed to design the traveling waves. Our simulation-free approach reveals that, relative to the flow with no control, downstream traveling waves with properly designed speed and frequency can significantly reduce receptivity, which makes them well suited for controlling the onset of turbulence. In contrast, the velocity fluctuations around the upstream traveling waves exhibit larger receptivity to disturbances. Our theoretical predictions, obtained by perturbation analysis (in the wave amplitude) of the linearized Navier-Stokes equations with spatially periodic coefficients, are verified using full-scale...
Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle
Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short duration of target presentation but not modified by auditory feedback of hand position. Long duration of target presentation gave rise to a higher level of accuracy and was accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to support online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532
Lenarz, T; Lim, H; Joseph, G; Reuter, G; Lenarz, M
Deaf patients with severe sensorineural hearing loss can benefit from a cochlear implant (CI), which stimulates the auditory nerve fibers. However, patients who do not have an intact auditory nerve cannot benefit from a CI. The majority of these patients are neurofibromatosis type 2 (NF2) patients who developed neural deafness due to growth or surgical removal of bilateral acoustic neuromas. The only current solution is the auditory brainstem implant (ABI), which stimulates the surface of the cochlear nucleus in the brainstem. Although the ABI provides improvement in environmental awareness and lip-reading capabilities, only a few NF2 patients have achieved some limited open-set speech perception. In the search for alternative procedures, our research group, in collaboration with Cochlear Ltd. (Australia), developed a human prototype auditory midbrain implant (AMI), which is designed to electrically stimulate the inferior colliculus (IC). The IC has potential as a new target for an auditory prosthesis, as it provides access to neural projections necessary for speech perception as well as a systematic map of spectral information. In this paper the present status of research and development in the field of central auditory prostheses is presented with respect to technology, surgical technique and hearing results, as well as the background concepts of the ABI and AMI. PMID:19517084
Fournier, Julien; Monier, Cyril; Pananceau, Marc; Frégnac, Yves
Receptive fields in primary visual cortex (V1) are categorized as simple or complex, depending on their spatial selectivity to stimulus contrast polarity. We studied the dependence of this classification on visual context by comparing, in the same cell, the synaptic responses to three classical receptive field mapping protocols: sparse noise, ternary dense noise and flashed Gabor noise. Intracellular recordings revealed that the relative weights of simple-like and complex-like receptive field components were scaled so as to make the same receptive field more simple-like with dense noise stimulation and more complex-like with sparse or Gabor noise stimulations. However, once these context-dependent receptive fields were convolved with the corresponding stimulus, the balance between simple-like and complex-like contributions to the synaptic responses appeared to be invariant across input statistics. This normalization of the linear/nonlinear input ratio suggests a previously unknown form of homeostatic control of V1 functional properties, optimizing the network nonlinearities to the statistical structure of the visual input. PMID:21765424
Lin, Nay; Reed, Helen L.; Saric, W. S.
The receptivity to freestream sound of the laminar boundary layer over a semi-infinite flat plate with an elliptic leading edge is simulated numerically. The incompressible flow past the flat plate is computed by solving the full Navier-Stokes equations in general curvilinear coordinates. A finite-difference method which is second-order accurate in space and time is used. Spatial and temporal developments of the Tollmien-Schlichting wave in the boundary layer, due to small-amplitude time-harmonic oscillations of the freestream velocity that closely simulate a sound wave travelling parallel to the plate, are observed. The effect of leading-edge curvature is studied by varying the aspect ratio of the ellipse. The boundary layer over the flat plate with a sharper leading edge is found to be less receptive. The relative contribution of the discontinuity in curvature at the ellipse-flat-plate juncture to receptivity is investigated by smoothing the juncture with a polynomial. Continuous curvature leads to less receptivity. A new geometry of the leading edge, a modified super ellipse, which provides continuous curvature at the juncture with the flat plate, is used to study the effect of continuous curvature and inherent pressure gradient on receptivity.
van Uden, Antoine M. J.
This paper identifies characteristics of poor speechreaders, defines developmental dyspraxia in profoundly hearing-impaired children, and outlines the speechreading process. An active training method is described in which expressive and receptive skills are integrated, by having hearing-impaired people speechread their own speech via videotape…
The U.K. National Health Service's emergency reception of victims of accidents involving radiation was reviewed. A shortfall exists, with inadequate provision of coordinated central funding, facilities and training; 50% of NAIR-designated hospitals lacked a shower for decontamination. A Casualty Surgeons Association Broadsheet is presented which addresses some of these shortcomings. (Author)
van Besouw, J.; van Dongen, J.A.E.F.
This article reviews the early academic and public reception of Albert Einstein's theory of relativity in the Netherlands, particularly after Arthur Eddington's eclipse experiments of 1919. Initially, not much attention was given to relativity, as it did not seem an improvement over Hendrik A. Loren
Full Text Available With so much attention on the issue of voice in democratic theory, the inverse question of how people come to listen remains a marginal one. Recent scholarship in affect and neuroscience reveals that cognitive and verbal strategies, while privileged in democratic politics, are often insufficient to cultivate the receptivity that constitutes the most basic premise of democratic encounters. This article draws on this scholarship and a recent case of forum theatre to examine the conditions of receptivity and responsiveness, and to identify specific strategies that foster such conditions. It argues that the forms of encounter most effective in cultivating receptivity are those that move us via affective intensity within pointedly mediated contexts. It is this constellation of strategies, this strange marriage of immersion and mediation, that enabled this performance to surface latent memory, affect and bias, unsettle entrenched patterns of thought and behaviour, and provide the conditions for revisability. This case makes clear that to lie beyond the domain of cognitive and verbal processes is not to lie beyond potential intervention, and offers insight into how such receptivity might be achieved in political processes more broadly.
Full Text Available This paper reviews the public reception of the Research Assessment Exercise 1996 (RAE) from its announcement in December 1996 to the decline of discussion at the end of May 1997. A model for diffusion of the RAE is established which distinguishes extra-communal (or exoteric) from intra-communal (or esoteric) media. The different characteristics of each medium and the changing nature of the discussion over time are considered. Different themes are distinguished in the public reception of the RAE: the spatial distribution of research; the organisation of universities; disciplinary differences in understanding; a perceived conflict between research and teaching; the development of a culture of accountability; and analogies with the organisation of professional football. In conclusion, it is suggested that the RAE and its effects can be more fully considered from the perspective of scholarly communication and understandings of the development of knowledge than they have been by previous contributions in information science, which have concentrated on the possibility of more efficient implementation of existing processes. A fundamental responsibility for funding councils is also identified: to promote the overall health of university education and research, while establishing meaningful differentiations between units.
Pipa, Gordon; Chen, Zhe; Neuenschwander, Sergio; Lima, Bruss; Brown, Emery N
The moving bar experiment is a classic paradigm for characterizing the receptive field (RF) properties of neurons in primary visual cortex (V1). Current approaches for analyzing neural spiking activity recorded from these experiments do not take into account the point-process nature of these data and the circular geometry of the stimulus presentation. We present a novel analysis approach to mapping V1 receptive fields that combines point-process generalized linear models (PPGLM) with tomographic reconstruction computed by filtered-back projection. We use the method to map the RF sizes and orientations of 251 V1 neurons recorded from two macaque monkeys during a moving bar experiment. Our cross-validated goodness-of-fit analyses show that the PPGLM provides a more accurate characterization of spike train data than analyses based on rate functions computed by the methods of spike-triggered averages or first-order Wiener-Volterra kernel. Our analysis leads to a new definition of RF size as the spatial area over which the spiking activity is significantly greater than baseline activity. Our approach yields larger RF sizes and sharper orientation tuning estimates. The tomographic reconstruction paradigm further suggests an efficient approach to choosing the number of directions and the number of trials per direction in designing moving bar experiments. Our results demonstrate that standard tomographic principles for image reconstruction can be adapted to characterize V1 RFs and that two fundamental properties, size and orientation, may be substantially different from what is currently reported. PMID:22734491
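The point-process GLM at the core of this approach models spiking as a Poisson process whose log-intensity is linear in the stimulus covariates. A minimal sketch of that fitting step on toy simulated data (Newton's method on the Poisson log-likelihood; illustrative only, not the authors' code, and omitting the filtered-back-projection stage):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariates: an intercept plus one stimulus feature per time bin.
n_bins = 2000
X = np.column_stack([np.ones(n_bins), rng.normal(size=n_bins)])
true_w = np.array([0.2, 0.5])
rate = np.exp(X @ true_w)          # conditional intensity per bin
y = rng.poisson(rate)              # observed spike counts

# Newton/IRLS ascent on the Poisson log-likelihood
#   LL(w) = sum(y * (X @ w) - exp(X @ w)) + const
w = np.zeros(2)
for _ in range(20):
    mu = np.exp(X @ w)
    grad = X.T @ (y - mu)                  # score
    H = X.T @ (X * mu[:, None])            # observed information
    w += np.linalg.solve(H, grad)

print(w)   # close to true_w given 2000 bins
```

With an intensity model fitted per bar direction, the direction-indexed rate profiles can then be treated as projections for the tomographic back-projection step the abstract describes.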
Buchholz, Jörg; Favrot, Sylvain Emmanuel
... system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception) and ... measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system.
This study investigated the relationship between receptive and productive vocabulary size. The experimental design expanded upon earlier methodologies by using equivalent receptive and productive test formats with different receptive and productive target words to provide more accurate results. Translation tests were scored at two levels of…
Tatagiba, M; Gharabaghi, A
Perceptional benefits and potential risks of electrical stimulation of the central auditory system are constantly changing due to ongoing developments and technical modifications. Therefore, we would like to introduce current treatment protocols and strategies that might have an impact on functional results of auditory brainstem implants (ABI) in profoundly deaf patients. Patients with bilateral tumours as a result of neurofibromatosis type 2 with complete dysfunction of the eighth cranial nerves are the most frequent candidates for auditory brainstem implants. Worldwide, about 300 patients have already received an ABI through a translabyrinthine or suboccipital approach supported by multimodality electrophysiological monitoring. Patient selection is based on disease course, clinical signs, audiological, radiological and psycho-social criteria. The ABI provides the patients with access to auditory information such as environmental sound awareness together with distinct hearing cues in speech. In addition, this device markedly improves speech reception in combination with lip-reading. Nonetheless, there is only limited open-set speech understanding. Results of hearing function are correlated with electrode design, number of activated electrodes, speech processing strategies, duration of pre-existing deafness and extent of brainstem deformation. Functional neurostimulation of the central auditory system by a brainstem implant is a safe and beneficial procedure, which may considerably improve the quality of life in patients suffering from deafness due to bilateral retrocochlear lesions. The auditory outcome may be improved by a new generation of microelectrodes capable of penetrating the surface of the brainstem to access more directly the auditory neurons. PMID:15986735
Dalton, Polly; Lavie, Nilli
Attentional capture by color singletons during shape search can be eliminated when the target is not a feature singleton (Bacon & Egeth, 1994). This suggests that a "singleton detection" search strategy must be adopted for attentional capture to occur. Here we find similar effects on auditory attentional capture. Irrelevant high-intensity singletons interfered with an auditory search task when the target itself was also a feature singleton. However, singleton interference was eliminated when the target was not a singleton (i.e., when nontargets were made heterogeneous, or when more than one target sound was presented). These results suggest that auditory attentional capture depends on the observer's attentional set, as does visual attentional capture. The suggestion that hearing might act as an early warning system that would always be tuned to unexpected unique stimuli must therefore be modified to accommodate these strategy-dependent capture effects. PMID:17557587
Atencio, Craig A.; Schreiner, Christoph E
Excitatory pyramidal neurons and inhibitory interneurons constitute the main elements of cortical circuitry and have distinctive morphologic and electrophysiological properties. Here, we differentiate them by analyzing the time course of their action potentials (APs) and characterizing their receptive field properties in auditory cortex. Pyramidal neurons have longer APs and discharge as Regular-Spiking Units (RSUs), while basket and chandelier cells, which are inhibitory interneurons, have s...
Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory ... of sound as an active component in shaping urban environments. As urban conditions spread globally, new scales, shapes and forms of communities appear and call for new distinctions and models in the study and representation of sonic environments. Particularly so, since urban environments ...
Berthele, Raphael; Wittlin, Gabriele
In this paper a particular context where receptive multilingualism at work can be observed is discussed. The Swiss armed forces underwent a series of quite dramatic downsizing measures, which led to an increased number of mixed groups and linguistically mixed situations regarding the first/native language of officers and the first/native languages of the recruits. Although there are some minimal dispositions in the official documents regarding the right of recruits to benefit f...
In my thesis I explore the potential of non-visual components of sculptural artworks. For that purpose I define reception and perception. I introduce the senses and sculptural artworks of the 20th century that address each specific sense. I examine the reasons for and consequences of the favored treatment of vision and the neglect of the other senses, as well as the situation of people with blindness and visual impairment in today's visual culture. I committed my own artistic expression to create sculptural artwor...
Bjerre, Thomas Ærvold
The essay covers the critical reception of Mississippi writer Lewis Nordan from his debut in 1983 to the boost in scholarly attention in the new millennium. The essay covers newspaper reviews but pays particular attention to the many academic essays that have placed Nordan as a writer in the ... southern literary tradition and have highlighted themes such as magical realism, the grotesque, race relations, music, and gender.
Gál, Viktor; Hámori, J.; Roska, Tamás; Bálya, Dávid; Borostyánkői, ZS; Brendel, M; Lotz, K; Négyessy, L.; Orzó, László; Petrás, István; Rekeczky, Csaba; Takács, J.; Venetiáner, P.; Vidnyánszky, Z.; Zarándy, Ákos
In this paper we demonstrate the potential of the cellular nonlinear/neural network paradigm (CNN) and of the analogic cellular computer architecture (called the CNN Universal Machine, CNN-UM) in modeling different parts and aspects of the nervous system. The structure of living sensory systems and the CNN share a lot of features in common: local interconnections ("receptive field architecture"), nonlinear and delayed synapses for the processing tasks, the potentiality of feedback and u...
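The local "receptive field architecture" described here, where each cell couples only to its 3x3 neighborhood through a feedback template A and a feedforward (control) template B, can be sketched as a discretized Chua-Yang CNN. The template values below are illustrative edge-extraction-style numbers, not taken from the paper:

```python
import numpy as np

def conv3x3(img, t):
    """Apply a 3x3 template over each cell's neighborhood (zero padding)."""
    p = np.pad(img, 1)
    out = np.zeros(img.shape)
    for i in range(3):
        for j in range(3):
            out += t[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of the CNN state equation x' = -x + A*y + B*u + z,
    with the standard piecewise-linear output y = clip(x, -1, 1)."""
    y = np.clip(x, -1.0, 1.0)
    return x + dt * (-x + conv3x3(y, A) + conv3x3(u, B) + z)

# Illustrative edge-like templates (assumed values, not from the paper):
A = np.zeros((3, 3)); A[1, 1] = 2.0            # local positive feedback
B = -np.ones((3, 3)); B[1, 1] = 8.0            # center-surround input weighting
u = np.zeros((8, 8)); u[2:6, 2:6] = 1.0        # bright square on dark ground
x = np.zeros((8, 8))
for _ in range(200):
    x = cnn_step(x, u, A, B, z=-0.5)
y = np.clip(x, -1.0, 1.0)   # settles to +1 on the square's outline, -1 elsewhere
```

The fixed, spatially repeated 3x3 templates are the CNN analogue of a retinal receptive field: every cell runs the same local rule, and global behavior emerges from the coupled dynamics.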
Jung Hoon Lee
Full Text Available The neural correlates that relate auditory categorization to aspects of goal-directed behavior, such as decision-making, are not well understood. Since the prefrontal cortex plays an important role in executive function and the categorization of auditory objects, we hypothesized that neural activity in the prefrontal cortex (PFC) should predict an animal's behavioral reports (decisions) during a category task. To test this hypothesis, we analyzed PFC activity that was recorded while monkeys categorized human spoken words (Russ et al., 2008b). We found that activity in the ventrolateral PFC, on average, correlated better with the monkeys' choices than with the auditory stimuli. This finding demonstrates a direct link between PFC activity and behavioral choices during a non-spatial auditory task.
The spatial-temporal response properties of some simple neurons in the visual pathway arise largely prior to birth. In the absence of visual experience, how do these neurons develop in the visual system? Based on the Wimbauer network with delay, a four-layer feed-forward network model is proposed, which modifies the Hebb learning rule by introducing the asymmetric time window of synaptic modification recently found in neurobiology. The model not only generates by self-organization more diversified spatial-temporal response characteristics of neuronal receptive fields than earlier models but also offers a possible mechanism underlying the development of the receptive fields of contrast-polarity-sensitive neurons found in the visual systems of vertebrates. Thus the proposed model may be more widely applicable than the Linsker and Wimbauer models.
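The asymmetric time window of synaptic modification referred to here is usually modeled as an exponential STDP rule: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, while the reverse ordering depresses it. A minimal sketch with illustrative constants (not the paper's fitted values):

```python
import numpy as np

# Illustrative STDP parameters (assumed, not from the paper)
A_PLUS, A_MINUS = 0.010, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # time constants in ms

def stdp_dw(dt_ms):
    """Weight change for one pre/post spike pair, dt_ms = t_post - t_pre.
    Pre-before-post (dt_ms >= 0) potentiates; post-before-pre depresses."""
    if dt_ms >= 0:
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)

def total_dw(pre_spikes_ms, post_spikes_ms):
    """Accumulate the pairwise rule over two spike trains."""
    return sum(stdp_dw(tp - tq) for tq in pre_spikes_ms for tp in post_spikes_ms)
```

Making the depression amplitude slightly larger than the potentiation amplitude, as here, means uncorrelated pre/post activity depresses on average, which is what lets such rules competitively shape receptive fields during self-organization.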
Vercillo, Tiziana; Burr, David; Gori, Monica
A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind children (9 to 14 years old). Children performed 2 spatial tasks (minimum audible angle and space bisection) and 1 temporal task (temporal bisection). There was no impairment in the temporal task for blind children but, like adults, they showed severely compromised thresholds for spatial bisection. Interestingly, the blind children also showed lower precision in judging minimum audible angle. These results confirm the adult study and go on to suggest that even simpler auditory spatial tasks are compromised in children, and that this capacity recovers over time. PMID:27228448
Cleary, Miranda; Pisoni, David B; Kirk, Karen Iler
The present study investigated whether individual differences in working memory could account for a significant proportion of the variance in the open-set word recognition and receptive vocabulary skills of prelingually deafened, pediatric cochlear implant recipients, after the contribution of known predictors was taken into account. The contributions of four measures of working memory were examined separately for children using oral communication (OC) (n = 32) and Total Communication (TC) (n = 29). Wechsler Intelligence Scale for Children-Third Edition (WISC) digit spans, requiring immediate recall of auditory-only lists in both the forward and backward directions, were collected. Two versions of a novel "memory span game" were also administered: one required memory for sequences of colored lights; the other assessed memory for colored lights presented in conjunction with auditory color-names. A contribution from working memory was observed only for the span tasks that incorporated an auditory processing component. These results suggest a relationship between working memory and the examined outcome measures that is specific to the auditory modality, partially linked to communication mode, and not related to individual differences in a general-purpose component of working memory. PMID:21666765
Full Text Available Adults integrate multisensory information optimally (e.g., Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g., Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that in both children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. In contrast, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that visual-auditory adult-like behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
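The Bayesian prediction against which the bimodal thresholds are compared is the standard maximum-likelihood cue-combination rule (Ernst & Banks, 2002): each cue is weighted by its reliability (inverse variance), and the fused variance falls below either unimodal variance. A small sketch with made-up numbers:

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of an auditory and a visual estimate.
    Weights are the normalized inverse variances (reliabilities)."""
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    mu = w_a * mu_a + (1.0 - w_a) * mu_v
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)   # always below min(var_a, var_v)
    return mu, var

# Illustrative spatial task: vision four times more reliable than audition,
# so the fused estimate sits close to the visual one.
mu, var = fuse(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
```

Measured bimodal thresholds above the predicted `var`, as reported here for the younger children, indicate sub-optimal integration or unisensory dominance.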
Chien, Sung-En; Ono, Fuminori; Watanabe, Katsumi
Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived visual offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived visual offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes. PMID:23439729
Zigmond, Naomi K.; Cicci, Regina
The monograph discusses the psycho-physiological operations for processing of auditory information, the structure and function of the ear, the development of auditory processes from fetal responses through discrimination, language comprehension, auditory memory, and auditory processes related to written language. Disorders of auditory learning…
Wu, Zhigang; Cooper, Jonathan
Active flutter suppression is used to prevent flutter throughout the flight envelope by supplying active control forces in response to vehicle motions. In recent years, studies have been conducted on active flutter suppression using the receptance method. The advantage of the receptance method is that the feedback control gains are purely based upon measured receptances, without any need to evaluate or know the mass, damping, and stiffness matrices of the system. However, determination of the...
Eriksson Barajas, Katarina
The aim of the proposed paper is to increase the knowledge on fiction in use. A combination of reader reception studies (cf. Fish, 1980) and discursive psychology (Edwards & Potter, 1992), which I would like to call discursive reception studies (Eriksson & Aronsson, 2009), that is, a discursive-psychological analysis of reader-reception data, is used in the paper. Such an approach provides possibilities to analyse the role of social interaction in the co-construction of the experience of ...
Ai, Yu; Xu, Lei; Li, Li; Li, Jianfeng; Luo, Jianfen; Wang, Mingming; Fan, Zhaomin; Wang, Haibo
Conclusions: This study shows that the prevalence of auditory neuropathy spectrum disorder (ANSD) in children with inner auditory canal (IAC) stenosis is much higher than in those without IAC stenosis, regardless of whether they have other inner ear anomalies. In addition, the auditory characteristics of ANSD with IAC stenosis are significantly different from those of ANSD without any middle and inner ear malformations. Objectives: To describe the auditory characteristics of children with IAC stenosis and to examine whether a narrow inner auditory canal is associated with ANSD. Method: A total of 21 children with inner auditory canal stenosis participated in this study. A series of auditory tests was administered, and a comparative study was conducted on the auditory characteristics of ANSD according to whether the children had isolated IAC stenosis. Results: Wave V of the auditory brainstem response (ABR) was absent in all patients, while a cochlear microphonic (CM) response was detected in 81.1% of ears with stenotic IAC. Sixteen of 19 (84.2%) ears with isolated IAC stenosis had a CM response present on ABR waveforms. There was no significant difference in ANSD characteristics between the children with and without isolated IAC stenosis. PMID:26981851
Talebi, Vargha; Baker, Curtis L
In the visual cortex, distinct types of neurons have been identified based on cellular morphology, response to injected current, or expression of specific markers, but neurophysiological studies have revealed visual receptive field (RF) properties that appear to be on a continuum, with only two generally recognized classes: simple and complex. Most previous studies have characterized visual responses of neurons using stereotyped stimuli such as bars, gratings, or white noise and simple system identification approaches (e.g., reverse correlation). Here we estimate visual RF models of cortical neurons using visually rich natural image stimuli and regularized regression system identification methods and characterize their spatial tuning, temporal dynamics, spatiotemporal behavior, and spiking properties. We quantitatively demonstrate the existence of three functionally distinct categories of simple cells, distinguished by their degree of orientation selectivity (isotropic or oriented) and the nature of their output nonlinearity (expansive or compressive). In addition, these three types have differing average values of several other properties. Cells with nonoriented RFs tend to have smaller RFs, shorter response durations, no direction selectivity, and high reliability. Orientation-selective neurons with an expansive output nonlinearity have Gabor-like RFs, lower spontaneous activity and responsivity, and spiking responses with higher sparseness. Oriented RFs with a compressive nonlinearity are spatially nondescript and tend to show longer response latency. Our findings indicate multiple physiologically defined types of RFs beyond the simple/complex dichotomy, suggesting that cortical neurons may have more specialized functional roles rather than lying on a multidimensional continuum. PMID:26936978
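Regularized regression system identification of the kind described here can be illustrated with ridge regression: penalizing the norm of the RF estimate keeps the inverse well-conditioned despite the strong pixel correlations of natural stimuli. A toy sketch on simulated data (the penalty value and stimulus model are assumptions, not the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated ("natural-image-like") stimulus ensemble: neighboring pixels
# share variance, which makes plain least squares ill-conditioned.
n_samples, n_pix = 5000, 16
raw = rng.normal(size=(n_samples, n_pix))
S = raw + np.roll(raw, 1, axis=1)

# Ground-truth receptive field: a smooth Gaussian bump over pixel space.
rf_true = np.exp(-0.5 * ((np.arange(n_pix) - 8) / 2.0) ** 2)
r = S @ rf_true + rng.normal(scale=0.5, size=n_samples)   # noisy responses

# Ridge estimate: (S'S + lam*I)^{-1} S'r
lam = 10.0
rf_hat = np.linalg.solve(S.T @ S + lam * np.eye(n_pix), S.T @ r)
```

In practice the penalty weight is chosen by cross-validation, and smoothness or sparseness priors are common alternatives to a plain ridge penalty for RF estimation.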
Glazebrook, Cheryl M; Welsh, Timothy N; Tremblay, Luc
Presenting target and non-target information in different modalities influences target localization if the non-target is within the spatiotemporal limits of perceptual integration. When using auditory and visual stimuli, the influence of a visual non-target on auditory target localization is greater than the reverse. It is not known, however, whether or how such perceptual effects extend to goal-directed behaviours. To gain insight into how audio-visual stimuli are integrated for motor tasks, the kinematics of reaching movements towards visual or auditory targets with or without a non-target in the other modality were examined. When present, the simultaneously presented non-target could be spatially coincident, to the left, or to the right of the target. Results revealed that auditory non-targets did not influence reaching trajectories towards a visual target, whereas visual non-targets influenced trajectories towards an auditory target. Interestingly, the biases induced by visual non-targets were present early in the trajectory and persisted until movement end. Subsequent experimentation indicated that the magnitude of the biases was equivalent whether participants performed a perceptual or motor task, whereas variability was greater for the motor versus the perceptual tasks. We propose that visually induced trajectory biases were driven by the perceived mislocation of the auditory target, which in turn affected both the movement plan and subsequent control of the movement. Such findings provide further evidence of the dominant role visual information processing plays in encoding spatial locations as well as planning and executing reaching action, even when reaching towards auditory targets. PMID:26253323
Ghoul, Asila; Reichmuth, Colleen
Because of its dependence on a highly restricted coastal habitat, Enhydra lutris is especially vulnerable to a variety of environmental and anthropogenic threats. This species is presently listed as threatened and is protected throughout the northern and southern portions of its range. Resource managers are presently faced with uncertainty when responding to and prioritizing potential threats to these animals due to an insufficient understanding of the factors that may disturb or disrupt normal behavior patterns both above and below the water's surface. The objective of these studies was to obtain direct measurements of the source characteristics of vocalizations and the limits of auditory reception in Enhydra lutris. These data are necessary to form a basic but essential understanding of bioacoustics in this species. To further develop this knowledge base, psychoacoustic profiles of aerial and underwater hearing sensitivity as a function of sound frequency are imperative to adequately consider sea otters alongside other marine mammals within the issue of anthropogenic impacts. These studies are presently ongoing in our laboratory. As these coastal-living carnivores have only recently transitioned to a marine lifestyle, an improved understanding of their acoustic communication and auditory adaptations will also provide insight into their evolutionary biology and behavioral ecology as well as the evolutionary pressures shaping underwater perception in marine mammals. PMID:22278472
TAO Liming; ZHANG Nan; YE Xiang; ZHOU Yifeng
To investigate the effects of short-term intraocular pressure (IOP) elevation on the receptive field properties of lateral geniculate nucleus (LGN) cells, responses of LGN cells to annulus, disc, and drifting-grating stimuli of high or low spatial frequency were recorded extracellularly in the cat, with the retinal perfusion pressure kept stable (30 mmHg). Our results indicated that the responses of X- and Y-type LGN cells were significantly weakened during IOP elevation, and that the responses varied with the different receptive-field mechanisms. Specifically, with annulus and disc stimuli, the responses of Y cells were more tolerant of IOP elevation than those of X cells. The surround of the receptive field was more sensitive to IOP elevation than the center, and the mean responses during IOP elevation decreased more than the peak responses did. IOP elevation had more influence on the responses of X cells than on those of Y cells to drifting gratings of high spatial frequency. These results may reflect different degrees of ischemia in the corresponding retinal structures caused by IOP elevation.
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
van Besouw, Jip
This article reviews the early academic and public reception of Albert Einstein's theory of relativity in the Netherlands, particularly after Arthur Eddington's eclipse experiments of 1919. Initially, not much attention was given to relativity, as it did not seem an improvement over Hendrik A. Lorentz's work. This changed after the arrival in Leiden of Paul Ehrenfest. Soon relativity was much studied and led to controversy among a number of conservative intellectuals, as elsewhere in Europe. The tone of Dutch critics was much milder, however. This can be understood when one considers Dutch neutrality during World War I. Einstein's political positions were generally positively perceived in Holland, which Dutch academics put to use in their efforts at international reconciliation abroad and in the presentation of theoretical physics at home.
Having introduced the theory of relativity from Japan, the Chinese quickly and enthusiastically embraced it during the May Fourth Movement, virtually without controversy. This unique passion for and openness to relativity, which helped advance the study of theoretical physics in China in the 1930s, was gradually replaced by imported Soviet criticism after 1949. During the Cultural Revolution, radical Chinese ideologues sponsored organized campaigns against Einstein and relativity, inflicting serious damage on Chinese science and scientific education. China's economic reforms in the late 1970s empowered scientists and presented them with the opportunity to rehabilitate Einstein and call for social democracy. Einstein has since become the symbol in China of the unity of science and democracy, the two eminent objectives of the May Fourth Movement that remain to be achieved in full. Using the reception of relativity as a case study, the essay also discusses issues involving the historical study of modern Chinese science. PMID:17970426
Christopher R MURPHY
This review begins with a brief commentary on the diversity of placentation mechanisms, and then goes on to examine the extensive alterations which occur in the plasma membrane of uterine epithelial cells during early pregnancy across species. Ultrastructural, biochemical and more general morphological data reveal that strikingly common phenomena occur in this plasma membrane during early pregnancy despite the diversity of placental types, from epitheliochorial to hemochorial, which ultimately form in different species. In short, these common morphological and molecular alterations occur across species, are found basolaterally as well as apically, and are an ongoing process during much of early pregnancy rather than an event at the time of attachment; such alterations to the plasma membrane during early pregnancy are key to uterine receptivity.
In the auditory system, the stimulus-response properties of single neurons are often described in terms of the spectrotemporal receptive field (STRF), a linear kernel relating the spectrogram of the sound stimulus to the instantaneous firing rate of the neuron. Several algorithms have been used to estimate STRFs from responses to natural stimuli; these algorithms differ in their functional models, cost functions, and regularization methods. Here, we characterize the stimulus-response function of auditory neurons using a generalized linear model (GLM). In this model, each cell's input is described by: (1) a stimulus filter (STRF); and (2) a post-spike filter, which captures dependencies on the neuron's spiking history. The output of the model is given by a series of spike trains rather than instantaneous firing rate, allowing the prediction of spike train responses to novel stimuli. We fit the model by maximum penalized likelihood to the spiking activity of zebra finch auditory midbrain neurons in response to conspecific vocalizations (songs) and modulation-limited (ml) noise. We compare this model to normalized reverse correlation (NRC), the traditional method for STRF estimation, in terms of predictive power and the basic tuning properties of the estimated STRFs. We find that a GLM with a sparse prior predicts novel responses to both stimulus classes significantly better than NRC. Importantly, we find that STRFs from the two models derived from the same responses can differ substantially and that GLM STRFs are more consistent between stimulus classes than NRC STRFs. These results suggest that a GLM with a sparse prior provides a more accurate characterization of spectrotemporal tuning than does the NRC method when responses to complex sounds are studied in these neurons.
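A minimal sketch of such a point-process GLM, with a stimulus filter and a post-spike history filter fit by penalized maximum likelihood, might look as follows. All data here are simulated, the filters are one-dimensional rather than spectrotemporal, and a small L2 penalty stands in for the paper's sparse prior; every name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Poisson GLM: rate(t) = exp(b + k . x_t + h . s_t), where x_t is
# the recent stimulus and s_t the recent spike history (all synthetic).
T, dk, dh = 4000, 10, 4
stim = rng.standard_normal(T)
k_true = 0.8 * np.exp(-np.arange(dk) / 3.0)      # stimulus filter
h_true = np.array([-2.0, -1.0, -0.5, -0.2])      # refractory-like history
b_true = -1.0

spikes = np.zeros(T)
for t in range(dk, T):
    drive = stim[t - dk + 1:t + 1][::-1] @ k_true
    hist = spikes[t - dh:t][::-1] @ h_true
    spikes[t] = rng.poisson(np.exp(b_true + drive + hist))

# Design matrix: constant term, lagged stimulus, lagged spike history.
X = np.array([np.concatenate(([1.0],
                              stim[t - dk + 1:t + 1][::-1],
                              spikes[t - dh:t][::-1]))
              for t in range(dk, T)])
y = spikes[dk:]

# Penalized maximum likelihood via Newton's method on the (concave)
# Poisson log-likelihood sum(y*eta - exp(eta)) - lam*||w||^2.
w = np.zeros(1 + dk + dh)
lam = 1e-3
for _ in range(20):
    eta = np.clip(X @ w, -20, 20)
    rate = np.exp(eta)
    grad = X.T @ (y - rate) - 2 * lam * w
    hess = X.T @ (X * rate[:, None]) + 2 * lam * np.eye(w.size)
    w += np.linalg.solve(hess, grad)

k_est = w[1:1 + dk]   # recovered stimulus filter
h_est = w[1 + dk:]    # recovered post-spike filter
```

Because the model output is a spike train, fitted filters like `h_est` can be used to simulate responses to novel stimuli, which is the basis of the predictive-power comparison described in the abstract.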
The article examines two key concepts in research on policy borrowing and lending that are often used to explain why and how educational reforms travel across national boundaries: reception and translation. The studies on reception analyse the political, economic, and cultural reasons that account for the attractiveness of a reform from elsewhere.…
Reyes-Iglesias, Pedro; Alonso-Ramos, C.; Sarmiento-Merenguel, Darío; Wangüemert-Pérez, Gonzalo; Cheben, Pavel; Molina-Fernández, Íñigo; Ortega-Moñux, Alejandro; Halir, Robert
Future optical networks call for flexible, high-performance, and low-cost coherent optical receivers. We present here several advances towards such receivers, including integrated optical couplers with ultra-broad bandwidth, as well as novel reception techniques and architectures that will enable high-performance coherent reception without filtering and polarization-splitting elements.
Buyl, Aafke; Housen, Alex
This study takes a new look at the topic of developmental stages in the second language (L2) acquisition of morphosyntax by analysing receptive learner data, a language mode that has hitherto received very little attention within this strand of research (for a recent and rare study, see Spinner, 2013). Looking at both the receptive and productive…
The author investigated whether hypermnesia would occur with auditory input. In addition, the author examined the effects of subjects' knowledge that they would later be asked to recall the stimuli. Two groups of 26 subjects each were given three successive recall trials after they listened to an audiotape of 59 high-imagery nouns. The subjects in the uninformed group were not told that they would later be asked to remember the words; those in the informed group were. Hypermnesia was evident, but only in the uninformed group. PMID:1447564
J Gordon Millichap
The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.
Concetta Chiara Cannella
After a description of the main migration routes toward Italian territory, the article provides an overview of the laws and administrative policy instruments that characterize the system of reception and detention of migrants in Italy. This type of information can help psychosocial workers supporting migrants to better cope with various psychosocial issues, such as the landing in a foreign country. Following a report on the first reception intervention carried out in Palermo, Sicily, by Psicologi per i Popoli – Sicilia, some reflections are presented about the strengths and weaknesses identified, as well as the potential for a greater involvement of psychosocial teams in immigrant reception and detention processes. In fact, psychological science may improve the quality and effectiveness of the emergency services provided to migrants and be useful both in the training of workers and in crisis and emergency risk communication, with particular reference to risk perception about infectious diseases. However, the "added value" of psychological intervention might remain concealed and its usefulness may appear unimpressive. For this reason the paper suggests some principles through which psychology can contribute to processes of inclusiveness within a multicultural society and promote the acknowledgement of its own role in the field of humanitarian intervention.
Carlile, Simon; Leung, Johahn
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss. PMID:26590050
Salyer, Terry Ray
The laser differential interferometer is a high sensitivity (lambda/13,000 minimum detectable wavelength shift), large bandwidth (6 MHz), nonintrusive instrument ideal for low-density optical flow diagnostics. Up to one half wavelength shifts are possible with active phase compensation. With feedback control, a phase modulator stabilizes the system within the linear range. Calibrated receptivity experiments are performed in a Mach 4 quiet-flow Ludwieg tube. Laser-generated thermal spots are used as repeatable, controlled perturbations to the subsonic region behind the bow shock of both a hemispherical nose and a forward-facing cavity. Thermal spot amplitudes, spatial characteristics, and repeatability are measured. Both on-axis and off axis surveys of the subsonic region indicate damped oscillations with both blunt nose configurations. With the forward-facing cavity, a characteristic frequency based on the cavity geometry is detected. The results from both configurations correlate with nose-mounted and cavity base-mounted pressure transducer measurements, and thus remove frequency ambiguity from the pressure transducer experiments. High speed synchronous schlieren images show the thermal spot evolution and impingement at the hemispherical nose. Additionally, the thermal spot in freestream is modeled based on the experimental measurements. Quantitative comparisons with CFD simulations of these experiments show similar characteristics. CFD agreement is expected to improve with future use of the advanced thermal spot model.
Hall, J; Hubbard, A; Neely, S; Tubis, A
How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experiments…
The role of early auditory processing may be to extract some elementary features from an acoustic mixture in order to organize the auditory scene. To accomplish this task, the central auditory system may rely on the fact that sensory objects are often composed of spectral edges, i.e., regions where the stimulus energy changes abruptly over frequency. The processing of acoustic stimuli may benefit from a mechanism enhancing the internal representation of spectral edges. While the visual system is thought to rely heavily on this mechanism (enhancing spatial edges), it is still unclear whether a related process plays a significant role in audition. We investigated the cortical representation of spectral edges, using acoustic stimuli composed of multi-tone pips whose time-averaged spectral envelope contained suppressed or enhanced regions. Importantly, the stimuli were designed such that neural response properties could be assessed as a function of stimulus frequency during stimulus presentation. Our results suggest that the representation of acoustic spectral edges is enhanced in the auditory cortex, and that this enhancement is sensitive to the characteristics of the spectral contrast profile, such as depth, sharpness, and width. Spectral edges are maximally enhanced for sharp contrast and large depth. Cortical activity was also suppressed at frequencies within the suppressed region. Notably, the suppression of firing was larger at frequencies near the lower edge of the suppressed region than at the upper edge. Overall, the present study gives critical insights into the processing of spectral contrasts in the auditory system.
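One candidate mechanism for this kind of edge enhancement, borrowed from vision, is lateral inhibition along the frequency axis. The following sketch (an assumed illustration, not the study's model) applies a center-surround difference-of-Gaussians filter to each time frame of a toy spectrogram containing a suppressed band; the largest responses appear at the band's spectral edges:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def enhance_spectral_edges(spec, sigma_c=1.0, sigma_s=3.0, radius=9):
    """Center-surround DoG filtering along axis 0 (frequency) of `spec`.

    Because both Gaussians are normalized, the DoG sums to zero and the
    response vanishes in spectrally flat regions, leaving peaks at edges.
    """
    dog = gaussian_kernel(sigma_c, radius) - gaussian_kernel(sigma_s, radius)
    out = np.empty_like(spec)
    for t in range(spec.shape[1]):            # filter each time frame
        out[:, t] = np.convolve(spec[:, t], dog, mode="same")
    return out

# Toy time-averaged spectrum: 64 frequency channels, 10 time frames,
# with a suppressed band in rows 24..39 (values are arbitrary units).
spec = np.ones((64, 10))
spec[24:40, :] = 0.1
resp = enhance_spectral_edges(spec)
```

The depth and sharpness sensitivity reported in the abstract would correspond here to the DoG response growing with the size and abruptness of the step between the flat and suppressed regions.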
Auditory prostheses (APs) are widely used electronic devices for patients with severe to profound sensorineural deafness; they electrically stimulate the auditory nerve using an electrode array surgically placed in the inner ear. An AP mainly comprises an external Body Worn Speech Processor (BWSP) and an internal Implantable Receiver Stimulator (IRS). The BWSP receives external sound or speech and generates encoded speech data bits for transmission to the IRS via a radio-frequency transcutaneous link for excitation of the electrode array. After surgical placement of the electrode array in the inner ear, the BWSP should be fine-tuned by an audiologist to achieve 80-100% speech reception for the patient. Problem statement: The basic objective of this research was to develop a simple, user-friendly, personal-computer-based hardware and software interface to fine-tune the BWSP to achieve the best possible speech reception for each individual patient. Approach: The tuning process involves several tasks, such as identifying the active electrode contacts, determining the detection and pain thresholds of each active electrode, and loading these values into the BWSP by reprogramming it. This study concerned the development of an easy and simple user-friendly hardware and software interface for the audiologist to perform post-operative tuning procedures. A microcontroller-based impedance telemetry unit with a bidirectional RF transceiver was developed as a hardware interface between the PC and the IRS. The clinical programming software was developed using VB.NET 2008 to perform the post-operative tuning procedures: (i) impedance measurement, (ii) fitting to determine the threshold and comfort levels for each active electrode, and (iii) reprogramming the speech processor. Results: Simple hardware and software interfaces for the audiologist were constructed and tested with a laboratory-model BWSP and IRS using a simulated resistance electrode array. All the functional aspects were tested and results…
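The fitting step described above, determining per-electrode threshold and comfort levels and using them to bound stimulation, can be sketched abstractly as follows. This is an illustrative data structure only (the study's software was written in VB.NET, and these names, units, and the linear mapping are assumptions, not its actual implementation):

```python
from dataclasses import dataclass

@dataclass
class ElectrodeMap:
    """Fitting result for one electrode contact (illustrative units)."""
    threshold: float   # T level: minimum detectable stimulation
    comfort: float     # C level: maximum comfortable stimulation
    active: bool = True

def amplitude_to_level(amp: float, e: ElectrodeMap) -> float:
    """Map a normalized envelope amplitude in [0, 1] linearly into the
    patient's dynamic range [T, C]; inactive electrodes receive nothing."""
    if not e.active:
        return 0.0
    amp = min(max(amp, 0.0), 1.0)          # clamp into [0, 1]
    return e.threshold + amp * (e.comfort - e.threshold)

# Hypothetical fitting table for a 4-electrode array, including one
# contact flagged inactive during impedance measurement.
emap = [ElectrodeMap(100, 200), ElectrodeMap(110, 190),
        ElectrodeMap(120, 210, active=False), ElectrodeMap(90, 180)]
levels = [amplitude_to_level(0.5, e) for e in emap]
```

Clamping between T and C is the point of the exercise: stimulation below threshold is inaudible, while stimulation above the comfort level is painful, so the fitted map bounds every output the speech processor can produce.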
In this essay I propose a theoretical assemblage integrating several discursive perspectives towards audience reception in the context of new media art creation, with a focus on sonic works. After reviewing the historical origins of reception theory in reader response and its later appropriation by communication and cultural studies, I argue that a mixed discursive perspective offers a potential refinement of contemporary reception theory as applicable to new media production, in which technological abstractions and complexities may be rich for purposes of production, but fall short in appreciation and communicative value for an audience.
This paper examines the exercise of discretion by casualty reception staff, focussing on the problems of accountability that arise when their judgements help shape the process of patient categorization that culminates in clinical diagnosis. Rules and guidelines, which ostensibly relate to bureaucratic objectives, are applied in ways which reflect the situational exigencies of reception work and values embedded in organisational culture. But reception staff are reluctant to acknowledge the importance of their decisions and, particularly where judgements relate to patient condition, present rule-use as a straightforward and certain activity in which interpretation plays little part. PMID:10304220
This paper provides an overview of Greek historical writing of the Middle Byzantine period (approx. 800 until 1000 A.D.), with a particular focus on the major chronicles, such as Theophanes the Confessor (early 9th c.), George the Monk (probably late 9th c.), and Symeon the Logothete (second half of the 10th c.). On the one hand, it is discussed how the chroniclers engage with tradition and either accept it or reject it. Acceptance of tradition is illustrated by many cases where chroniclers keep very close to the narrative modes of their predecessors and in particular where they copy them extensively. Rejection of, or at least deviation from, tradition is illustrated by many cases where new narrative techniques and modes of expression are apparent. Particular attention is paid to some aspects of narrative technique which seem to be innovative. In short, there seems to be an increased tendency towards greater logical (and hence, narrative) coherence in the chronicles and an increased tendency towards concentration on a small number of settings, issues, and persons (in particular, there is an increased concentration on the capital of Constantinople and the Emperor's person). Further, reception is discussed, and especially how Middle Byzantine historical texts were read and used in later writings, including the Slavic literatures. The need for further research in order to understand the transmission processes, especially in the form of the philological study of manuscripts, is stressed.
Citizenship involves being able to speak and be heard as a member of the community. This can be a formal right (e.g., a right to vote). It can also be something experienced in everyday life. However, the criteria for being judged a fellow member of the community are multiple and accorded different weights by different people. Thus, although one may self-define alongside one’s fellows, the degree to which these others reciprocate depends on the weight they give to various membership criteria. This suggests we approach everyday community membership in terms of an identity claims-making process in which first, an individual claims membership through invoking certain criteria of belonging, and second, others evaluate that claim. Pursuing this logic we report three experiments investigating the reception of such identity-claims. Study 1 showed that in Scotland a claim to membership of the national ingroup was accepted more if couched in terms of place of birth and ancestry rather than just in terms of one’s subjective identification. Studies 2 and 3 showed that this differential acceptance mattered for the claimant’s ability to be heard as a community member. We discuss the implications of these studies for the conceptualization of community membership and the realization of everyday citizenship rights.
Martinson, Eric; Brock, Derek
Effective communication with a mobile robot using speech is a difficult problem even when you can control the auditory scene. Robot self-noise or ego noise, echoes and reverberation, and human interference are all common sources of decreased intelligibility. Moreover, in real-world settings, these problems are routinely aggravated by a variety of sources of background noise. Military scenarios can be punctuated by high decibel noise from materiel and weaponry that would easily overwhelm a robot's normal speaking volume. Moreover, in nonmilitary settings, fans, computers, alarms, and transportation noise can cause enough interference to make a traditional speech interface unusable. This work presents and evaluates a prototype robotic interface that uses perspective taking to estimate the effectiveness of its own speech presentation and takes steps to improve intelligibility for human listeners. PMID:23096077
Föcker, J.; Hötting, K.; Gondan, Matthias;
Behavioral and event-related potential (ERP) studies have shown that spatial attention is gradually distributed around the center of the attentional focus. The present study compared uni- and crossmodal gradients of spatial attention to investigate whether the orienting of auditory and visual spatial attention is based on modality-specific or supramodal representations of space. Auditory and visual stimuli were presented from five speaker locations positioned in the right hemifield. Participants had to attend to the innermost or outermost right position in order to detect either visual or auditory deviant stimuli. Detection rates and ERPs indicated that spatial attention is distributed as a gradient. Unimodal spatial ERP gradients correlated with the spatial resolution of the modality. Crossmodal spatial gradients were always broader than the corresponding unimodal gradients.
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. PMID:26541581
Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-aud...
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported that there are distinct neural correlates of temporal (when) versus phonetic/semantic (which) content on audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV – V < A) were found for the auditory N1 and P2 for spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
King, Andrew J.; Parsons, Carl H.; Moore, David R.
Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the "cocktail party effect") are permanently impaired by chronically plugging one ear, both in infancy but especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.
National Aeronautics and Space Administration — Invocon proposes the Surface-borne Time-Of-Reception Measurements (STORM) system as a method to locate the position of lightning strikes on aerospace vehicles....
Olsen, Michel; Andersen, Jens Kristian
A commentary on the discovery of some hitherto overlooked mentions of Holberg in eighteenth-century France. These nuance the picture we have formed of an entirely negative French reception of Holberg. Publication date: 2009...
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) if the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. Data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian Causal Inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact/cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
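The Bayesian Causal Inference computation the abstract refers to can be sketched in a few lines. The following is a minimal illustration, not the study's fitted model: all noise levels, the prior width, and the prior probability of a common cause are invented defaults. When the visual and auditory samples are close, a common cause is inferred and the more precise visual signal dominates; when they are far apart, the estimate reverts toward the auditory-plus-prior solution.

```python
import numpy as np

def causal_inference_estimate(x_v, x_a, sigma_v=2.0, sigma_a=8.0,
                              sigma_p=15.0, mu_p=0.0, p_common=0.5):
    """Model-averaged auditory location estimate under causal inference.
    All default parameter values are illustrative assumptions."""
    # Likelihood of the sample pair under a single common cause (C=1)
    var_sum = (sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2
               + sigma_a**2 * sigma_p**2)
    quad = ((x_v - x_a)**2 * sigma_p**2 + (x_v - mu_p)**2 * sigma_a**2
            + (x_a - mu_p)**2 * sigma_v**2)
    like_c1 = np.exp(-0.5 * quad / var_sum) / (2 * np.pi * np.sqrt(var_sum))

    # Likelihood under two independent causes (C=2)
    def like_single(x, sigma):
        v = sigma**2 + sigma_p**2
        return np.exp(-0.5 * (x - mu_p)**2 / v) / np.sqrt(2 * np.pi * v)
    like_c2 = like_single(x_v, sigma_v) * like_single(x_a, sigma_a)

    # Posterior probability that both samples share one cause
    post_c1 = (like_c1 * p_common
               / (like_c1 * p_common + like_c2 * (1 - p_common)))

    # Reliability-weighted estimates under each causal structure
    fused = ((x_v / sigma_v**2 + x_a / sigma_a**2 + mu_p / sigma_p**2)
             / (1 / sigma_v**2 + 1 / sigma_a**2 + 1 / sigma_p**2))
    aud_only = ((x_a / sigma_a**2 + mu_p / sigma_p**2)
                / (1 / sigma_a**2 + 1 / sigma_p**2))

    # Model averaging: weight each estimate by its posterior probability
    return post_c1 * fused + (1 - post_c1) * aud_only
```

With these illustrative noise levels, a small audiovisual discrepancy yields a visually dominated estimate, whereas a large discrepancy is attributed to independent causes and the auditory estimate survives largely intact.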
On the evening of September 21, 2012, a new moon hangs over the newly lit streetlights. The Mid-Autumn reception for the year of the Dragon was held by Chinese Ministry of Culture in the National Center for the Performing Arts, which is located to the west of Tiananmen Square. More than 500 distinguished guests attended the reception, including diplomatic envoys and cultural
Denise Cogo; Liliane Dutra Brignol
In this article we propose an itinerary for thinking about social networks as environments, and we seek to situate their implications for reception studies of the internet. We start from the understanding that the role social networks play in organizing contemporary relations has consequences for the configuration and uses of media, the internet in particular. This demands a reconfiguration of how reception processes are approached. In this text we discuss key concepts ...
Abdul Aziz Mat Isa; Adnan Yusoff
The balanced development of the overall personality of human beings can be reached through the process of education. Higher education institutions as a whole must play the important role of establishing not just academic priorities but also building values and a new culture in society. The aim of this paper is to study undergraduate students' reception of values. More specifically, this paper carries out a survey on Muslim students' reception of th...
Cyntia Marconato; Eva Cantalejo Munhoz; Marcia Maria Menim; Maria Thereza Albach
OBJECTIVE: To investigate the effects of receptive music therapy in clinical practice. METHODS: Receptive music therapy was individually applied via musical auditions, including five stages: musical stimulation, sensation, situation, reflection, and behavioral alteration. Following anamnesis and obtainment of consent, patients answered a first questionnaire on health risk evaluation (Q1), and after participating in 16 weekly music therapy sessions, answered a second one (Q2). RESULTS: Two men...
Petersson, P; Holmer, M; Breslin, T; Granmo, M; Schouenborg, J
The paper describes a computerized method, termed receptive field imaging (RFI), for the rapid mapping of multiple receptive fields and their respective sensitivity distributions. RFI uses random stimulation of multiple sites, in combination with an averaging procedure, to extract the relative contribution from each of the stimulated sites. Automated multi-electrode stimulation and recording, with spike detection and counting, are performed on-line by the RFI programme. Direct user interpretation of receptive field changes is made possible by a user-friendly graphic interface. A series of imaging experiments was carried out to evaluate the functional capacity of the system. RFI was tested on the receptive fields in the nociceptive withdrawal reflex (NWR) system in the rat. RFI replicates the results obtained with conventional methods and allows the display of receptive field dynamics induced by topical spinal cord application of morphine and naloxone on a minute-to-minute time scale. Data variance was estimated, and proved to be small enough to yield a stable representation of the receptive field, thereby achieving a high sensitivity in dynamic imaging experiments. The large number of stimulation and registration sites that can be monitored in parallel permits detailed network analysis of synaptic sets, corresponding to 'connection weights' between individual neurones. PMID:11164238
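The random-stimulation-plus-averaging idea behind RFI can be illustrated with a toy simulation. In this hedged sketch, the eight "sites", the Poisson spiking model, and the sensitivity weights are all invented for illustration: each trial stimulates a random subset of sites, and each site's relative contribution is recovered by contrasting the mean spike count on trials where that site was on against trials where it was off.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth sensitivity of one neuron to 8 skin sites
true_weights = np.array([0.0, 0.1, 0.8, 1.0, 0.6, 0.2, 0.0, 0.0])

def spike_count(active_sites):
    """Poisson spike count evoked by a random subset of stimulated sites."""
    return rng.poisson(5.0 * true_weights @ active_sites)

# Random stimulation of multiple sites over many trials
n_trials = 20_000
stims = rng.integers(0, 2, size=(n_trials, true_weights.size)).astype(float)
counts = np.array([spike_count(s) for s in stims])

# Averaging procedure: each site's contribution is the mean response
# with the site stimulated minus the mean response without it
estimate = np.array([counts[stims[:, i] == 1].mean()
                     - counts[stims[:, i] == 0].mean()
                     for i in range(true_weights.size)])
estimate /= estimate.max()          # normalize to the peak site
```

Because many sites are probed in parallel within the same trial sequence, a full receptive-field map is obtained far faster than with site-by-site stimulation, which is the point of the method.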
Werner-Reiss, Uri; Groh, Jennifer M
Is sound location represented in the auditory cortex of humans and monkeys? Human neuroimaging experiments have had only mixed success at demonstrating sound location sensitivity in primary auditory cortex. This is in apparent conflict with studies in monkeys and other animals, where single-unit recording studies have found stronger evidence for spatial sensitivity. Does this apparent discrepancy reflect a difference between humans and animals, or does it reflect differences in the sensitivit...
Mukai, Ryan; Vilnrotter, Victor
A microwave aeronautical-telemetry receiver system includes an antenna comprising a seven-element planar array of receiving feed horns centered at the focal point of a paraboloidal dish reflector that is nominally aimed at a single aircraft or at multiple aircraft flying in formation. Through digital processing of the signals received by the seven feed horns, the system implements a method of enhanced cancellation of interference, such that it becomes possible to receive telemetry signals in the same frequency channel simultaneously from either or both of two aircraft at slightly different angular positions within the field of view of the antenna, even in the presence of multipath propagation. The present system is an advanced version of the system described in "Spatio-Temporal Equalizer for a Receiving-Antenna Feed Array" (NPO-43077), NASA Tech Briefs, Vol. 34, No. 2 (February 2010), page 32. To recapitulate: The radio-frequency telemetry signals received by the seven elements of the array are digitized, converted to complex baseband form, and sent to a spatio-temporal equalizer that consists mostly of a bank of seven adaptive finite-impulse-response (FIR) filters (one for each element in the array) plus a unit that sums the outputs of the filters. The combination of the spatial diversity of the feedhorn array and the temporal diversity of the filter bank affords better multipath suppression performance than is achievable by means of temporal equalization alone. The FIR filter bank adapts itself in real time to enable reception of telemetry at a low bit error rate, even in the presence of frequency-selective multipath propagation like that commonly found at flight-test ranges. The combination of the array and the filter bank makes it possible to constructively add multipath incoming signals to the corresponding directly arriving signals, thereby enabling reductions in telemetry bit-error rates.
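The core filter-and-sum operation described above, one FIR filter per feed-horn element with the outputs summed, can be sketched as a minimal NumPy illustration. The taps here are fixed; the real system adapts them in real time, which is not shown.

```python
import numpy as np

def spatio_temporal_combine(element_signals, fir_taps):
    """Filter-and-sum spatio-temporal equalizer (illustrative sketch).

    element_signals : (n_elements, n_samples) complex baseband signals
    fir_taps        : sequence of per-element FIR coefficient arrays
    Returns the combined (n_samples,) output: each element's signal is
    passed through its own causal FIR filter, then the outputs are summed.
    """
    out = np.zeros(element_signals.shape[1], dtype=complex)
    for sig, taps in zip(element_signals, fir_taps):
        out += np.convolve(sig, taps)[:len(sig)]   # causal FIR per element
    return out
```

Choosing the taps so that a delayed multipath copy on one element is time-aligned with the directly arriving signal on another makes the two add constructively rather than destructively, which is the effect the text describes.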
Lotto, Andrew; Holt, Lori
Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479-489. DOI: 10.1002/wcs.123. For further resources related to this article, please visit the WIREs website. PMID:26302301
Todd, Travis P.; Mehlman, Max L.; Keene, Christopher S.; DeAngeli, Nicole E.; Bucci, David J.
The retrosplenial cortex (RSC) has a well-established role in contextual and spatial learning and memory, consistent with its known connectivity with visuo-spatial association areas. In contrast, RSC appears to have little involvement with delay fear conditioning to an auditory cue. However, all previous studies have examined the contribution of…
Braga, Rodrigo M.; Fu, Richard Z.; Seemungal, Barry M.; Wise, Richard J. S.; Leech, Robert
The neural mechanisms supporting auditory attention are not fully understood. A dorsal frontoparietal network of brain regions is thought to mediate the spatial orienting of attention across all sensory modalities. Key parts of this network, the frontal eye fields (FEF) and the superior parietal lobes (SPL), contain retinotopic maps and elicit saccades when stimulated. This suggests that their recruitment during auditory attention might reflect crossmodal oculomotor processes; however this has not been confirmed experimentally. Here we investigate whether task-evoked eye movements during an auditory task can predict the magnitude of activity within the dorsal frontoparietal network. A spatial and non-spatial listening task was used with on-line eye-tracking and functional magnetic resonance imaging (fMRI). No visual stimuli or cues were used. The auditory task elicited systematic eye movements, with saccade rate and gaze position predicting attentional engagement and the cued sound location, respectively. Activity associated with these separate aspects of evoked eye-movements dissociated between the SPL and FEF. However these observed eye movements could not account for all the activation in the frontoparietal network. Our results suggest that the recruitment of the SPL and FEF during attentive listening reflects, at least partly, overt crossmodal oculomotor processes during non-visual attention. Further work is needed to establish whether the network’s remaining contribution to auditory attention is through covert crossmodal processes, or is directly involved in the manipulation of auditory information. PMID:27242465
Andersen, Tobias; Tiippana, K.; Laarni, J.;
Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences from each of the faces and from the voice on the auditory speech percept. We found that directing visual spatial attention towards a face increased the influence of that face on auditory perception. However, the influence of the voice on auditory perception did not change, suggesting that audiovisual...
Basner, Mathias; Babisch, Wolfgang; Davis, Adrian; Brink, Mark; Clark, Charlotte; Janssen, Sabine; Stansfeld, Stephen
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mechanisms involved in noise-induced hair-cell and nerve damage has substantially increased, and preventive and therapeutic drugs will probably become available within 10 years. Evidence of the non-auditory effects of environmental noise exposure on public health is growing. Observational and experimental studies have shown that noise exposure leads to annoyance, disturbs sleep and causes daytime sleepiness, affects patient outcomes and staff performance in hospitals, increases the occurrence of hypertension and cardiovascular disease, and impairs cognitive performance in schoolchildren. In this Review, we stress the importance of adequate noise prevention and mitigation strategies for public health. PMID:24183105
Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an Object Related Negativity (ORN) in the auditory event related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
Kidd, Gary R; Watson, Charles S; Gygi, Brian
Performance on 19 auditory discrimination and identification tasks was measured for 340 listeners with normal hearing. Test stimuli included single tones, sequences of tones, amplitude-modulated and rippled noise, temporal gaps, speech, and environmental sounds. Principal components analysis and structural equation modeling of the data support the existence of a general auditory ability and four specific auditory abilities. The specific abilities are (1) loudness and duration (overall energy) discrimination; (2) sensitivity to temporal envelope variation; (3) identification of highly familiar sounds (speech and nonspeech); and (4) discrimination of unfamiliar simple and complex spectral and temporal patterns. Examination of Scholastic Aptitude Test (SAT) scores for a large subset of the population revealed little or no association between general or specific auditory abilities and general intellectual ability. The findings provide a basis for research to further specify the nature of the auditory abilities. Of particular interest are results suggestive of a familiar sound recognition (FSR) ability, apparently specialized for sound recognition on the basis of limited or distorted information. This FSR ability is independent of normal variation in both spectral-temporal acuity and of general intellectual ability. PMID:17614500
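The factor-analytic logic of the study above — a shared "general auditory ability" plus test-specific variance — can be sketched with principal components on simulated scores. The score model below is invented for illustration (the study itself used principal components analysis together with confirmatory structural equation modeling):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated battery: 340 listeners x 6 tests, each score a mix of one
# shared general-ability factor and independent test-specific variance
n_listeners, n_tests = 340, 6
general = rng.normal(size=(n_listeners, 1))
scores = 0.7 * general + 0.5 * rng.normal(size=(n_listeners, n_tests))

# Principal components via SVD of the standardized score matrix
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # variance share per component
# vt[0] holds the loadings of the first (general-ability) component;
# a single dominant component with same-sign loadings on every test
# is the signature of a general ability
```

In the real data the residual structure after the general factor is what defines the four specific abilities; this sketch only shows how the general factor surfaces as a dominant first component.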
Noreña, A. J.; Eggermont, J. J.
The Zwicker tone (ZT) is defined as an auditory negative afterimage, perceived after the presentation of an appropriate inducer. Typically, a notched noise (NN) with a notch width of 1/2 octave induces a ZT with a pitch falling in the frequency range of the notch. The aim of the present study was to find potential neural correlates of the ZT in the primary auditory cortex of ketamine-anesthetized cats. Responses of multiunits were recorded simultaneously with two 8-electrode arrays during 1 s...
Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.
Pérez-González, David; Malmierca, Manuel S.
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the s...
Golden, HL; Agustus, JL; Nicholas, JM; Schott, JM; Crutch, SJ; Mancini, L; Warren, JD
Deficits of auditory scene analysis accompany Alzheimer's disease (AD). However, the functional neuroanatomy of spatial sound processing has not been defined in AD. We addressed this using a "sparse" fMRI virtual auditory spatial paradigm in 14 patients with typical AD in relation to 16 healthy age-matched individuals. Sound stimulus sequences discretely varied perceived spatial location and pitch of the sound source in a factorial design. AD was associated with loss of differentiated cortica...
Finocchietti, Sara; Cappagli, Giulia; Gori, Monica
The consequence of blindness on auditory spatial localization has been an interesting issue of research in the last decade providing mixed results. Enhanced auditory spatial skills in individuals with visual impairment have been reported by multiple studies, while some aspects of spatial hearing seem to be impaired in the absence of vision. In this study, the ability to encode the trajectory of a 2-dimensional sound motion, reproducing the complete movement, and reaching the correct end-point...
Romanski, L. M.; Tian, B.; Fritz, J.; Mishkin, M.; Goldman-Rakic, P. S.; Rauschecker, J. P.
‘What’ and ‘where’ visual streams define ventrolateral object and dorsolateral spatial processing domains in the prefrontal cortex of nonhuman primates. We looked for similar streams for auditory–prefrontal connections in rhesus macaques by combining microelectrode recording with anatomical tract-tracing. Injection of multiple tracers into physiologically mapped regions AL, ML and CL of the auditory belt cortex revealed that anterior belt cortex was reciprocally connected with the frontal pole (area 10), rostral principal sulcus (area 46) and ventral prefrontal regions (areas 12 and 45), whereas the caudal belt was mainly connected with the caudal principal sulcus (area 46) and frontal eye fields (area 8a). Thus separate auditory streams originate in caudal and rostral auditory cortex and target spatial and non-spatial domains of the frontal lobe, respectively. PMID:10570492
This work analyzes the presence of Brazilian Cinema Novo in Portugal during the 1960s and 1970s, through a review of texts published in magazines and newspapers. Here, those texts are seen as traces of the historical reception of the films, important for the dissemination of the Cinema Novo movement and even its legitimation on Portuguese territory. The analysis of these texts has shown that the excellent reception the press gave Cinema Novo shaped the program of the Portuguese film criticism that supported the ideals of a politically and aesthetically avant-garde cinema.
Liang, Bo; Chen, Weibiao
The principle of a band-limited space optical communication system using space diversity methods and a time-domain Rake receiver is analyzed. A joint channel equalizer method combining diversity reception and equalization is presented for space laser communication. By computer simulation, the bit error rates of a noncoherent space optical on-off keying signal are analysed for different space diversity methods, for Rake reception with different inter-symbol interferences, and for joint diversity equalization with different signal-to-noise ratios and channel numbers. The results show that the joint diversity equalization method can markedly enhance space optical communication performance.
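A toy Monte-Carlo version of the diversity-reception comparison can be written in a few lines. This sketch uses log-normal scintillation, threshold detection, and parameter values that are all assumptions for illustration, and it omits the Rake and equalization stages entirely; it only shows the basic result that averaging independent fading branches lowers the OOK bit error rate.

```python
import numpy as np

rng = np.random.default_rng(1)

def ook_ber(n_branches, n_bits=200_000, snr_db=6.0):
    """Monte-Carlo BER for noncoherent OOK with equal-gain spatial
    diversity over independent log-normal fading branches (toy model)."""
    bits = rng.integers(0, 2, n_bits)
    noise_sd = 1.0 / np.sqrt(10 ** (snr_db / 10))
    # Independent scintillation per branch, unit mean intensity
    fading = rng.lognormal(mean=-0.125, sigma=0.5,
                           size=(n_branches, n_bits))
    received = fading * bits + rng.normal(0.0, noise_sd,
                                          size=(n_branches, n_bits))
    decision = received.mean(axis=0) > 0.5   # combine, then threshold
    return np.mean(decision != bits)

ber_single, ber_diversity = ook_ber(1), ook_ber(4)
```

With four branches the fading on any one aperture is averaged out, so the combined intensity stays closer to its mean and fewer "on" bits fall below the decision threshold.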
In the last few years, functional Magnetic Resonance Imaging (fMRI) has been widely accepted as an effective tool for mapping brain activities in both the sensorimotor and the cognitive field. The present work aims to assess the possibility of using fMRI methods to study the cortical response to different acoustic stimuli. Furthermore, we refer to recent data collected at Frankfurt University on the cortical pattern of auditory hallucinations. Healthy subjects showed broad bilateral activation, mostly located in the transverse gyrus of Heschl. The analysis of the cortical activation induced by different stimuli has pointed out a remarkable difference in the spatial and temporal features of the auditory cortex response to pulsed tones and pure tones. The activated areas during episodes of auditory hallucinations match the location of primary auditory cortex as defined in control measurements with the same patients and in the experiments on healthy subjects. (authors)
Stojanovik, Vesna; Riddell, Patricia
Despite ample research into the language skills of children with specific reading disorder no studies so far have investigated whether there may be a difference between expressive and receptive language skills in this population. Yet, neuro-anatomical models would predict that children who have specific reading disorder which is not associated…
The purpose of the present study was to understand the reciprocal, bidirectional longitudinal relation between joint book reading and English receptive vocabulary. To address the research goals, a nationally representative sample of Head Start children, the Head Start Family and Child Experiences Survey (2003 cohort), was used for analysis. The…
The study substantiates an algorithm (7 stages) for forming the signature ('crown') technique in the competitive activity of veteran judo athletes, describing each of its steps. The purposeful formation of a judoka's individual arsenal of technical actions using the proposed algorithm is implemented on the basis of identification of the best techniques, followed by their in-depth development and improvement.
Carter, B Elijah; Conn, Caitlin C; Wiles, Jason R
Due to a phenomenon known as the 'backfire effect', intuition-based opinions can be inadvertently strengthened by evidence-based counterarguments. Students' views on genetically modified organisms (GMOs) may be subject to this effect. We explored the impact of an empathetically accessible topic, world hunger, on receptivity to GMO technology as an alternative to direct evidence-based approaches. PMID:27246454
Gasparini, Clelia; Andreatta, Gabriele; Pilastro, Andrea
The females of several internal fertilizers are able to store sperm for a long time, reducing the risk of sperm limitation. However, it also means that males can attempt to mate outside females' receptive period, potentially increasing the level of sperm competition and exacerbating sexual conflict over mating. The guppy ( Poecilia reticulata), an internally fertilizing fish, is a model system of such competition and conflict. Female guppies accept courtship and mate consensually only during receptive periods of the ovarian cycle but receive approximately one (mostly forced) mating attempt per minute both during and outside their sexually receptive phase. In addition, females can store viable sperm for months. We expected that guppy females would disfavour sperm received during their unreceptive period, possibly by modulating the quality and/or quantity of the components present in the ovarian fluid (OF) over the breeding cycle. Ovarian fluid has been shown to affect sperm velocity, a determinant of sperm competition success in this and other fishes. We found that in vitro sperm velocity is slower in OF collected from unreceptive females than in OF from receptive females. Visual stimulation with a potential partner prior to collection did not significantly affect in vitro sperm velocity. These results suggest that sperm received by unreceptive females may be disfavoured as sperm velocity likely affects the migration process and the number of sperm that reach storage sites.
Tran, Thu Hoang
This paper is concerned with research in measuring receptive and productive vocabulary knowledge in second language (L2) learning, including English as a second language (ESL) learning and English as a foreign language (EFL) learning. The paper will begin with a brief introduction to the role of vocabulary in language learning, and then an…
Bialystok, Ellen; Luk, Gigi
English receptive vocabulary scores from 797 monolingual and 808 bilingual participants between the ages of 17 and 89 years old were aggregated from 20 studies to compare standard scores across language groups. The distribution of scores was unimodal for both groups but the mean score was significantly different, with monolinguals obtaining higher…
Andrés Canga Alonso
This paper responds to the need for research on foreign-language vocabulary knowledge in secondary education in Spain. Thus, this research aims at investigating (i) the receptive vocabulary knowledge of 49 girls and 43 boys, Spanish students learning English as a foreign language in a secondary school located in the north of Spain, and (ii) its pedagogical implications for students' understanding of written and spoken discourse in English (Adolphs & Schmitt 2004; Laufer 1992, 1997; Nation 2001). We used the 2,000 frequency band of the Vocabulary Levels Test (VLT) (Schmitt, Schmitt & Clapham, 2001), version 2, as the instrument to measure students' receptive vocabulary knowledge. Our results reveal that the mean of girls' receptive vocabulary size is below 1,000 words, which agrees with the estimates proposed by López-Mezquita (2005) for Spanish students of the same age and educational level. In contrast, the mean for boys is slightly above 1,000 words, the differences between boys' and girls' performance on the VLT being statistically significant. Our data also indicate that most of the students analysed in the present study could have problems understanding written and spoken discourse due to their low scores on the receptive vocabulary test.
The aim of this paper is to describe Dewey's reception in the Spanish-speaking countries that constitute the Hispanic world. Without any doubt, it can be said that in the past century Spain and the countries of South America have been a world apart, lagging far behind the mainstream Western world. It includes a number of names and facts about the…
Adachi-Mejia, Anna M.; Sutherland, Lisa A.; Longacre, Meghan R.; Beach, Michael L.; Titus-Ernstoff, Linda; Gibson, Jennifer J.; Dalton, Madeline A.
Objective: This study examined the relationship between adolescent weight status and food advertisement receptivity. Design: Survey-based evaluation with data collected at baseline (initial and at 2 months), and at follow-up (11 months). Setting: New Hampshire and Vermont. Participants: Students (n = 2,281) aged 10-13 in 2002-2005. Main Outcome…
Ungerer, Judy A.; Sigman, Marian
Assessment of category knowledge and receptive language skills of 16 autistic (3-6 years old), mentally retarded, and normal children indicated that the autistic children's knowledge of function, form, and color categories was comparable to that of the mental-age-matched mentally retarded and normal comparison groups. (Author/DB)
Grigorescu, Cosmin; Petkov, Nicolai; Westenberg, Michel A.
We propose a biologically motivated computational step, called nonclassical receptive field (non-CRF) inhibition, more generally surround inhibition or suppression, to improve contour detection in machine vision. Non-CRF inhibition is exhibited by 80% of the orientation-selective neurons in the prim
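The suppression step described above can be illustrated with a toy example: each point in an edge-response map is inhibited by the average response pooled over an annular surround, so an isolated contour survives while responses embedded in texture are suppressed. This is a minimal sketch of the surround-inhibition idea only, not the authors' operator; the kernel radii and the inhibition weight `alpha` are illustrative assumptions.

```python
import numpy as np
from scipy.signal import convolve2d

def ring_kernel(r_in, r_out):
    """Normalized annulus used to pool responses from the non-classical surround."""
    y, x = np.mgrid[-r_out:r_out + 1, -r_out:r_out + 1]
    r = np.hypot(x, y)
    ring = ((r > r_in) & (r <= r_out)).astype(float)
    return ring / ring.sum()

def non_crf_inhibition(edge_map, alpha=1.0, r_in=2, r_out=6):
    """Subtract the surround-averaged response from each edge response.

    Isolated contours contribute little to their own surround and survive;
    texture regions inhibit themselves almost completely.
    """
    surround = convolve2d(edge_map, ring_kernel(r_in, r_out),
                          mode="same", boundary="symm")
    return np.clip(edge_map - alpha * surround, 0.0, None)
```

Applied to a uniform texture the output is near zero, while a single contour line retains most of its response, which is the qualitative behaviour the contour-detection step relies on.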
Inagaki, Mikio; Sasaki, Kota S; Hashimoto, Hajime; Ohzawa, Izumi
Neurons in the middle temporal (MT) visual area are thought to represent the velocity (direction and speed) of motion. Previous studies suggest the importance of both excitation and suppression for creating velocity representation in MT; however, details of the organization of excitation and suppression at the MT stage are not understood fully. In this article, we examine how excitatory and suppressive inputs are pooled in individual MT neurons by measuring their receptive fields in a three-dimensional (3-D) spatiotemporal frequency domain. We recorded the activity of single MT neurons from anesthetized macaque monkeys. To achieve both quality and resolution of the receptive field estimations, we applied a subspace reverse correlation technique in which a stimulus sequence of superimposed multiple drifting gratings was cross-correlated with the spiking activity of neurons. Excitatory responses tended to be organized in a manner representing a specific velocity independent of the spatial pattern of the stimuli. Conversely, suppressive responses tended to be distributed broadly over the 3-D frequency domain, supporting a hypothesis of response normalization. Despite the nonspecific distributed profile, the total summed strength of suppression was comparable to that of excitation in many MT neurons. Furthermore, suppressive responses reduced the bandwidth of velocity tuning, indicating that suppression improves the reliability of velocity representation. Our results suggest that both well-organized excitatory inputs and broad suppressive inputs contribute significantly to the invariant and reliable representation of velocity in MT. PMID:27193321
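The reverse-correlation logic described above can be illustrated with a toy simulation: a model neuron that fires preferentially for one (spatial frequency, temporal frequency) combination, i.e. one velocity, is probed with a random grating sequence, and averaging the stimuli on which spikes occur recovers its excitatory tuning. This is a deliberately simplified sketch (discrete frequency bins, no suppressive component, zero latency), not the authors' subspace method; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus ensemble: each frame shows one drifting grating,
# indexed by a (spatial frequency, temporal frequency) bin.
n_sf, n_tf, n_frames = 8, 8, 20000
stim = rng.integers(0, n_sf * n_tf, size=n_frames)

# Toy neuron: fires preferentially for one (sf, tf) combination.
preferred = 3 * n_tf + 5
p_spike = np.where(stim == preferred, 0.8, 0.02)
spikes = rng.random(n_frames) < p_spike

# Reverse correlation: histogram the stimuli on spike frames to estimate
# the excitatory receptive field in the frequency domain.
rf = np.zeros(n_sf * n_tf)
np.add.at(rf, stim[spikes], 1.0)
rf /= spikes.sum()

best = rf.argmax()  # bin with the strongest spike-triggered stimulus mass
```

The estimated peak lands on the preferred bin; in the real experiment the same cross-correlation is carried out over a continuous 3-D spatiotemporal frequency domain with superimposed gratings.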
Guilbert, Alma; Clément, Sylvain; Senouci, Latifa; Pontzeele, Sylvain; Martin, Yves; Moroni, Christine
Although visual deficits due to unilateral spatial neglect (USN) have been frequently described in the literature, fewer studies have been interested in directional hearing impairment in USN. The aim of this study was to explore sound lateralisation deficits in USN. Using a paradigm inspired by Tanaka et al. (1999), interaural time differences (ITD) were presented over headphones to give the illusion of a leftward or a rightward movement of sound. Participants were asked to respond "right" and "left" as soon as possible to indicate whether they heard the sound moving to the right or to the left side of the auditory space. We additionally adopted a single-case method to analyse the performance of 15 patients with right-hemisphere (RH) stroke and added two additional measures to assess sound lateralisation on the left side and on the right side. We included 15 patients with RH stroke (5 with a severe USN, 5 with a mild USN and 5 without USN) and 11 healthy age-matched participants. We expected to replicate findings of abnormal sound lateralisation in USN. However, although a sound lateralisation deficit was observed in USN, two different deficit profiles were identified. Namely, patients with a severe USN seemed to have left sound lateralisation impairment whereas patients with a mild USN seemed to be more influenced by a systematic bias in auditory representation with respect to the body meridian axis (egocentric deviation). This latter profile was unexpected as sounds were manipulated with ITD and, thus, would not be perceived as coming from a source outside the head. Future studies should use this paradigm in order to better understand these two distinct profiles. PMID:27018451
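The ITD manipulation used in such paradigms can be sketched as follows: delaying one headphone channel relative to the other by a few hundred microseconds lateralises the sound toward the leading ear. This is a minimal illustration of a static ITD, not the moving-sound stimulus of the study; the 500 Hz tone and the 300 µs value are assumptions chosen from within the natural ITD range.

```python
import numpy as np

def itd_stereo(mono, fs, itd_s):
    """Build a stereo signal carrying an interaural time difference.

    Positive itd_s delays the LEFT channel, so the right ear leads and
    the sound is lateralised toward the right.
    """
    shift = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(shift), mono])[:len(mono)]
    if itd_s >= 0:
        return np.stack([delayed, mono], axis=1)  # left delayed
    return np.stack([mono, delayed], axis=1)      # right delayed

fs = 44100
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)      # 1 s, 500 Hz tone (illustrative)
stereo = itd_stereo(tone, fs, 300e-6)   # 300 microsecond ITD
```

A moving-sound illusion, as in the paradigm above, would then be produced by smoothly sweeping `itd_s` from one extreme to the other over the course of the stimulus.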
Eriksson Barajas, Katarina
The study of mainstream consumers of fiction is still limited, as is research of naturalistic reading situations. In this paper I argue that a combination of reception theory and discursive psychology – discursive reception research – can be a fruitful method for empirical literary studies. Reception theory gains both a way to adequately analyze conversations about literature (and other aesthetic products), and the opportunity to study how the reception is done and how literature is used, whi...
Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
Frey, Aline; Aramaki, Mitsuko; Besson, Mireille
Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. PMID:24378910
Beal-Alvarez, Jennifer S.
This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…
King, A J
The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing for a common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs. PMID:8240794
King, A J.; Schnupp, J W.H.; Doubell, T P.
In order to pinpoint the location of a sound source, we make use of a variety of spatial cues that arise from the direction-dependent manner in which sounds interact with the head, torso and external ears. Accurate sound localization relies on the neural discrimination of tiny differences in the values of these cues and requires that the brain circuits involved be calibrated to the cues experienced by each individual. There is growing evidence that the capacity for recalibrating auditory localization continues well into adult life. Many details of how the brain represents auditory space and of how those representations are shaped by learning and experience remain elusive. However, it is becoming increasingly clear that the task of processing auditory spatial information is distributed over different regions of the brain, some working hierarchically, others independently and in parallel, and each apparently using different strategies for encoding sound source location. PMID:11390297
Chuen, Lorraine; Sears, David; McAdams, Stephen
A comprehensive characterization of autonomic and somatic responding within the auditory domain is currently lacking. We studied whether simple types of auditory change that occur frequently during music listening could elicit measurable changes in heart rate, skin conductance, respiration rate, and facial motor activity. Participants heard a rhythmically isochronous sequence consisting of a repeated standard tone, followed by a repeated target tone that changed in pitch, timbre, duration, intensity, or tempo, or that deviated momentarily from rhythmic isochrony. Changes in all parameters produced increases in heart rate. Skin conductance response magnitude was affected by changes in timbre, intensity, and tempo. Respiratory rate was sensitive to deviations from isochrony. Our findings suggest that music researchers interpreting physiological responses as emotional indices should consider acoustic factors that may influence physiology in the absence of induced emotions. PMID:26927928
Jones, D M; Hughes, Rob; Macken, W.J.
One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the seria...
韩宇花; 陶希; 邓景贵; 刘佳; 宋涛; 何娟
Objective: To explore the effect of visual and auditory stimulation based on environment reset on hemispatial neglect (HSN) in patients with stroke. Methods: A total of 49 stroke patients with HSN treated between March 2010 and September 2012 were randomly divided into an observation group (n=27) and a control group (n=22). Both groups received conventional therapy; no special requirements were placed on the ward or rehabilitation environment of the control group, whereas the ward beds and rehabilitation environment of the observation group were reset. The severity of HSN was assessed with the line bisection (LB) and line cancellation (LC) tests, neurological deficit with the National Institutes of Health Stroke Scale (NIHSS), and activities of daily living (ADL) with the modified Barthel Index (MBI) before treatment and after 4 and 8 weeks of treatment. Results: At 4 and 8 weeks, LB, LC and NIHSS scores in both groups were lower than before treatment, and MBI scores were higher (P<0.05). At 8 weeks, LB and NIHSS scores in both groups, and LC scores in the observation group, were lower than at 4 weeks, and MBI scores were higher; LB and LC scores in the observation group were lower than in the control group, and MBI scores were higher (P<0.05). Conclusion: Visual and auditory stimulation based on environment reset benefits stroke patients with HSN and can improve ADL, but may have little effect on neurological deficit.
Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers. PMID:15940990
Grube, Manon; Kumar, Sukhbinder; Cooper, Freya E; Turton, Stuart; Griffiths, Timothy D
This work tests the relationship between auditory and phonological skill in a non-selected cohort of 238 school students (age 11) with the specific hypothesis that sound-sequence analysis would be more relevant to phonological skill than the analysis of basic, single sounds. Auditory processing was assessed across the domains of pitch, time and timbre; a combination of six standard tests of literacy and language ability was used to assess phonological skill. A significant correlation between general auditory and phonological skill was demonstrated, plus a significant, specific correlation between measures of phonological skill and the auditory analysis of short sequences in pitch and time. The data support a limited but significant link between auditory and phonological ability with a specific role for sound-sequence analysis, and provide a possible new focus for auditory training strategies to aid language development in early adolescence. PMID:22951739
Auditory displays are slower than graphical user interfaces. We believe spatial audio can change that. Human perception can localize the position of sound sources thanks to psychoacoustical cues. Spatial audio reproduces these cues to produce virtual sound source positions over headphones. The spatial attribute of sound can be used to produce richer and more effective auditory displays. In this work, we propose a set of interaction design guidelines for the use of spatial audio displays i...
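As a crude stand-in for the psychoacoustic cues mentioned above, level differences between the ears can be synthesized with constant-power panning. Real spatial audio displays reproduce richer HRTF-based cues (including the timing differences covered elsewhere in this listing), so this sketch is only an illustrative simplification.

```python
import numpy as np

def constant_power_pan(mono, azimuth_deg):
    """Place a mono signal at an azimuth in [-90, 90] degrees using
    constant-power (sin/cos) panning, so perceived loudness stays even
    as the virtual source moves across the stereo field.
    """
    # Map azimuth -90..90 deg onto a pan angle of 0..90 deg.
    theta = np.deg2rad((azimuth_deg + 90.0) / 2.0)
    left, right = np.cos(theta), np.sin(theta)
    return np.stack([left * mono, right * mono], axis=1)
```

Because the channel gains are cos/sin of the same angle, the summed power left² + right² is 1 at every azimuth, which is the property that makes panned items in an auditory display comparable in salience.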
Knudsen, E I
The optic tectum of the barn owl (Tyto alba) contains a neural map of auditory space consisting of neurons that are sharply tuned for sound source location and organized precisely according to their spatial tuning. The importance of vision for the development of this auditory map was investigated by comparing space maps measured in normal owls with those measured in owls raised with both eyelids sutured closed. The results demonstrate that owls raised without sight, but with normal hearing, d...
K. Raja Kumar; P. Seetha Ramaiah
The Auditory Prosthesis (AP) is an electronic device that can provide hearing sensations to people who are profoundly deaf by stimulating the auditory nerve with electric current via an array of electrodes, allowing them to understand speech. The AP system consists of two hardware functional units: the Body Worn Speech Processor (BWSP) and the Receiver Stimulator. The prototype model of the Receiver Stimulator for Auditory Prosthesis (RSAP) consists of a Speech Data Decoder, DAC, ADC, constant...
Vitor E Valenti; Guida, Heraldo L.; Frizzo, Ana C F; Cardoso, Ana C. V.; Vanderlei, Luiz Carlos M; Luiz Carlos de Abreu
Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation bet...
Tillery, Kim L.; Katz, Jack; Keller, Warren D.
A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…
Mossbridge, Julia A; Grabowecky, Marcia; Suzuki, Satoru
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment. PMID:24194873
Terreros, Gonzalo; Delano, Paul H
The auditory efferent system originates in the auditory cortex and projects to the medial geniculate body (MGB), inferior colliculus (IC), cochlear nucleus (CN) and superior olivary complex (SOC) reaching the cochlea through olivocochlear (OC) fibers. This unique neuronal network is organized in several afferent-efferent feedback loops including: the (i) colliculo-thalamic-cortico-collicular; (ii) cortico-(collicular)-OC; and (iii) cortico-(collicular)-CN pathways. Recent experiments demonstrate that blocking ongoing auditory-cortex activity with pharmacological and physical methods modulates the amplitude of cochlear potentials. In addition, auditory-cortex microstimulation independently modulates cochlear sensitivity and the strength of the OC reflex. In this mini-review, anatomical and physiological evidence supporting the presence of a functional efferent network from the auditory cortex to the cochlear receptor is presented. Special emphasis is given to the corticofugal effects on initial auditory processing, that is, on CN, auditory nerve and cochlear responses. A working model of three parallel pathways from the auditory cortex to the cochlea and auditory nerve is proposed. PMID:26483647
Almudena Fernández Fontecha
Full Text Available Content and Language Integrated Learning (CLIL) is a widely researched approach to foreign language learning and teaching. One of the pillars of CLIL is the concept of motivation. Some studies have focused on exploring motivation within CLIL; however, there has not been much discussion about the connection between motivation, or other affective factors, and each component of foreign language learning. Hence, given two groups of learners with the same hours of EFL instruction, the main objective of this research is to determine whether there exists any kind of interaction between the number of words learners know receptively and their motivation towards English as a Foreign Language (EFL). Most students in both groups were highly motivated. No relationship was identified between receptive vocabulary knowledge and general motivation for the secondary graders, but a positive significant relationship was found for the primary CLIL graders. Several reasons will be adduced.
Jefferson Cleiton de Souza
Full Text Available The aim of this paper is to discuss how Hans Robert Jauss, the creator of the aesthetics of reception, introduced the category of the reader into literary studies, especially with regard to the importance of the reader for the understanding of the text and for the history of a society and its literary system, in other words, for the way the formal elements of a literary work are organized and how they relate to aesthetic, ethical and moral evaluations. To do so, it is necessary to analyze how the conceptions of the implied and actual reader, as well as of aesthetic experience, relate to questions of history, artistic communication and reception.
Lauritsen, Maj-Britt Glenn; Söderström, Margareta; Kreiner, Svend;
PURPOSE: We tested "the Galker test", a speech reception in noise test developed for primary care for Danish preschool children, to explore if the children's ability to hear and understand speech was associated with gender, age, middle ear status, and the level of background noise. METHODS: The Galker test is a 35-item audio-visual, computerized word discrimination test in background noise. Included were 370 normally developed children attending day care centers. The children were examined with the Galker test, tympanometry, audiometry, and the Reynell test of verbal comprehension. Parents... to Reynell test scores (Gamma (G)=0.35), the children's age group (G=0.33), and the day care teachers' assessment of the children's vocabulary (G=0.26). CONCLUSIONS: The Galker test of speech reception in noise appears promising as an easy and quick tool for evaluating preschool children's understanding...