Peña, José Luis
The owl's auditory system computes interaural time differences (ITD) and interaural level differences (ILD) to create a two-dimensional map of auditory space. Space-specific neurons are selective for combinations of ITD and ILD, which define, respectively, the horizontal and vertical dimensions of their receptive fields. ITD curves for postsynaptic potentials indicate that neurons of the external nucleus of the inferior colliculus (ICx) integrate the results of binaural cross-correlation in different frequency bands. However, the difference between the main and side peaks is slight. ICx neurons further enhance this difference in the process of converting membrane potentials to impulse rates. Comparison of subthreshold postsynaptic potentials (PSPs) and spike output for the same neurons showed that receptive fields measured in PSPs were much larger than those measured in spikes in both the ITD and ILD dimensions. A multiplication of separate postsynaptic potentials tuned to ITD and ILD can account for the combination sensitivity of these neurons to ITD-ILD pairs.
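The multiplicative combination of ITD- and ILD-tuned postsynaptic potentials described above can be sketched numerically. The Gaussian tuning curves, axis ranges, and threshold below are illustrative assumptions, not measured owl data:

```python
import numpy as np

# Hypothetical Gaussian tuning curves for ITD (microseconds) and ILD (dB);
# shapes and parameters are illustrative, not fitted to physiology.
itd = np.linspace(-200, 200, 81)                  # ITD axis
ild = np.linspace(-20, 20, 41)                    # ILD axis
itd_psp = np.exp(-0.5 * ((itd - 50) / 40) ** 2)   # ITD-tuned PSP component
ild_psp = np.exp(-0.5 * ((ild + 5) / 6) ** 2)     # ILD-tuned PSP component

# Multiplying the two tuned PSP components yields a two-dimensional
# receptive field selective for one ITD-ILD pair (one point in space).
rf_psp = np.outer(itd_psp, ild_psp)

# A spike threshold sharpens the membrane-potential field into a narrower
# spiking receptive field, mimicking the PSP-to-spike conversion.
theta = 0.5
rf_spikes = np.maximum(rf_psp - theta, 0.0)
```

The thresholded spiking field covers far fewer ITD-ILD combinations than the subthreshold PSP field, consistent with the abstract's observation that spike receptive fields are much smaller than PSP receptive fields.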
Brian J Fischer
A multiplicative combination of tuning to interaural time difference (ITD) and interaural level difference (ILD) contributes to the generation of spatially selective auditory neurons in the owl's midbrain. Previous analyses of multiplicative responses in the owl have not taken into consideration the frequency dependence of ITD and ILD cues that occurs under natural listening conditions. Here, we present a model for the responses of ITD- and ILD-sensitive neurons in the barn owl's inferior colliculus which satisfies constraints raised by experimental data on frequency convergence, multiplicative interaction of ITD and ILD, and response properties of afferent neurons. We propose that multiplication between ITD- and ILD-dependent signals occurs only within frequency channels and that frequency integration occurs using a linear-threshold mechanism. The model reproduces the experimentally observed nonlinear responses to ITD and ILD in the inferior colliculus with greater accuracy than previous models. We show that linear-threshold frequency integration allows the system to represent multiple sound sources with natural sound localization cues, whereas multiplicative frequency integration does not. Nonlinear responses in the owl's inferior colliculus can thus be generated using a combination of cellular and network mechanisms, showing that multiple elements of previous theories can be combined in a single system.
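A minimal sketch of the model's core claim, with arbitrary per-channel signal values assumed for illustration: multiplying ITD- and ILD-dependent signals only within frequency channels and then integrating across channels with a linear-threshold stage preserves a response when a second source captures one channel, whereas across-channel multiplication does not:

```python
import numpy as np

# Hypothetical ITD- and ILD-dependent signals in five frequency channels;
# the values are illustrative stand-ins for natural, frequency-dependent cues.
itd_sig = np.array([0.9, 0.8, 0.2, 0.7, 0.6])
ild_sig = np.array([0.8, 0.9, 0.3, 0.6, 0.7])

# Multiplication only WITHIN each frequency channel ...
per_channel = itd_sig * ild_sig

# ... followed by linear-threshold integration ACROSS channels.
def linear_threshold(x, theta=0.3):
    """Sum the channel signals and rectify at threshold theta."""
    return max(x.sum() - theta, 0.0)

response_linear = linear_threshold(per_channel)

# Multiplicative integration across channels vanishes whenever any single
# channel is silent, which is what happens with multiple sound sources.
per_channel_gated = per_channel.copy()
per_channel_gated[0] = 0.0          # one channel captured by a second source
response_mult = np.prod(per_channel_gated)
response_linear_gated = linear_threshold(per_channel_gated)
```

With one channel zeroed out, the across-channel product collapses to zero while the linear-threshold sum merely decreases, illustrating why the linear-threshold mechanism tolerates multiple concurrent sources.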
Wightman, Frederic L.; Jenison, Rick
All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.
Froemke, Robert C; Martins, Ana Raquel O
The nervous system must dynamically represent sensory information in order for animals to perceive and operate within a complex, changing environment. Receptive field plasticity in the auditory cortex allows cortical networks to organize around salient features of the sensory environment during postnatal development, and then subsequently refine these representations depending on behavioral context later in life. Here we review the major features of auditory cortical receptive field plasticity in young and adult animals, focusing on modifications to frequency tuning of synaptic inputs. Alteration in the patterns of acoustic input, including sensory deprivation and tonal exposure, leads to rapid adjustments of excitatory and inhibitory strengths that collectively determine the suprathreshold tuning curves of cortical neurons. Long-term cortical plasticity also requires co-activation of subcortical neuromodulatory control nuclei such as the cholinergic nucleus basalis, particularly in adults. Regardless of developmental stage, regulation of inhibition seems to be a general mechanism by which changes in sensory experience and neuromodulatory state can remodel cortical receptive fields. We discuss recent findings suggesting that the microdynamics of synaptic receptive field plasticity unfold as a multi-phase set of distinct phenomena, initiated by disrupting the balance between excitation and inhibition, and eventually leading to wide-scale changes to many synapses throughout the cortex. These changes are coordinated to enhance the representations of newly-significant stimuli, possibly for improved signal processing and language learning in humans. Copyright © 2011 Elsevier B.V. All rights reserved.
Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David
Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials each, in which each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly through a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space through training with the tactile modality.
Arne Freerk Meyer
Temporal variability of neuronal response characteristics during sensory stimulation is a ubiquitous phenomenon that may reflect processes such as stimulus-driven adaptation, top-down modulation, or spontaneous fluctuations. It poses a challenge to functional characterization methods such as the receptive field, since these often assume stationarity. We propose a novel method for estimating sensory neurons' receptive fields that extends the classic static linear receptive field model to the time-varying case. Here, the long-term estimate of the static receptive field serves as the mean of a probabilistic prior distribution from which the short-term, temporally localized receptive field may deviate stochastically with time-varying standard deviation. The corresponding generalized linear model permits robust characterization of temporal variability in receptive field structure, even for highly non-Gaussian stimulus ensembles. We computed and analyzed short-term auditory spectro-temporal receptive field (STRF) estimates with characteristic temporal resolutions of 5 s to 30 s, based on model simulations and responses from a total of 60 single-unit recordings in anesthetized Mongolian gerbil auditory midbrain and cortex. Stimulation was performed with short (100 ms), overlapping frequency-modulated tones. The results demonstrate identification of time-varying STRFs, with the obtained predictive model likelihoods exceeding those from baseline static STRF estimation. Quantitative characterization of STRF variability reveals a higher degree thereof in auditory cortex compared to midbrain. Cluster analysis indicates that significant deviations from the long-term static STRF are brief, but reliably estimated. We hypothesize that the observed variability more likely reflects spontaneous or state-dependent internal fluctuations that interact with stimulus-induced processing, rather than effects of the experimental or stimulus design.
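The estimation idea can be illustrated with a linear-Gaussian stand-in for the paper's generalized linear model: the short-term receptive field is the MAP estimate under a Gaussian prior whose mean is the long-term static receptive field. All sizes, noise levels, and the prior weight below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stimulus (rows = time bins, columns = receptive-field features) and a
# slowly drifting "true" filter; dimensions and noise are illustrative.
n_samples, n_features = 500, 12
X = rng.normal(size=(n_samples, n_features))
w_static = rng.normal(size=n_features)                # long-term static RF
w_now = w_static + 0.5 * rng.normal(size=n_features)  # current short-term RF
y = X @ w_now + 0.1 * rng.normal(size=n_samples)      # observed responses

# MAP estimate of the short-term RF under a Gaussian prior centered on the
# static RF:  w_hat = argmin ||y - Xw||^2 + lam * ||w - w_static||^2
lam = 5.0
A = X.T @ X + lam * np.eye(n_features)
w_hat = np.linalg.solve(A, X.T @ y + lam * w_static)
```

Because the prior pulls the estimate toward the long-term receptive field rather than toward zero, short data windows still yield stable estimates while genuine short-term deviations remain detectable.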
One of the most common complaints of people with impaired hearing concerns their difficulty with understanding speech. Particularly in the presence of background noise, hearing-impaired people often encounter great difficulties with speech communication. In most cases, the problem persists even if reduced audibility has been compensated for by hearing aids. It has been hypothesized that part of the difficulty arises from changes in the perception of sounds that are well above hearing threshold, such as reduced frequency selectivity and deficits in the processing of temporal fine structure (TFS) at the output of the inner-ear (cochlear) filters. The purpose of this work was to investigate these aspects in detail. One chapter studies relations between frequency selectivity, TFS processing, and speech reception in listeners with normal and impaired hearing, using behavioral listening experiments.
Qiu, Anqi; Schreiner, Christoph E; Escabí, Monty A
The spectro-temporal receptive field (STRF) is a model representation of the excitatory and inhibitory integration area of auditory neurons. Recently it has been used to study spectral and temporal aspects of monaural integration in auditory centers. Here we report the properties of monaural STRFs and the relationship between ipsi- and contralateral inputs to neurons of the central nucleus of the cat inferior colliculus (ICC). First, we use an optimal singular-value decomposition method to approximate auditory STRFs as a sum of time-frequency separable Gabor functions. This procedure extracts nine physiologically meaningful parameters. The STRFs of approximately 60% of collicular neurons are well described by a time-frequency separable Gabor STRF model, whereas the remaining neurons exhibit obliquely oriented or multiple excitatory/inhibitory subfields that require a nonseparable Gabor fitting procedure. Parametric analysis reveals distinct spectro-temporal tradeoffs in receptive field size and modulation filtering resolution. Comparison with an identical model used to study spatio-temporal integration areas of visual neurons further shows that auditory and visual STRFs share numerous structural properties. We then use the Gabor STRF model to compare quantitatively the receptive field properties of contra- and ipsilateral inputs to the ICC. We show that most interaural STRF parameters are highly correlated bilaterally. However, the spectral and temporal phases of ipsi- and contralateral STRFs often differ significantly. This suggests that activity originating from each ear shares various spectro-temporal response properties, such as temporal delay, bandwidth, and center frequency, but has shifted or interleaved patterns of excitation and inhibition. These differences in converging monaural receptive fields expand binaural processing capacity beyond interaural time and intensity aspects and may enable colliculus neurons to detect disparities in the spectro…
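As a sketch, a time-frequency separable Gabor STRF is an outer product of a temporal Gabor function and a spectral Gabor function; the parameter names and values below are illustrative assumptions, not the nine fitted parameters from the study:

```python
import numpy as np

# Axes for the receptive field: time lag in seconds and spectral distance
# from best frequency in octaves. Ranges are illustrative.
t = np.linspace(0, 0.05, 50)        # temporal axis
x = np.linspace(-2, 2, 40)          # spectral axis (octaves re: best frequency)

def gabor(u, mean, sigma, freq, phase):
    """Gaussian envelope multiplied by a sinusoidal carrier."""
    return np.exp(-0.5 * ((u - mean) / sigma) ** 2) * np.cos(
        2 * np.pi * freq * (u - mean) + phase)

g_t = gabor(t, mean=0.015, sigma=0.008, freq=60.0, phase=0.0)  # temporal part
g_x = gabor(x, mean=0.0, sigma=0.5, freq=0.5, phase=0.0)       # spectral part

# A separable STRF is the outer product of the two parts and has rank 1;
# obliquely oriented subfields require additional singular-value components.
strf = np.outer(g_x, g_t)
```

In the singular-value decomposition framework described above, the roughly 40% of nonseparable neurons would need a sum of several such rank-1 terms, each with its own Gabor parameters.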
Walsh, Edward J.; Wang, Lily M.; Armstrong, Douglas L.; Curro, Thomas; Simmons, Lee G.; McGee, Joann
Acoustic communication represents a primary mode of interaction within the sub-species of Panthera tigris and it is commonly known that their vocal repertoire consists of a relatively wide range of utterances that include roars, growls, grunts, hisses and chuffling, vocalizations that are in some cases produced with extraordinary power. P. tigris vocalizations are known to contain significant amounts of acoustic energy over a wide spectral range, with peak output occurring in a low frequency bandwidth in the case of roars. von Muggenthaler (2000) has also shown that roars and other vocal productions uttered by P. tigris contain energy in the infrasonic range. While it is reasonable to assume that low and infrasonic acoustic cues are used as communication signals among conspecifics in the wild, it is clearly necessary to demonstrate that members of the P. tigris sub-species are responsive to low and infrasonic acoustic signals. The auditory brainstem response has proven to be an effective tool in the characterization of auditory performance among tigers and the results of an ongoing study of both the acoustical properties of P. tigris vocalizations and their auditory receptivity support the supposition that tigers are not only responsive to low frequency stimulation, but exquisitely so.
Whitehouse, Martha M.
The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, or musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.
Nguyen, Andy; Cabrera, Densil
Auditory spatial impression is widely studied for its contribution to auditorium acoustical quality. By contrast, visual spatial impression in auditoria has received relatively little attention in formal studies. This paper reports results from a series of experiments investigating the auditory and visual spatial impression of concert auditoria. For auditory stimuli, a fragment of an anechoic recording of orchestral music was convolved with calibrated binaural impulse responses, which had been made with the dummy head microphone at a wide range of positions in three auditoria and the sound source on the stage. For visual stimuli, greyscale photographs were used, taken at the same positions in the three auditoria, with a visual target on the stage. Subjective experiments were conducted with auditory stimuli alone, visual stimuli alone, and visual and auditory stimuli combined. In these experiments, subjects rated apparent source width, listener envelopment, intimacy and source distance (auditory stimuli), and spaciousness, envelopment, stage dominance, intimacy and target distance (visual stimuli). Results show target distance to be of primary importance in auditory and visual spatial impression, thereby providing a basis for covariance between some attributes of auditory and visual spatial impression. Nevertheless, some attributes of spatial impression diverge between the senses.
Keith P. Johnson; Lei Zhao; Daniel Kerschensteiner
The spike trains of retinal ganglion cells (RGCs) are the only source of visual information to the brain. Here, we genetically identify an RGC type in mice that functions as a pixel encoder and increases firing to light increments (PixON-RGC). PixON-RGCs have medium-sized dendritic arbors and non-canonical center-surround receptive fields. From their receptive field center, PixON-RGCs receive only excitatory input, which encodes contrast and spatial information linearly. From their receptive ...
Cappagli, Giulia; Gori, Monica
For individuals with visual impairments, auditory spatial localization is one of the most important abilities for navigating the environment. Many studies suggest that blind adults show similar or even enhanced performance in localizing auditory cues compared to sighted adults (Collignon, Voss, Lassonde, & Lepore, 2009). To date, the investigation of auditory spatial localization in children with visual impairments has provided contrasting results. Here we report, for the first time, that contrary to visually impaired adults, children with low vision or total blindness show a significant impairment in the localization of static sounds. These results suggest that simple auditory spatial tasks are compromised in visually impaired children, and that this capacity recovers over time. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hayes, Heather; Geers, Ann E; Treiman, Rebecca; Moog, Jean Sachar
Deaf children with cochlear implants are at a disadvantage in learning vocabulary when compared with hearing peers. Past research has reported that children with implants have lower receptive vocabulary scores and less growth over time than hearing children. Research findings are mixed as to the effects of age at implantation on vocabulary skills and development. One goal of the current study is to determine how children with cochlear implants educated in an auditory-oral environment compared with their hearing peers on a receptive vocabulary measure in overall achievement and growth rates. This study will also investigate the effects of age at implant on vocabulary abilities and growth rates. We expect that the children with implants will have smaller vocabularies than their hearing peers but will achieve similar rates of growth as their implant experience increases. We also expect that children who receive their implants at young ages will have better overall vocabulary and higher growth rates than older-at-implant children. Repeated assessments using the Peabody Picture Vocabulary Test were given to 65 deaf children with cochlear implants who used oral communication, who were implanted under the age of 5 yr, and who attended an intensive auditory-oral education program. Multilevel modeling was used to describe overall abilities and rates of receptive vocabulary growth over time. On average, the deaf children with cochlear implants had lower vocabulary scores than their hearing peers. However, the deaf children demonstrated substantial vocabulary growth, making more than 1 yr's worth of progress in a year. This finding contrasts with those of previous studies of children with implants, which found lower growth rates. A negative quadratic trend indicated that growth decelerated with time. Age at implantation significantly affected linear and quadratic growth. Younger-at-implant children had steeper growth rates but more tapering off with time than children
The auditory system of adult listeners has been shown to accommodate to altered spectral cues to sound location, which presumably provides the basis for recalibration to changes in the shape of the ear over a lifetime. Here we review the role of auditory and non-auditory inputs in the perception of sound location and consider a range of recent experiments looking at the role of non-auditory inputs in the process of accommodation to these altered spectral cues. A number of studies have used small ear moulds to modify the spectral cues, resulting in significant degradation in localization performance. Following chronic exposure (10-60 days), performance recovers to some extent, and recent work has demonstrated that this occurs for both audio-visual and audio-only regions of space. This raises the question of what the teacher signal is for this remarkable functional plasticity in the adult nervous system. Following a brief review of the influence of motor state on auditory localization, we consider the potential role of auditory-motor learning in the perceptual recalibration of the spectral cues. Several recent studies have considered how multi-modal and sensory-motor feedback might influence accommodation to altered spectral cues produced by ear moulds or through virtual auditory space stimulation using non-individualized spectral cues. The work with ear moulds demonstrates that a relatively short period of training involving sensory-motor feedback (5-10 days) significantly improved both the rate and extent of accommodation to altered spectral cues. This has significant implications not only for the mechanisms by which this complex sensory information is encoded to provide a spatial code, but also for adaptive training to altered auditory inputs. The review concludes by considering the implications for rehabilitative training with hearing aids and cochlear prostheses.
Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten
In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.
Parks, Anthony J.
How do listener head rotations affect auditory perception of elevation? This investigation addresses this question in the hope that perceptual judgments of elevated auditory percepts may be more thoroughly understood in terms of the dynamic listening cues engendered by listener head rotations, and that this phenomenon can be psychophysically and computationally modeled. Two listening tests were conducted and a psychophysical model was constructed to this end. The first listening test prompted listeners to detect an elevated auditory event produced by a virtual noise source orbiting the median plane via 24-channel ambisonic spatialization. Head rotations were tracked using computer vision algorithms facilitated by camera tracking. The data were used to construct a dichotomous criteria model using a factorial binary logistic regression model. The second auditory test investigated the validity of the historically supported frequency dependence of auditory elevation perception using narrow-band noise for continuous and brief stimuli under fixed and free-head-rotation conditions. The data were used to construct a multinomial logistic regression model to predict categorical judgments of above, below, and behind. Finally, in light of the psychophysical data from the above studies, a functional model of elevation perception for point sources along the cone of confusion was constructed using physiologically inspired signal processing methods along with top-down processing utilizing principles of memory and orientation. The model is evaluated using white noise bursts for 42 subjects' head-related transfer functions. The investigation concludes with study limitations, possible implications, and speculation on future research trajectories.
Menning, Hans; Ackermann, Hermann; Hertrich, Ingo; Mathiak, Klaus
Previous studies have shown that cross-modal processing affects perception at a variety of neuronal levels. In this study, event-related brain responses were recorded via whole-head magnetoencephalography (MEG). Spatial auditory attention was directed via tactile pre-cues (primes) to one of four locations in the peripersonal space (left and right hand versus face). Auditory stimuli were white noise bursts, convoluted with head-related transfer functions, which ensured spatial perception of the four locations. Tactile primes (200-300 ms prior to acoustic onset) were applied randomly to one of these locations. Attentional load was controlled by three different visual distraction tasks. The auditory P50m (about 50 ms after stimulus onset) showed a significant "proximity" effect (larger responses to face stimulation as well as a "contralaterality" effect between side of stimulation and hemisphere). The tactile primes essentially reduced both the P50m and N100m components. However, facial tactile pre-stimulation yielded an enhanced ipsilateral N100m. These results show that earlier responses are mainly governed by exogenous stimulus properties whereas cross-sensory interaction is spatially selective at a later (endogenous) processing stage.
The effect of hand proximity on vision and visual attention has been well documented. In this study we tested whether such effect(s) would also be present in the auditory modality. With hands placed either near or away from the audio sources, participants performed an auditory-spatial discrimination task (Exp 1: left or right side), a pitch discrimination task (Exp 2: high, med, or low tone), and a spatial-plus-pitch discrimination task (Exp 3: left or right; high, med, or low). In Exp 1, when hands were away from the audio source, participants consistently responded faster with their right hand regardless of stimulus location. This right-hand advantage, however, disappeared in the hands-near condition because of a significant improvement in the left hand's reaction time. No effect of hand proximity was found in Exp 2 or 3, where a choice reaction time task requiring pitch discrimination was used. Together, these results suggest that the effect of hand proximity is not exclusive to vision alone, but is also present in audition, though in a much weaker form. Most important, these findings provide evidence from auditory attention that supports the multimodal account originally raised by Reed et al. in 2006.
Johnstone, Patti M; Yeager, Kelly R; Noss, Emily
The neural dys-synchrony associated with auditory neuropathy spectrum disorder (ANSD) causes a temporal impairment that could degrade spatial hearing, particularly sound localization accuracy (SLA) and spatial release from masking (SRM). Unilateral cochlear implantation has become an accepted treatment for ANSD, but treatment options for the contralateral ear remain controversial. We report spatial hearing measures in a child with ANSD before and after he received a second cochlear implant (CI). An 11-year, 7-month-old boy with ANSD and expressive and receptive language delay received a second CI eight years after his first implant. SLA and SRM were measured four months before sequential bilateral CIs (with the contralateral ear plugged and unplugged), and again nine months after implantation using both CIs. Testing done before the second CI, with the first CI alone, suggested that residual hearing in the contralateral ear contributed to sound localization accuracy, but not to word recognition in quiet or noise. Nine months after receiving a second CI, SLA improved by 12.76° and SRM increased to 3.8-4.2 dB relative to pre-operative performance. Results were compared to published outcomes for children with bilateral CIs. The addition of a second CI in this child with ANSD improved spatial hearing.
Constance May Bainbridge
In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual
Strauß, Johannes; Lehmann, Gerlind U C; Lehmann, Arne W; Lakes-Harlan, Reinhard
The auditory sense organ of Tettigoniidae (Insecta, Orthoptera) is located in the foreleg tibia and consists of scolopidial sensilla which form a row termed the crista acustica. The crista acustica is associated with the tympana and the auditory trachea. This ear is a highly ordered, tonotopic sensory system. Although the neuroanatomy of the crista acustica has been documented for several species, the most distal somata and dendrites of receptor neurons have only occasionally been described as forming an alternating or double row. We investigated the spatial arrangement of receptor cell bodies and dendrites by retrograde tracing with cobalt chloride solution. In the six tettigoniid species studied, distal receptor neurons are consistently arranged in double rows of somata rather than a linear sequence. This arrangement is shown to affect 30-50% of the overall auditory receptors. No strict correlation of somata positions between the anterio-posterior and dorso-ventral axes was evident within the distal crista acustica. Dendrites of distal receptors occasionally also occur in a double row or are even massed without clear order. Thus, a substantial part of the auditory receptors can deviate from a strictly straight organization into a more complex morphology. The linear organization of dendrites is therefore not a morphological criterion that allows hearing organs to be distinguished, in all species, from the nonhearing sense organs serially homologous to ears. Both the crowded arrangement of receptor somata and that of dendrites may result from functional constraints relating to frequency discrimination, or from developmental constraints on auditory morphogenesis in postembryonic development. Copyright © 2012 Wiley Periodicals, Inc.
Cappagli, Giulia; Cocchi, Elena; Gori, Monica
It is not clear what role visual information plays in the development of space perception. It has previously been shown that in the absence of vision, both the ability to judge orientation in the haptic modality and to bisect intervals in the auditory modality are severely compromised (Gori, Sandini, Martinoli & Burr, 2010; Gori, Sandini, Martinoli & Burr, 2014). Here we report, for the first time, a strong deficit also in proprioceptive reproduction and auditory distance evaluation in early blind children and adults. Interestingly, the deficit is not present in a small group of adults with acquired visual disability. Our results support the idea that in the absence of vision the auditory and proprioceptive spatial representations may be delayed or drastically weakened due to the lack of visual calibration over the auditory and haptic modalities during the critical period of development. © 2015 John Wiley & Sons Ltd.
Erdener, Doğu; Burnham, Denis
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception - lip-reading and visual influence in auditory-visual integration; (ii) the development of auditory speech perception and native language perceptual attunement; and (iii) the relationship between these and a language skill relevant at this age, receptive vocabulary. Visual speech perception skills improved even over this relatively short time period. However, regression analyses revealed that vocabulary was predicted by auditory-only speech perception, and native language attunement, but not by visual speech perception ability. The results suggest that, in contrast to infants and schoolchildren, in three- to four-year-olds the relationship between speech perception and language ability is based on auditory and not visual or auditory-visual speech perception ability. Adding these results to existing findings allows elaboration of a more complete account of the developmental course of auditory-visual speech perception.
Pienkowski, Martin; Eggermont, Jos J
The effects of nonlinear interactions between different sound frequencies on the responses of neurons in primary auditory cortex (AI) have previously been investigated only with two-tone paradigms. Here we stimulated with relatively dense, Poisson-distributed trains of tone pips (frequency ranges spanning five octaves, 16 frequencies/octave, and mean rates of 20 or 120 pips/s), and examined within-frequency (or auto-frequency) and cross-frequency interactions in three types of AI unit responses by computing second-order "Poisson-Wiener" auto- and cross-kernels. Units were classified on the basis of their spectrotemporal receptive fields (STRFs) as "double-peaked", "single-peaked" or "peak-valley". Second-order interactions were investigated between the two bands of excitatory frequencies on double-peaked STRFs, between an excitatory band and various non-excitatory bands on single-peaked STRFs, and between an excitatory band and an inhibitory sideband on peak-valley STRFs. We found that auto-frequency interactions (i.e., those within a single excitatory band) were always characterized by a strong depression of (first-order) excitation that decayed with the interstimulus lag up to approximately 200 ms. That depression was weaker in cross-frequency than in auto-frequency interactions for approximately 25% of double-peaked STRFs, evidence of "combination sensitivity" for the two bands. Non-excitatory and inhibitory frequencies (on single-peaked and peak-valley STRFs, respectively) typically weakly depressed the excitatory response at short interstimulus lags, with stronger effects for interactions with inhibitory frequencies than with merely non-excitatory ones. Finally, facilitation in single-peaked and peak-valley units decreased with increasing stimulus density. Our results indicate that the strong combination sensitivity and cross-frequency facilitation suggested by previous two-tone-paradigm studies are much less pronounced when more temporally-dense stimuli are used.
Fostick, Leah; Babkoff, Harvey
Some researchers have suggested that one central mechanism is responsible for temporal order judgments (TOJ), within and across sensory channels. This suggestion is supported by findings of similar TOJ thresholds in same-modality and cross-modality TOJ tasks. In the present study, we challenge this idea by analyzing and comparing the threshold distributions of the spectral and spatial TOJ tasks. In spectral TOJ, the tones differ in their frequency ("high" and "low") and are delivered either binaurally or monaurally. In spatial (or dichotic) TOJ, the two tones are identical but are presented asynchronously to the two ears and thus differ with respect to which ear received the first tone and which ear received the second tone ("left-right" vs. "right-left"). Although both tasks are regarded as measures of auditory temporal processing, a review of data published in the literature suggests that they trigger different patterns of response. The aim of the current study was to systematically examine spectral and spatial TOJ threshold distributions across a large number of studies. Data are based on 388 participants in 13 spectral TOJ experiments, and 222 participants in 9 spatial TOJ experiments. None of the spatial TOJ distributions deviated significantly from the Gaussian, while all of the spectral TOJ threshold distributions were skewed to the right, with more than half of the participants accurately judging temporal order at very short interstimulus intervals (ISI). The data do not support the hypothesis that one central mechanism is responsible for all temporal order judgments. We suggest that different perceptual strategies are employed when performing spectral TOJ than when performing spatial TOJ. We posit that the spectral TOJ paradigm may provide the opportunity for two-tone masking or temporal integration, which is sensitive to the order of the tones and thus provides perceptual cues that may be used to judge temporal order. This possibility should be considered when interpreting…
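The distributional contrast described above (Gaussian spatial TOJ thresholds vs. right-skewed spectral TOJ thresholds) can be illustrated with a sample-skewness computation. The following is a minimal sketch with simulated data; the distribution parameters are illustrative assumptions, not values from the study:

```python
import numpy as np

def sample_skewness(x):
    """Fisher-Pearson sample skewness g1: mean of standardized cubes."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    s = x.std(ddof=0)
    return np.mean(((x - m) / s) ** 3)

rng = np.random.default_rng(0)
# Hypothetical threshold samples (ms), matching the reported sample sizes:
# spatial TOJ thresholds drawn from a Gaussian, spectral TOJ thresholds
# from a right-skewed (lognormal) distribution.
spatial = rng.normal(60, 15, 388)
spectral = rng.lognormal(3.0, 0.8, 222)

print(sample_skewness(spatial))   # near zero: symmetric, roughly Gaussian
print(sample_skewness(spectral))  # clearly positive: skewed to the right
```

A positive skewness here reflects the reported pattern in which many participants judge order accurately at very short ISIs while a long right tail of participants requires much longer intervals.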
Gandemer, Lennie; Parseihian, Gaetan; Kronland-Martinet, Richard; Bourdin, Christophe
It has long been suggested that sound plays a role in the postural control process. Few studies, however, have explored sound and posture interactions. The present paper focuses on the specific impact of audition on posture, seeking to determine the attributes of sound that may be useful for postural purposes. We investigated the postural sway of young, healthy blindfolded subjects in two experiments involving different static auditory environments. In the first experiment, we compared the effect on sway of a simple environment built from three static sound sources in two different rooms: a normal vs. an anechoic room. In the second experiment, the same auditory environment was enriched in various ways, including the ambisonic synthesis of an immersive environment, and subjects stood on two different surfaces: a foam vs. a normal surface. The results of both experiments suggest that the spatial cues provided by sound can be used to improve postural stability. The richer the auditory environment, the better this stabilization. We interpret these results by invoking the “spatial hearing map” theory: listeners build their own mental representation of their surrounding environment, which provides them with spatial landmarks that help them to better stabilize. PMID:28694770
Araneda, Rodrigo; De Volder, Anne G; Deggouj, Naïma; Philippot, Pierre; Heeren, Alexandre; Lacroix, Emilie; Decat, Monique; Rombaux, Philippe; Renier, Laurent
Tinnitus is the perception of a sound in the absence of an external stimulus. Currently, the pathophysiology of tinnitus is not fully understood, but recent studies indicate that alterations in the brain involve non-auditory areas, including the prefrontal cortex. Here, we hypothesize that these brain alterations affect top-down cognitive control mechanisms that play a role in the regulation of sensations, emotions and attention resources. The efficiency of executive control as well as simple reaction speed and processing speed were evaluated in tinnitus participants (TP) and matched control subjects (CS) in both the auditory and the visual modalities using a spatial Stroop paradigm. TP were slower and less accurate than CS during both the auditory and the visual spatial Stroop tasks, while simple reaction speed and stimulus processing speed were affected in TP in the auditory modality only. Tinnitus is associated both with modality-specific deficits along the auditory processing system and with an impairment of cognitive control mechanisms that are involved in both vision and audition (i.e. that are supra-modal). We postulate that this deficit in top-down cognitive control is a key factor in the development and maintenance of tinnitus and may also explain some of the cognitive difficulties reported by tinnitus sufferers.
Rinne, Teemu; Koistinen, Sonja; Talja, Suvi; Wikman, Patrik; Salonen, Oili
In the present study, we applied high-resolution functional magnetic resonance imaging (fMRI) of the human auditory cortex (AC) and adjacent areas to compare activations during spatial discrimination and spatial n-back memory tasks that were varied parametrically in difficulty. We found that activations in the anterior superior temporal gyrus (STG) were stronger during spatial discrimination than during spatial memory, while spatial memory was associated with stronger activations in the inferior parietal lobule (IPL). We also found that wide AC areas were strongly deactivated during the spatial memory tasks. The present AC activation patterns associated with spatial discrimination and spatial memory tasks were highly similar to those obtained in our previous study comparing AC activations during pitch discrimination and pitch memory (Rinne et al., 2009). Together our previous and present results indicate that discrimination and memory tasks activate anterior and posterior AC areas differently and that this anterior-posterior division is present both when these tasks are performed on spatially invariant (pitch discrimination vs. memory) or spatially varying (spatial discrimination vs. memory) sounds. These results also further strengthen the view that activations of human AC cannot be explained only by stimulus-level parameters (e.g., spatial vs. nonspatial stimuli) but that the activations observed with fMRI are strongly dependent on the characteristics of the behavioral task. Thus, our results suggest that in order to understand the functional structure of AC a more systematic investigation of task-related factors affecting AC activations is needed. Copyright © 2011 Elsevier Inc. All rights reserved.
The modulation of brain activity as a function of auditory location was investigated using electro-encephalography in combination with standardized low-resolution brain electromagnetic tomography. Auditory stimuli were presented at various positions under anechoic conditions in free-field space, thus providing the complete set of natural spatial cues. Variation of electrical activity in cortical areas depending on sound location was analyzed by contrasts between sound locations at the time of the N1 and P2 responses of the auditory evoked potential. A clear-cut double dissociation with respect to the cortical locations and the points in time was found, indicating spatial processing (1) in the primary auditory cortex and posterodorsal auditory cortical pathway at the time of the N1, and (2) in the anteroventral pathway regions about 100 ms later at the time of the P2. Thus, it seems as if both auditory pathways are involved in spatial analysis but at different points in time. It is possible that the late processing in the anteroventral auditory network reflected the sharing of this region by analysis of object-feature information and spectral localization cues or even the integration of spatial and non-spatial sound features.
Goldsworthy, Raymond L
This study evaluates a spatial-filtering algorithm as a method to improve speech reception for cochlear-implant (CI) users in reverberant environments with multiple noise sources. The algorithm was designed to filter sounds using phase differences between two microphones situated 1 cm apart in a behind-the-ear hearing-aid capsule. Speech reception thresholds (SRTs) were measured using a Coordinate Response Measure for six CI users in 27 listening conditions including each combination of reverberation level (T60=0, 270, and 540 ms), number of noise sources (1, 4, and 11), and signal-processing algorithm (omnidirectional response, dipole-directional response, and spatial-filtering algorithm). Noise sources were time-reversed speech segments randomly drawn from the Institute of Electrical and Electronics Engineers sentence recordings. Target speech and noise sources were processed using a room simulation method allowing precise control over reverberation times and sound-source locations. The spatial-filtering algorithm was found to provide improvements in SRTs on the order of 6.5 to 11.0 dB across listening conditions compared with the omnidirectional response. This result indicates that such phase-based spatial filtering can improve speech reception for CI users even in highly reverberant conditions with multiple noise sources. © The Author(s) 2014.
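The core idea of the spatial-filtering approach above, attenuating time-frequency bins whose inter-microphone phase difference implies an off-axis source, can be sketched as follows. This is a generic illustration, not the published algorithm; the frame size, gain values, and delay threshold are assumptions chosen for clarity:

```python
import numpy as np

def phase_based_filter(left, right, fs, mic_distance=0.01,
                       max_itd_frac=0.25, frame=256, c=343.0):
    """Pass bins whose implied inter-mic delay is small (near-frontal
    source); attenuate the rest. Illustrative parameters, not the
    published algorithm's."""
    window = np.hanning(frame)
    hop = frame // 2
    n = (len(left) - frame) // hop
    out = np.zeros(len(left))
    freqs = np.fft.rfftfreq(frame, 1 / fs)
    # Largest physically possible delay for microphones this far apart
    max_delay = mic_distance / c
    for i in range(n):
        s = i * hop
        L = np.fft.rfft(window * left[s:s + frame])
        R = np.fft.rfft(window * right[s:s + frame])
        # Phase difference per bin, converted to an implied arrival delay
        dphi = np.angle(L * np.conj(R))
        with np.errstate(divide='ignore', invalid='ignore'):
            delay = np.where(freqs > 0, dphi / (2 * np.pi * freqs), 0.0)
        # Unity gain near zero delay, strong attenuation elsewhere
        gain = np.where(np.abs(delay) < max_itd_frac * max_delay, 1.0, 0.1)
        out[s:s + frame] += np.fft.irfft(gain * L) * window
    return out
```

Feeding the same signal to both channels (zero phase difference, a frontal source) passes it nearly unchanged, while a signal delayed between channels (a lateral source) is strongly attenuated.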
Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien
…features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings…
Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject’s head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e. a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorise the localisation of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of searching paths allowed us to observe how auditory information was coded to memorise the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.
Bissmeyer, Susan R S; Goldsworthy, Raymond L
Hearing loss greatly reduces an individual's ability to comprehend speech in the presence of background noise. Over the past decades, numerous signal-processing algorithms have been developed to improve speech reception in these situations for cochlear implant and hearing aid users. One challenge is to reduce background noise while not introducing interaural distortion that would degrade binaural hearing. The present study evaluates a noise reduction algorithm, referred to as binaural Fennec, that was designed to improve speech reception in background noise while preserving binaural cues. Speech reception thresholds were measured for normal-hearing listeners in a simulated environment with target speech generated in front of the listener and background noise originating 90° to the right of the listener. Lateralization thresholds were also measured in the presence of background noise. These measures were conducted in anechoic and reverberant environments. Results indicate that the algorithm improved speech reception thresholds, even in highly reverberant environments. Results indicate that the algorithm also improved lateralization thresholds for the anechoic environment while not affecting lateralization thresholds for the reverberant environments. These results provide clear evidence that this algorithm can improve speech reception in background noise while preserving binaural cues used to lateralize sound.
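A key constraint named above is that noise reduction must not distort the interaural cues used for lateralization. One standard way to satisfy it (whether or not it is the method used by binaural Fennec, whose internals the abstract does not describe) is to apply the *same* real-valued spectral gain to both ear signals, which provably leaves interaural phase and level differences unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
frame = 512

# Hypothetical binaural frame: the right ear receives a delayed,
# attenuated copy of the left-ear signal (a lateral source).
left = rng.standard_normal(frame)
right = 0.7 * np.roll(left, 3)

L, R = np.fft.rfft(left), np.fft.rfft(right)
ipd_before = np.angle(L * np.conj(R))   # interaural phase difference
ild_before = np.abs(L) / np.abs(R)      # interaural level ratio

# A noise-reduction gain shared by both channels (values illustrative)
gain = rng.uniform(0.1, 1.0, L.shape)
L2, R2 = gain * L, gain * R

ipd_after = np.angle(L2 * np.conj(R2))
ild_after = np.abs(L2) / np.abs(R2)

# Real, positive common gains cancel out of both interaural measures
assert np.allclose(ipd_before, ipd_after)
assert np.allclose(ild_before, ild_after)
```

The trade-off is that a common gain cannot exploit the full spatial separation of target and noise per ear; it trades some noise reduction for intact binaural hearing, consistent with the design goal stated in the abstract.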
The spatial modulation effect has been reported in somatosensory spatial judgments when task-irrelevant auditory stimuli are given from the opposite direction. Two experiments examined how the spatial modulation effect on somatosensory spatial judgments is altered across body regions and their spatial locations. In experiment 1, air-puffs were presented randomly to either the left or right cheeks, hands (palm versus back) and knees while auditory stimuli were presented from just behind the ear on either the same or the opposite side. In experiment 2, air-puffs were presented to the hands, which were placed either beside the cheeks or on the knees. The participants were instructed to make speeded discrimination responses regarding the side (left versus right) of the somatosensory targets by using two footpedals. In all conditions, reaction times significantly increased when the irrelevant stimuli were presented from the opposite side rather than from the same side. We found that the backs of the hands were more influenced by incongruent auditory stimuli than the cheeks, knees and palms, and that the hands were more influenced by incongruent auditory stimuli when placed beside the cheeks than on the knees. These results indicate that the auditory-somatosensory interaction differs across body regions and their spatial locations.
…musculus. The Journal of the Acoustical Society of America 50:1193-1198. Finneran JJ, Houser DS, Mase-Guthrie B, Ewing RY, Lingenfelser RG. 2009. Auditory… …determined, the ear fat is pressed against an area of the tympano-periotic complex including the ventral portion of the glove finger. At the entry to… …decrease the sound speeds through these tissues, enhancing the waveguide effect that was discussed above. Future studies should aim to determine the…
Van der Burg, Erik; Olivers, Christian N. L.; Bronkhorst, Adelbert W.; Theeuwes, Jan
Searching for an object within a cluttered, continuously changing environment can be a very time-consuming process. The authors show that a simple auditory pip drastically decreases search times for a synchronized visual object that is normally very difficult to find. This effect occurs even though the pip contains no information on the location…
Tonelli, Alessia; Brayda, Luca; Gori, Monica
Visual information is paramount to space perception. Vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve precision of the final multisensory estimate. However, the amount or the temporal extent of visual information, that is sufficient to influence auditory perception, is still unknown. It is therefore interesting to know if vision can improve auditory precision through a short-term environmental observation preceding the audio task and whether this influence is task-specific or environment-specific or both. To test these issues we investigate possible improvements of acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (normal room and anechoic room). With respect to a baseline of auditory precision, we found an improvement of precision in the space bisection task but not in the MAA after the observation of a normal room. No improvement was found when performing the same task in an anechoic chamber. In addition, no difference was found between a condition of short environment observation and a condition of full vision during the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration. Echoes may mediate the transfer of information from the visual to the auditory system.
McMullen, Kyla A.
Although the concept of virtual spatial audio has existed for almost twenty-five years, only in the past fifteen years has modern computing technology enabled the real-time processing needed to deliver high-precision spatial audio. Furthermore, until recently the concept of virtually walking through an auditory environment had not been explored. Such an interface has numerous potential uses, ranging from enhancing sounds delivered in virtual gaming worlds to conveying spatial locations in real-time emergency response systems. To incorporate this technology in real-world systems, various concerns should be addressed. First, to widely incorporate spatial audio into real-world systems, head-related transfer functions (HRTFs) must be inexpensively created for each user. The present study further investigated an HRTF subjective selection procedure previously developed within our research group. Users discriminated auditory cues to subjectively select their preferred HRTF from a publicly available database. Next, the issue of training to find virtual sources was addressed. Listeners participated in a localization training experiment using their selected HRTFs. The training procedure was created from the characterization of successful search strategies in prior auditory search experiments. Search accuracy significantly improved after listeners performed the training procedure. Next, in the investigation of auditory spatial memory, listeners completed three search and recall tasks with differing recall methods. Recall accuracy significantly decreased in tasks that required the storage of sound source configurations in memory. To assess the impacts of practical scenarios, the present work assessed the performance effects of signal uncertainty, visual augmentation, and different attenuation modeling. Fortunately, source uncertainty did not affect listeners' ability to recall or identify sound sources. The present
Maybery, Murray T.; Clissa, Peter J.; Parmentier, Fabrice B. R.; Leung, Doris; Harsa, Grefin; Fox, Allison M.; Jones, Dylan M.
The present study investigated the binding of verbal identity and spatial location in the retention of sequences of spatially distributed acoustic stimuli. Study stimuli varying in verbal content and spatial location (e.g. V1S1, V2S2, V3S3, V4S4) were…
Herrmann, Björn; Maess, Burkhard; Hahne, Anja; Schröger, Erich; Friederici, Angela D
Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex. Copyright © 2011 Elsevier Inc. All rights reserved.
Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo
We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial, or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to attentional effects common to all three tasks within each modality, or to interactions between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of the processing of irrelevant speech that would otherwise distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.
Lodhia, Veema; Hautus, Michael J; Johnson, Blake W; Brock, Jon
The auditory processing atypicalities experienced by many individuals with autism spectrum disorder might be understood in terms of difficulties parsing the sound energy arriving at the ears into discrete auditory 'objects'. Here, we asked whether autistic adults are able to make use of two important spatial cues to auditory object formation: the relative timing and amplitude of sound energy at the left and right ears. Using electroencephalography, we measured the brain responses of 15 autistic adults and 15 age- and verbal-IQ-matched control participants as they listened to dichotic pitch stimuli, white noise stimuli in which interaural timing or amplitude differences applied to a narrow frequency band of noise typically lead to the perception of a pitch sound that is spatially segregated from the noise. Responses were contrasted with those to stimuli in which timing and amplitude cues were removed. Consistent with our previous studies, autistic adults failed to show a significant object-related negativity (ORN) for timing-based pitch, although their ORN was not significantly smaller than that of the control group. Autistic participants did show an ORN to amplitude cues, indicating that they do not experience a general impairment in auditory object formation. However, their P400 response, thought to index the later, attention-dependent aspects of auditory object formation, was missing. These findings provide further evidence of atypical auditory object processing in autism, with potential implications for understanding the perceptual and communication difficulties associated with the condition. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Scott A Stone
Many salient visual events tend to coincide with auditory events, such as seeing and hearing a car pass by. Information from the visual and auditory senses can be combined to create a stable percept of the stimulus, and having access to related, coincident visual and auditory information can help with spatial tasks such as localization. However, not all visual information has an analogous auditory percept; viewing a computer monitor, for example, produces no sound. Here, we describe a system capable of detecting salient visual events and augmenting them into localizable auditory events. The system uses a neuromorphic camera (DAVIS 240B) to detect logarithmic changes of brightness intensity in the scene, which can be interpreted as salient visual events. Participants were blindfolded and asked to use the device to detect new objects in the scene, as well as to determine the direction of motion of a moving visual object. Results suggest the system is robust enough to allow for the simple detection of new salient stimuli, as well as accurate encoding of the direction of visual motion. The approach should become still more practical as neuromorphic devices grow faster and smaller.
Zimmermann, Jacqueline F.; Moscovitch, Morris; Alain, Claude
Long-term memory (LTM) has been shown to bias attention to a previously learned visual target location. Here, we examined whether memory-predicted spatial location can facilitate the detection of a faint pure tone target embedded in real world audio clips (e.g., soundtrack of a restaurant). During an initial familiarization task, participants…
Gherri, Elena; Driver, Jon; Eimer, Martin
To investigate whether saccade preparation can modulate processing of auditory stimuli in a spatially-specific fashion, ERPs were recorded for a Saccade task, in which the direction of a prepared saccade was cued, prior to an imperative auditory stimulus indicating whether to execute or withhold that saccade. For comparison, we also ran a conventional Covert Attention task, where the same cue now indicated the direction for a covert endogenous attentional shift prior to an auditory target-nontarget discrimination. Lateralised components previously observed during cued shifts of attention (ADAN, LDAP) did not differ significantly across tasks, indicating commonalities between auditory spatial attention and oculomotor control. Moreover, in both tasks, spatially-specific modulation of auditory processing was subsequently found, with enhanced negativity for lateral auditory nontarget stimuli at cued versus uncued locations. This modulation started earlier and was more pronounced for the Covert Attention task, but was also reliably present in the Saccade task, demonstrating that the effects of covert saccade preparation on auditory processing can be similar to effects of endogenous covert attentional orienting, albeit smaller. These findings provide new evidence for similarities but also some differences between oculomotor preparation and shifts of endogenous spatial attention. They also show that saccade preparation can affect not just vision, but also sensory processing of auditory events.
Ding, Hao; Qin, Wen; Liang, Meng; Ming, Dong; Wan, Baikun; Li, Qiang; Yu, Chunshui
Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and that their activation may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects, and that these areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects.
Lvov, A. V.; Metelev, S. L.
We propose simulation models for estimating the interference immunity of radio reception using the spatial processing of signals in the airborne and ground-based communication channels of the meter and decimeter wavelength ranges. The ultimate achievable interference immunity under various radio-wave propagation conditions is studied.
Chillemi, Gaetana; Calamuneri, Alessandro; Morgante, Francesca; Terranova, Carmen; Rizzo, Vincenzo; Girlanda, Paolo; Ghilardi, Maria Felice; Quartarone, Angelo
Investigation of spatial and temporal cognitive processing in idiopathic cervical dystonia (CD) by means of specific tasks based on the perception of visual and auditory stimuli in the time and space domains. Previous psychophysiological studies have investigated temporal and spatial characteristics of the neural processing of sensory stimuli (mainly somatosensory and visual), whereas such processing at higher cognitive levels has not been sufficiently addressed. The impairment of time and space processing is likely driven by basal ganglia dysfunction; however, other cortical and subcortical areas, including the cerebellum, may also be involved. We tested 21 subjects with CD and 22 age-matched healthy controls with 4 recognition tasks exploring visuo-spatial, audio-spatial, visuo-temporal, and audio-temporal processing. Dystonic subjects were subdivided into three groups according to head movement pattern type (lateral: Laterocollis; rotation: Torticollis) as well as the presence of tremor (Tremor). We found significant alteration of spatial processing in the Laterocollis subgroup compared to controls, whereas impairment of temporal processing was observed in the Torticollis subgroup compared to controls. Our results suggest that dystonia is associated with a dysfunction of temporal and spatial processing for visual and auditory stimuli that could underlie the well-known abnormalities in sequence learning. Moreover, we suggest that different movement pattern types might lead to different dysfunctions at the cognitive level within the dystonic population.
Understanding speech in complex acoustic environments presents a challenge for most hearing-impaired listeners. In conditions where normal-hearing listeners effortlessly utilize spatial cues to improve speech intelligibility, hearing-impaired listeners often struggle. In this thesis, the influence… …with an intelligibility-weighted "efficiency factor", which revealed that the spectral characteristics of the ERs caused the reduced benefit. Hearing-impaired listeners were able to utilize the ER energy as effectively as normal-hearing listeners, most likely because binaural processing was not required… …that are binaurally linked can utilize the signals at both ears and preserve the ILDs through co-ordinated compression. Hearing-impaired listeners received a small, but not significant, advantage from linked compared to independent compression. It was concluded that, for speech intelligibility, the exact ILD…
Spitzer, Matthew W.; Bala, Avinash D. S.; Takahashi, Terry T.
In humans, directional hearing in reverberant conditions is characterized by a "precedence effect," whereby directional information conveyed by leading sounds dominates perceived location, and listeners are relatively insensitive to directional information conveyed by lagging sounds. Behavioral studies provide evidence of precedence phenomena in a wide range of species. The present study employs a discrimination paradigm, based on habituation and recovery of the pupillary dilation response, to provide quantitative measures of precedence phenomena in the barn owl. As in humans, the owl's ability to discriminate changes in the location of lagging sources is impaired relative to that for single sources. Spatial discrimination of lead sources is also impaired, but to a lesser extent than discrimination of lagging sources. Results of a control experiment indicate that sensitivity to monaural cues cannot account for discrimination of lag source location. Thus, impairment of discrimination ability in the two-source conditions most likely reflects a reduction in sensitivity to binaural directional information. These results demonstrate a similarity of precedence effect phenomena in barn owls and humans, and provide a basis for quantitative comparison with neuronal data from the same species.
Localizing and selectively attending to the source of a sound of interest in a complex auditory environment is an important capacity of the human auditory system. The underlying neural mechanisms have, however, still not been clarified in detail. This issue was addressed by using bilateral bipolar-balanced transcranial direct current stimulation (tDCS) in combination with a task demanding free-field sound localization in the presence of multiple sound sources, thus providing a realistic simulation of the so-called "cocktail-party" situation. With left-anode/right-cathode, but not with right-anode/left-cathode, montage of bilateral electrodes, tDCS over superior temporal gyrus, including planum temporale and auditory cortices, was found to improve the accuracy of target localization in left hemispace. No effects were found for tDCS over inferior parietal lobule or with off-target active stimulation over somatosensory-motor cortex that was used to control for non-specific effects. Also, the absolute error in localization remained unaffected by tDCS, thus suggesting that general response precision was not modulated by brain polarization. This finding can be explained in the framework of a model assuming that brain polarization modulated the suppression of irrelevant sound sources, thus resulting in more effective spatial separation of the target from the interfering sound in the complex auditory scene. Copyright © 2016 Elsevier Ltd. All rights reserved.
Vishakha W Rawool
Context: The ability to detect important auditory signals while performing visual tasks may be further compounded by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech-spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated-measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions, presented in random order: (1) quiet with attention; (2) quiet with a visuo-spatial task or puzzle (distraction); (3) noise with attention; and (4) noise with task. Statistical Analysis: Multivariate analysis of variance (MANOVA) with three repeated factors (quiet versus noise, visuo-spatial task versus no task, signal frequency). Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise–frequency and task–frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz) were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task, but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high-frequency sounds.
Full Text Available Context: The ability to detect important auditory signals while performing visual tasks may be further compounded by background chatter. Thus, it is important to know how task performance may interact with background chatter to hinder signal detection. Aim: To examine any interactive effects of speech spectrum noise and task performance on the ability to detect signals. Settings and Design: The setting was a sound-treated booth. A repeated measures design was used. Materials and Methods: Auditory thresholds of 20 normal adults were determined at 0.5, 1, 2 and 4 kHz in the following conditions presented in a random order: (1 quiet with attention; (2 quiet with a visuo-spatial task or puzzle (distraction; (3 noise with attention and (4 noise with task. Statistical Analysis: Multivariate analyses of variance (MANOVA with three repeated factors (quiet versus noise, visuo-spatial task versus no task, signal frequency. Results: MANOVA revealed significant main effects for noise and signal frequency and significant noise–frequency and task–frequency interactions. Distraction caused by performing the task worsened the thresholds for tones presented at the beginning of the experiment and had no effect on tones presented in the middle. At the end of the experiment, thresholds (4 kHz were better while performing the task than those obtained without performing the task. These effects were similar across the quiet and noise conditions. Conclusion: Detection of auditory signals is difficult at the beginning of a distracting visuo-spatial task but over time, task learning and auditory training effects can nullify the effect of distraction and may improve detection of high frequency sounds.
Fuhrman, Susan I; Redfern, Mark S; Jennings, J Richard; Furman, Joseph M
This study investigated whether spatial aspects of an information processing task influence dual-task interference. Two groups (Older/Young) of healthy adults participated in dual-task experiments. The two auditory information processing tasks were a frequency discrimination choice reaction time task (non-spatial task) and a lateralization choice reaction time task (spatial task). Postural tasks included combinations of standing with eyes open or eyes closed on either a fixed floor or a sway-referenced floor. Reaction times and postural sway via center of pressure were recorded. Baseline measures of reaction time and sway were subtracted from the corresponding dual-task results to calculate reaction time task costs and postural task costs. Reaction time task cost increased with eye closure (p = 0.01), sway-referenced flooring (p < 0.0001), and the spatial task (p = 0.04). Additionally, a significant (p = 0.05) task × vision × age interaction indicated that older subjects had a significant vision × task interaction whereas young subjects did not. However, when analyzed by age group, the young group showed minimal differences in interference between the spatial and non-spatial tasks with eyes open, but showed increased interference on the spatial relative to the non-spatial task with eyes closed. In contrast, older subjects demonstrated increased interference on the spatial relative to the non-spatial task with eyes open, but not with eyes closed. These findings suggest that visual-spatial interference may occur in older subjects when vision is used to maintain posture.
Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf
To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.
Avinash D S Bala
The relationship between neuronal acuity and behavioral performance was assessed in the barn owl (Tyto alba), a nocturnal raptor renowned for its ability to localize sounds and for the topographic representation of auditory space found in its midbrain. We measured discrimination of sound-source separation using a newly developed procedure involving the habituation and recovery of the pupillary dilation response (PDR). The smallest discriminable change of source location was found to be about two times finer in azimuth than in elevation. Recordings from neurons in the midbrain space map revealed that their spatial tuning, like the spatial discrimination behavior, was also better in azimuth than in elevation by a factor of about two. Because the PDR behavioral assay is mediated by the same circuitry whether discrimination is assessed in azimuth or in elevation, this difference in vertical and horizontal acuity is likely to reflect a true difference in sensory resolution, without additional confounding effects of differences in motor performance in the two dimensions. Our results, therefore, are consistent with the hypothesis that the acuity of the midbrain space map determines auditory spatial discrimination.
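The azimuthal sensitivity discussed here rests on the interaural time difference, which models of the owl's time pathway describe as a binaural cross-correlation: the ITD estimate is the lag that maximizes the cross-correlation of the two ear signals. A minimal NumPy illustration of that computation follows; the probe signal, sampling rate, delay, and sign convention are invented for the example and are not taken from the study:

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Return the ITD in seconds as the lag maximizing the left/right
    cross-correlation. Negative values mean the left ear leads."""
    assert len(left) == len(right)
    n = len(left)
    lags = np.arange(-(n - 1), n)                # lag axis of 'full' mode
    xcorr = np.correlate(left, right, mode="full")
    return lags[np.argmax(xcorr)] / fs

fs = 44100
t = np.arange(0, 0.05, 1 / fs)
tone = np.sin(2 * np.pi * 500.0 * t)             # 500 Hz probe tone
delay = 10                                       # samples, roughly 227 microseconds
left = np.concatenate([tone, np.zeros(delay)])   # left ear leads
right = np.concatenate([np.zeros(delay), tone])  # right ear delayed
itd = estimate_itd(left, right, fs)              # about -227e-6 s
```

Biologically, the cross-correlation is computed per frequency channel by coincidence detectors rather than on the broadband waveform; this sketch only shows the mathematical operation those models attribute to the circuit.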
Michael J Boivin
BACKGROUND: Using the Kaufman Assessment Battery for Children (K-ABC), Conant et al. (1999) observed that visual and auditory working memory (WM) span were independent in both younger and older children from DR Congo, but related in older American children and in Lao children. The present study evaluated whether visual and auditory WM span were independent in Ugandan and Senegalese children. METHOD: In a linear regression analysis we used visual (Spatial Memory, Hand Movements) and auditory (Number Recall) WM along with education and physical development (weight/height) as predictors. The predicted variable in this analysis was Word Order, a verbal memory task that has both visual and auditory memory components. RESULTS: Both the younger (<8.5 yrs) and older (>8.5 yrs) Ugandan children had auditory memory span (Number Recall) that was strongly predictive of Word Order performance. For both the younger and older groups of Senegalese children, only visual WM span (Spatial Memory) was strongly predictive of Word Order; Number Recall was not significantly predictive of Word Order in either age group. CONCLUSIONS: It is possible that greater literacy from more schooling for the Ugandan age groups mediated their greater degree of interdependence between auditory and verbal WM. Our findings support those of Conant et al., who observed in their cross-cultural comparisons that stronger education seemed to enhance the dominance of the phonological-auditory processing loop for WM.
Dobreva, Marina S; O'Neill, William E; Paige, Gary D
A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to
This book argues that it is time to rethink reception as a traditional paradigm for understanding the relation between the ancient Greco-Roman traditions and early Judaism and Christianity. The concept of reception implies taking something from one fixed box into another, often a chronological la...
Neil M McLachlan
Music notations use both symbolic and spatial representation systems. Novice musicians do not have the training to associate symbolic information with musical identities, such as chords or rhythmic and melodic patterns. They therefore provide an opportunity to explore the mechanisms underpinning multimodal learning when spatial encoding strategies of feature dimensions might be expected to dominate. In this study, we applied a range of transformations (such as time reversal) to short melodies and rhythms and asked novice musicians to identify them with or without the aid of notation. Performance using a purely spatial (graphic) notation was contrasted with the more symbolic, traditional Western notation over a series of weekly sessions. The results showed learning effects for both notation types, but performance improved more for graphic notation. This points to greater compatibility of auditory and visual neural codes for novice musicians when using spatial notation, suggesting that pitch and time may be spatially encoded in multimodal associative memory. The findings also point to new strategies for training novice musicians.
Singh, Gurjit; Pichora-Fuller, M Kathleen; Schneider, Bruce A
The effects of directing, switching, and misdirecting auditory spatial attention in a complex listening situation were investigated in 8 younger and 8 older listeners with normal-hearing sensitivity below 4 kHz. In two companion experiments, a target sentence was presented from one spatial location and two competing sentences were presented simultaneously, one from each of two different locations. Pretrial, listeners were informed of the call-sign cue that identified which of the three sentences was the target and of the probability of the target sentence being presented from each of the three possible locations. Four different probability conditions varied in the likelihood of the target being presented at the left, center, and right locations. In Experiment 1, four timing conditions were tested: the original (unedited) sentences (which contained about 300 msec of filler speech between the call-sign cue and the onset of the target words), or modified (edited) sentences with silent pauses of 0, 150, or 300 msec replacing the filler speech. In Experiment 2, when the cued sentence was presented from an unlikely (side) listening location, for half of the trials the listener's task was to report target words from the cued sentence (cue condition); for the remaining trials, the listener's task was to report target words from the sentence presented from the opposite, unlikely (side) listening location (anticue condition). In Experiment 1, for targets presented from the likely (center) location, word identification was better for the unedited than for modified sentences. For targets presented from unlikely (side) locations, word identification was better when there was more time between the call-sign cue and target words. All listeners benefited similarly from the availability of more compared with less time and the presentation of continuous compared with interrupted speech. In Experiment 2, the key finding was that age-related performance deficits were observed in
The aims of the present study were to investigate the ability of hearing-impaired (HI) individuals with different binaural hearing conditions to discriminate spatial auditory sources at the midline and lateral positions, and to explore the possible central processing mechanisms by measuring the minimum audible angle (MAA) and the mismatch negativity (MMN) response. To measure MAA at the left/right 0°, 45° and 90° positions, 12 normal-hearing (NH) participants and 36 patients with sensorineural hearing loss were recruited; the patients comprised 12 with symmetrical hearing loss (SHL) and 24 with asymmetrical hearing loss (AHL) [12 with unilateral hearing loss on the left (UHLL) and 12 with unilateral hearing loss on the right (UHLR)]. In addition, 128-electrode electroencephalography was used to record the MMN response in a separate group of 60 patients (20 UHLL, 20 UHLR and 20 SHL) and 20 NH participants. The results showed the MAA thresholds of the NH participants to be significantly lower than those of the HI participants. Also, a significantly smaller MAA threshold was obtained at the midline position than at the lateral position in both the NH and SHL groups. However, in the AHL group, the MAA threshold for the 90° position on the affected side was significantly smaller than the MAA thresholds obtained at other positions. Significantly reduced amplitudes and prolonged latencies of the MMN were found in the HI groups compared to the NH group. In addition, contralateral activation was found in the UHL group for sounds emanating from the 90° position on the affected side and in the NH group. These findings suggest that the abilities of spatial discrimination at the midline and lateral positions vary significantly under different hearing conditions. A reduced MMN amplitude and prolonged latency, together with bilaterally symmetrical cortical activations over the auditory hemispheres, indicate possible cortical compensatory changes associated with poor
Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan
Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
Soskey, Laura N; Allen, Paul D; Bennetto, Loisa
One of the earliest observable impairments in autism spectrum disorder (ASD) is a failure to orient to speech and other social stimuli. Auditory spatial attention, a key component of orienting to sounds in the environment, has been shown to be impaired in adults with ASD. Additionally, specific deficits in orienting to social sounds could be related to increased acoustic complexity of speech. We aimed to characterize auditory spatial attention in children with ASD and neurotypical controls, and to determine the effect of auditory stimulus complexity on spatial attention. In a spatial attention task, target and distractor sounds were played randomly in rapid succession from speakers in a free-field array. Participants attended to a central or peripheral location, and were instructed to respond to target sounds at the attended location while ignoring nearby sounds. Stimulus-specific blocks evaluated spatial attention for simple non-speech tones, speech sounds (vowels), and complex non-speech sounds matched to vowels on key acoustic properties. Children with ASD had significantly more diffuse auditory spatial attention than neurotypical children when attending front, indicated by increased responding to sounds at adjacent non-target locations. No significant differences in spatial attention emerged based on stimulus complexity. Additionally, in the ASD group, more diffuse spatial attention was associated with more severe ASD symptoms but not with general inattention symptoms. Spatial attention deficits have important implications for understanding social orienting deficits and atypical attentional processes that contribute to core deficits of ASD. Autism Res 2017, 10: 1405-1416. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.
Full Text Available Auditory perceptual and visual-spatial characteristics of subjective tinnitus evoked by eye gaze were studied in two adult human subjects. This uncommon form of tinnitus occurred approximately 4-6 weeks following neurosurgery for gross total excision of space-occupying lesions of the cerebellopontine angle, and hearing was lost in the operated ear. In both cases, the gaze-evoked tinnitus was characterized as being tonal in nature, with pitch and loudness percepts remaining constant as long as the same horizontal or vertical eye directions were maintained. Tinnitus was absent when the eyes were in a neutral head-referenced position with subjects looking straight ahead. The results and implications of ophthalmological assessment, standard and modified visual field assessment, pure-tone audiometric assessment, spontaneous otoacoustic emission testing, and detailed psychophysical assessment of pitch and loudness are discussed.
Several attributes of auditory spatial imagery associated with stereophonic sound reproduction are strongly modulated by variation in interaural cross correlation (IACC) within low-frequency bands. Nonetheless, a standard practice in bass management for two-channel and multichannel loudspeaker reproduction is to mix low-frequency musical content to a single channel for reproduction via a single driver (e.g., a subwoofer). This paper reviews the results of psychoacoustic studies which support the conclusion that reproduction via multiple drivers of decorrelated low-frequency signals significantly affects such important spatial attributes as auditory source width (ASW), auditory source distance (ASD), and listener envelopment (LEV). A variety of methods have been employed in these tests, including forced-choice discrimination and identification, and direct ratings of both global dissimilarity and distinct attributes. Contrary to assumptions that underlie industrial standards established in 1994 by ITU-R Recommendation BS.775-1, these findings imply that substantial stereophonic spatial information exists within audio signals at frequencies below the 80 to 120 Hz range of prescribed subwoofer cutoff frequencies, and that loudspeaker reproduction of decorrelated signals at frequencies as low as 50 Hz can have an impact upon auditory spatial imagery. [Work supported by VRQ.]
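The IACC manipulated in such studies is commonly quantified as the maximum of the normalized cross-correlation between the two channel signals over a small lag range. A rough sketch of that computation (the formulation and parameter choices are assumptions, not taken from the paper):

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Max of the normalized cross-correlation between two channels over +/-1 ms lags."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left[lag:] * right[:len(right) - lag])
        else:
            c = np.sum(left[:lag] * right[-lag:])
        best = max(best, c / norm)
    return best

fs = 8000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 60 * t)                  # 60 Hz tone, 1 s
rng = np.random.default_rng(0)
n1, n2 = rng.standard_normal(fs), rng.standard_normal(fs)

print(round(iacc(tone, tone, fs), 2))              # identical channels -> 1.0
print(iacc(n1, n2, fs) < 0.5)                      # decorrelated channels -> True
```

Routing both channels' bass to one subwoofer forces the low-frequency IACC toward 1, which is exactly the manipulation the reviewed studies show to be audible.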
This paper reviews the literature and reports on the current state of knowledge regarding the potential for managers to use visual (VC), auditory (AC), and olfactory (OC) cues to manage foraging behavior and spatial distribution of rangeland livestock. We present evidence that free-ranging livestock...
Erdener, Dogu; Burnham, Denis
Despite the body of research on auditory-visual speech perception in infants and schoolchildren, development in the early childhood period remains relatively uncharted. In this study, English-speaking children between three and four years of age were investigated for: (i) the development of visual speech perception--lip-reading and visual…
Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde
Sensory experiences have important roles in the functional development of the mammalian auditory cortex. Here, we show how early continuous noise rearing influences spatial sensitivity in the rat primary auditory cortex (A1) and its underlying mechanisms. By rearing infant rat pups under conditions of continuous, moderate-level white noise, we found that noise rearing markedly attenuated the spatial sensitivity of A1 neurons. Compared with rats reared under normal conditions, spike counts of A1 neurons were more poorly modulated by changes in stimulus location, and their preferred locations were distributed over a larger area. We further show that early continuous noise rearing induced significant decreases in glutamic acid decarboxylase 65 and gamma-aminobutyric acid (GABA)(A) receptor alpha1 subunit expression, and an increase in GABA(A) receptor alpha3 expression, which indicates a return to the juvenile form of the GABA(A) receptor, with no effect on the expression of N-methyl-D-aspartate receptors. These observations indicate that noise rearing has powerful adverse effects on the maturation of cortical GABAergic inhibition, which might be responsible for the reduced spatial sensitivity.
Widmann, Andreas; Schröger, Erich
The presented study was designed to investigate ERP effects of auditory spatial attention in a sustained attention condition (where the to-be-attended location is defined in a blockwise manner) and in a transient attention condition (where the to-be-attended location is defined in a trial-by-trial manner). Lateralization in the azimuth plane was manipulated (a) via monaural presentation of left- and right-ear sounds, (b) via interaural intensity differences, (c) via interaural time differences, (d) via an artificial-head recording, and (e) via free-field stimulation. Ten participants were presented with frequent Nogo- and infrequent Go-stimuli. In one half of the experiment, participants were instructed to press a button if they detected a Go-stimulus at a predefined side (sustained attention); in the other half, they were required to detect Go-stimuli following an arrow-cue at the cued side (transient attention). Results revealed negative differences (Nd) between ERPs elicited by to-be-attended and to-be-ignored sounds in all conditions. These Nd-effects were larger for the sustained than for the transient attention condition, indicating that attentional selection according to spatial criteria is improved when subjects can focus on one and the same location for a series of stimuli.
Wolter, Sibylla; Dudschig, Carolin; de la Vega, Irmgard; Kaup, Barbara
This study investigated whether the spatial terms high and low, when used in sentence contexts implying a non-literal interpretation, trigger similar spatial associations as would have been expected from the literal meaning of the words. In three experiments, participants read sentences describing either a high or a low auditory event (e.g., The soprano sings a high aria vs. The pianist plays a low note). In all Experiments, participants were asked to judge (yes/no) whether the sentences were meaningful by means of up/down (Experiments 1 and 2) or left/right (Experiment 3) key press responses. Contrary to previous studies reporting that metaphorical language understanding differs from literal language understanding with regard to simulation effects, the results show compatibility effects between sentence implied pitch height and response location. The results are in line with grounded models of language comprehension proposing that sensory motor experiences are being elicited when processing literal as well as non-literal sentences. Copyright © 2014 Elsevier B.V. All rights reserved.
Kaliuzny, M. P.; Bushuev, F. I.; Sibiriakova, Ye. S.; Shulga, O. V.; Shakun, L. S.; Bezrukovs, V.; Kulishenko, V. F.; Moskalenko, S. S.; Malynovsky, Ye. V.; Balagura, O. A.
The results of the determination of the geostationary satellite "Eutelsat-13B" orbital position obtained during 2015-2016 using a European network of stations for reception of DVB-S signals from the satellite are presented. The network consists of five stations located in Ukraine and Latvia. The stations are equipped with a radio engineering complex developed by the RI "MAO". The measured parameter is the time difference of arrival (TDOA) of the DVB-S signals at the stations of the network. The errors of the TDOA determination and of the satellite coordinates, obtained using a numerical model of satellite motion, are ±2.6 m and ±35 m, respectively. The software implementation of the numerical model is taken from the free space dynamics library OREKIT.
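The TDOA measurement underlying this method can be illustrated with a toy cross-correlation delay estimator; the signal, sample rate, and delay below are fabricated stand-ins, not the actual DVB-S processing chain:

```python
import numpy as np

def estimate_tdoa(sig_a, sig_b, fs):
    """Estimate the arrival delay of sig_b relative to sig_a (in seconds)
    from the peak of their cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)   # lag in samples
    return lag / fs

fs = 1_000_000                                 # 1 MHz sample rate (illustrative)
rng = np.random.default_rng(1)
x = rng.standard_normal(10_000)                # noise-like signal at station A
delay = 37                                     # samples of extra travel time to station B
y = np.concatenate([np.zeros(delay), x])[: len(x)]

print(estimate_tdoa(x, y, fs))                 # -> 3.7e-05 (37 samples at 1 MHz)
```

Each TDOA constrains the satellite to a hyperboloid relative to a station pair; combining the pairs in the five-station network with the orbital motion model yields the position fix.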
Neher, Tobias; Laugesen, Søren; Jensen, Niels Søgaard; Kragelund, Louise
This study aimed to clarify the basic auditory and cognitive processes that affect listeners' performance on two spatial listening tasks: sound localization and speech recognition in spatially complex, multi-talker situations. Twenty-three elderly listeners with mild-to-moderate sensorineural hearing impairments were tested on the two spatial listening tasks, a measure of monaural spectral ripple discrimination, a measure of binaural temporal fine structure (TFS) sensitivity, and two (visual) cognitive measures indexing working memory and attention. All auditory test stimuli were spectrally shaped to restore (partial) audibility for each listener on each listening task. Eight younger normal-hearing listeners served as a control group. Data analyses revealed that the chosen auditory and cognitive measures could predict neither sound localization accuracy nor speech recognition when the target and maskers were separated along the front-back dimension. When the competing talkers were separated along the left-right dimension, however, speech recognition performance was significantly correlated with the attentional measure. Furthermore, supplementary analyses indicated additional effects of binaural TFS sensitivity and average low-frequency hearing thresholds. Altogether, these results are in support of the notion that both bottom-up and top-down deficits are responsible for the impaired functioning of elderly hearing-impaired listeners in cocktail party-like situations. © 2011 Acoustical Society of America
Ruggles, Dorea; Shinn-Cunningham, Barbara
Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.
...by postulating that an object with a perceived location in space could have both visual and auditory properties. A connection was added between the... fundamental pitch improves the discrimination of simultaneous vowel sounds (surveyed by Darwin, 2008). As a simple way to incorporate this effect, we... simultaneous speakers. J. Acoust. Soc. Am. 110(3), 1101-1109. Darwin, C. J. (2008). Listening to speech in the presence of other sounds. Philosophical...
Karns, Christina M.; Isbell, Elif; Giuliano, Ryan J.; Neville, Helen J.
Auditory selective attention is a critical skill for goal-directed behavior, especially where noisy distractions may impede focusing attention. To better understand the developmental trajectory of auditory spatial selective attention in an acoustically complex environment, in the current study we measured auditory event-related potentials (ERPs) in human children across five age groups: 3–5 years; 10 years; 13 years; 16 years; and young adults using a naturalistic dichotic listening paradigm, characterizing the ERP morphology for nonlinguistic and linguistic auditory probes embedded in attended and unattended stories. We documented robust maturational changes in auditory evoked potentials that were specific to the types of probes. Furthermore, we found a remarkable interplay between age and attention-modulation of auditory evoked potentials in terms of morphology and latency from the early years of childhood through young adulthood. The results are consistent with the view that attention can operate across age groups by modulating the amplitude of maturing auditory early-latency evoked potentials or by invoking later endogenous attention processes. Development of these processes is not uniform for probes with different acoustic properties within our acoustically dense speech-based dichotic listening task. In light of the developmental differences we demonstrate, researchers conducting future attention studies of children and adolescents should be wary of combining analyses across diverse ages. PMID:26002721
...the presence of primacy and recency effects, resulting in a large number of errors in which listeners erroneously selected the loudspeaker that had... the sound source that produced this sound. As in the previous studies mentioned, pronounced primacy and recency effects were found. Further research...
...the surrounding space and the location and position of our own body within it. Thus, it is the multisensory awareness of being immersed in a specific... improves situational awareness, speech perception, and sound source identification in the presence of other sound sources (e.g., Bronkhorst, 2000; Kidd et... the ventriloquism effect (VE) (Howard and Templeton, 1966), in which the listener perceives the ventriloquist's speech as coming from the ventriloquist's dummy. The...
Lineweaver, Tara T; Kercood, Suneeta; O'Keeffe, Nicole B; O'Brien, Kathleen M; Massey, Eric J; Campbell, Samantha J; Pierce, Jenna N
Two studies addressed how young adult college students with attention deficit hyperactivity disorder (ADHD) (n = 44) compare to their nonaffected peers (n = 42) on tests of auditory and visual-spatial working memory (WM), are vulnerable to auditory and visual distractions, and are affected by a simple intervention. Students with ADHD demonstrated worse auditory WM than did controls. A near significant trend indicated that auditory distractions interfered with the visual WM of both groups and that, whereas controls were also vulnerable to visual distractions, visual distractions improved visual WM in the ADHD group. The intervention was ineffective. Limited correlations emerged between self-reported ADHD symptoms and objective test performances; students with ADHD who perceived themselves as more symptomatic often had better WM and were less vulnerable to distractions than their ADHD peers.
Liston, Matthew B; Bergmann, Jeroen H; Keating, Niamh; Green, David A; Pavlou, Marousa
Many daily activities require appropriate allocation of attention between postural and cognitive tasks (i.e. dual-tasking) to be carried out effectively. Processing multiple streams of spatial information is important for everyday tasks such as road crossing. Fifteen community-dwelling healthy older (mean age=78.3, male=1) and twenty younger adults (mean age=25.3, male=6) completed a novel bimodal spatial multi-task test providing contextually similar spatial information via separate sensory modalities to investigate effects on postural prioritization. Two tasks, a temporally random visually coded spatial step navigation task (VS) and a regular auditory-coded spatial congruency task (AS) were performed independently (single task) and in combination (multi-task). Response time, accuracy and dual-task costs (% change in multi-task condition) were determined. Results showed a significant 3-way interaction between task type (VS vs. AS), complexity (single vs. multi) and age group for both response time (p ≤ 0.01) and response accuracy (p ≤ 0.05) with older adults performing significantly worse than younger adults. Dual-task costs were significantly greater for older compared to younger adults in the VS step task for both response time (p ≤ 0.01) and accuracy (p ≤ 0.05) indicating prioritization of the AS over the VS stepping task in older adults. Younger adults display greater AS task response time dual task costs compared to older adults (p ≤ 0.05) indicating VS task prioritization in agreement with the posture first strategy. Findings suggest that novel dual modality spatial testing may lead to adoption of postural strategies that deviate from posture first, particularly in older people. Adoption of previously unreported postural prioritization strategies may influence balance control in older people. Copyright © 2013 Elsevier B.V. All rights reserved.
Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand
In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener's ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sensitivity of single neurons in the auditory midbrain of anesthetized cats follows a similar time course, although onset dominance in temporal response patterns results in more robust directional sensitivity than expected, suggesting a simple mechanism for improving directional sensitivity in reverberation. In parallel behavioral experiments, we demonstrate that human lateralization judgments are consistent with predictions from a population rate model decoding the observed midbrain responses, suggesting a subcortical origin for robust sound localization in reverberant environments.
To date, a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient-coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes fo...
Schwartz, Andrew H; Shinn-Cunningham, Barbara G
Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
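The ILD effect described here can be seen with a toy level-domain compressor: independent compression applies more gain reduction to the louder ear, shrinking the ILD, whereas linking the two compressors applies a common gain and leaves the ILD intact. The threshold, ratio, and levels are illustrative assumptions, not parameters from the study:

```python
def comp_gain_db(level_db, threshold_db=50.0, ratio=3.0):
    """Gain (in dB, <= 0) of a simple static compressor above threshold."""
    if level_db <= threshold_db:
        return 0.0
    # output rises only 1/ratio dB per dB of input above threshold
    return (threshold_db + (level_db - threshold_db) / ratio) - level_db

left_db, right_db = 70.0, 60.0                     # source off to the left: 10 dB ILD

# independent compression: each ear's gain follows its own level
ind_ild = (left_db + comp_gain_db(left_db)) - (right_db + comp_gain_db(right_db))
print(round(ind_ild, 1))                           # -> 3.3 (ILD compressed)

# linked compression: both ears share the gain of the louder ear
g = comp_gain_db(max(left_db, right_db))
linked_ild = (left_db + g) - (right_db + g)
print(linked_ild)                                  # -> 10.0 (ILD preserved)
```

Fast syllabic compression additionally makes these gains fluctuate from moment to moment with whichever talker dominates, which is why the study varied compression speed as well as linking.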
Rutkowski, Tomasz M
The paper reviews nine robotic and virtual reality (VR) brain-computer interface (BCI) projects developed by the author, in collaboration with his graduate students, within the BCI-lab research group during its association with the University of Tsukuba, Japan. The nine novel approaches are discussed in applications to direct brain-robot and brain-virtual-reality-agent control interfaces using tactile and auditory BCI technologies. The BCI user's intentions are decoded from the brainwaves in real time using non-invasive electroencephalography (EEG) and are translated into thought-based control of a symbiotic robot or virtual reality agent. A communication protocol between the BCI output and the robot or the virtual environment is realized in a symbiotic communication scenario using a user datagram protocol (UDP), which constitutes an internet of things (IoT) control scenario. Results obtained from healthy users reproducing simple brain-robot and brain-virtual-agent control tasks in online experiments support the research goal of interacting with robotic devices and virtual reality agents using symbiotic thought-based BCI technologies. An offline BCI classification accuracy boosting method, using a previously proposed information-geometry-derived approach, is also discussed in order to further support the reviewed robotic and virtual reality thought-based control paradigms.
Pak, Richard; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D
Age-related differences in spatial ability have been suggested as a mediator of age-related differences in computer-based task performance. However, the vast majority of tasks studied have primarily used a visual display (e.g., graphical user interfaces). In the current study, the relationship between spatial ability and performance in a non-visual computer-based navigation task was examined in a sample of 196 participants ranging in age from 18 to 91. Participants called into a simulated interactive voice response system and carried out a variety of transactions. They also completed measures of attention, working memory, and spatial abilities. The results showed that age-related differences in spatial ability predicted a significant amount of variance in performance in the non-visual computer task, even after controlling for other abilities. Understanding the abilities that influence performance with technology may provide insight into the source of age-related performance differences in the successful use of technology.
Lin, Gaven; Carlile, Simon
Following a multi-talker conversation relies on the ability to rapidly and efficiently shift the focus of spatial attention from one talker to another. The current study investigated the listening costs associated with shifts in spatial attention during conversational turn-taking in 16 normally-hearing listeners using a novel sentence recall task. Three pairs of syntactically fixed but semantically unpredictable matrix sentences, recorded from a single male talker, were presented concurrently through an array of three loudspeakers (directly ahead and +/-30° azimuth). Subjects attended to one spatial location, cued by a tone, and followed the target conversation from one sentence to the next using the call-sign at the beginning of each sentence. Subjects were required to report the last three words of each sentence (speech recall task) or answer multiple choice questions related to the target material (speech comprehension task). The reading span test, attention network test, and trail making test were also administered to assess working memory, attentional control, and executive function. There was a 10.7 ± 1.3% decrease in word recall, a pronounced primacy effect, and a rise in masker confusion errors and word omissions when the target switched location between sentences. Switching costs were independent of the location, direction, and angular size of the spatial shift but did appear to be load dependent and only significant for complex questions requiring multiple cognitive operations. Reading span scores were positively correlated with total words recalled, and negatively correlated with switching costs and word omissions. Task switching speed (Trail-B time) was also significantly correlated with recall accuracy. Overall, this study highlights (i) the listening costs associated with shifts in spatial attention and (ii) the important role of working memory in maintaining goal relevant information and extracting meaning from dynamic multi-talker conversations.
Wang, Hongyan; Zhang, Gaoyan; Liu, Baolin
Semantic priming is an important research topic in the field of cognitive neuroscience. Previous studies have shown that the uni-modal semantic priming effect can be modulated by attention. However, the influence of attention on cross-modal semantic priming is unclear. To investigate this issue, the present study combined a cross-modal semantic priming paradigm with an auditory spatial attention paradigm, presenting the visual pictures as the prime stimuli and the semantically related or unrelated sounds as the target stimuli. Event-related potentials results showed that when the target sound was attended to, the N400 effect was evoked. The N400 effect was also observed when the target sound was not attended to, demonstrating that the cross-modal semantic priming effect persists even though the target stimulus is not focused on. Further analyses revealed that the N400 effect evoked by the unattended sound was significantly lower than the effect evoked by the attended sound. This contrast provides new evidence that the cross-modal semantic priming effect can be modulated by attention.
Full Text Available Both judgment and receptivity are important to optimal politics, and both are important to each other. In making this argument, I use an Arendtian conception of judgment and take mindfulness as an example of receptivity. I argue that receptivity offers a needed dimension to addressing the puzzles of what makes Arendtian judgment possible, and that judgment provides a necessary complement to receptivity for action in the world. Exploring this complementary relation between judgment and receptivity also reveals a surprising similarity between what each offers to the practice of politics, in particular to freedom and the possibility of transformation. At the same time, I argue, these important contributions to politics are best understood and realized if judgment and receptivity are thought of as distinct forms of relating to the world.
Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can be changed with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain, while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (the null point). In Experiments 1 and 2, the participants indicated whether the sound was presented forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion. The underlying mechanism might involve anticipatory spatial
Jacobson, Mark W; Delis, Dean C; Bondi, Mark W; Salmon, David P
Some studies of elderly individuals with the ApoE-e4 genotype noted subtle deficits on tests of attention such as the WAIS-R Digit Span subtest, but these findings have not been consistently reported. One possible explanation for the inconsistent results could be the presence of subgroups of e4+ individuals with asymmetric cognitive profiles (i.e., significant discrepancies between verbal and visuospatial skills). Comparing genotype groups with individual, modality-specific tests might obscure subtle differences between verbal and visuospatial attention in these asymmetric subgroups. In this study, we administered the WAIS-R Digit Span and WMS-R Visual Memory Span subtests to 21 nondemented elderly e4+ individuals and 21 elderly e4- individuals matched on age, education, and overall cognitive ability. We hypothesized that (a) the e4+ group would show a higher incidence of asymmetric cognitive profiles when comparing Digit Span/Visual Memory Span performance relative to the e4- group; and (b) an analysis of individual test performance would fail to reveal differences between the two subject groups. Although the groups' performances were comparable on the individual attention span tests, the e4+ group showed a significantly larger discrepancy between digit span and spatial span scores compared to the e4- group. These findings suggest that contrast measures of modality-specific attentional skills may be more sensitive to subtle group differences in at-risk groups, even when the groups do not differ on individual comparisons of standardized test means. The increased discrepancy between verbal and visuospatial attention may reflect the presence of "subgroups" within the ApoE-e4 group that are qualitatively similar to asymmetric subgroups commonly associated with the earliest stages of AD.
A tool based on language-learning theory for the development of IT-supported materials and programs for language reception.
volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.
A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative
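The Gaussian-derivative receptive-field profiles derived in this theory are easy to sketch numerically. The following is a minimal illustration (not the paper's own code): it samples 1-D Gaussian derivative kernels and combines them separably into a 2-D spatial profile qualitatively resembling an oriented simple-cell receptive field.

```python
import numpy as np

def gaussian_derivative(sigma, order, radius=None):
    """Sampled 1-D Gaussian (order 0) or its first/second derivative."""
    if radius is None:
        radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    if order == 0:
        return g
    if order == 1:
        return -x / sigma**2 * g                  # odd, edge-detector-like profile
    if order == 2:
        return (x**2 - sigma**2) / sigma**4 * g   # even, bar-detector-like profile
    raise ValueError("order must be 0, 1, or 2")

# Separable 2-D receptive field: Gaussian smoothing along one axis,
# first-order Gaussian derivative along the other.
rf = np.outer(gaussian_derivative(2.0, 0), gaussian_derivative(2.0, 1))
```

Higher orders and spatio-temporal variants follow the same separable construction, with scale parameters per dimension.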
Novak, Joseph D.
Presented is a paradigm for science education research. The paradigm advances the reception learning theory, where regularities to be learned are presented explicitly to the learner. A tool for the study of knowledge production in science education, the Gowin "V," is presented. (RE)
Dai, Lengshi; Shinn-Cunningham, Barbara G
Listeners with normal hearing thresholds (NHTs) differ in their ability to steer attention to whatever sound source is important. This ability depends on top-down executive control, which modulates the sensory representation of sound in the cortex. Yet, this sensory representation also depends on the coding fidelity of the peripheral auditory system. Both of these factors may thus contribute to the individual differences in performance. We designed a selective auditory attention paradigm in which we could simultaneously measure envelope following responses (EFRs, reflecting peripheral coding), onset event-related potentials (ERPs) from the scalp (reflecting cortical responses to sound) and behavioral scores. We performed two experiments that varied stimulus conditions to alter the degree to which performance might be limited due to fine stimulus details vs. due to control of attentional focus. Consistent with past work, in both experiments we find that attention strongly modulates cortical ERPs. Importantly, in Experiment I, where coding fidelity limits the task, individual behavioral performance correlates with subcortical coding strength (derived by computing how the EFR is degraded for fully masked tones compared to partially masked tones); however, in this experiment, the effects of attention on cortical ERPs were unrelated to individual subject performance. In contrast, in Experiment II, where sensory cues for segregation are robust (and thus less of a limiting factor on task performance), inter-subject behavioral differences correlate with subcortical coding strength. In addition, after factoring out the influence of subcortical coding strength, behavioral differences are also correlated with the strength of attentional modulation of ERPs. These results support the hypothesis that behavioral abilities amongst listeners with NHTs can arise due to both subcortical coding differences and differences in attentional control, depending on stimulus characteristics
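The "factoring out" step in the analysis above is essentially a partial correlation. A minimal numpy sketch of the idea on synthetic data (variable names and effect sizes here are illustrative stand-ins, not the study's data):

```python
import numpy as np

def partial_corr(y, x, covar):
    """Correlation between y and x after regressing covar out of both."""
    def residual(v, c):
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return np.corrcoef(residual(y, covar), residual(x, covar))[0, 1]

# Synthetic stand-ins: behavior driven by both subcortical coding
# strength and cortical attentional modulation.
rng = np.random.default_rng(0)
coding = rng.normal(size=200)    # "EFR-derived subcortical coding strength"
attn = rng.normal(size=200)      # "attentional modulation of cortical ERPs"
behav = 0.8 * coding + 0.5 * attn + 0.1 * rng.normal(size=200)
```

After the coding covariate is regressed out, the residual behavioral variance correlates strongly with the attention measure, which is the structure of the Experiment II result.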
Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...
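The interaural cross-correlation function at the heart of this model can be sketched in a few lines. Here a broadband noise is given a known interaural delay, and the lag maximizing the normalized cross-correlation recovers it (toy parameters, not Ando's implementation):

```python
import numpy as np

fs = 48000                                  # sample rate (Hz), illustrative
rng = np.random.default_rng(1)
sig = rng.normal(size=fs // 10)             # 100 ms broadband noise "source"
itd_samples = 12                            # imposed delay at the right ear (250 µs)
left = sig
right = np.roll(sig, itd_samples)           # sound arrives at the left ear first

# Normalized interaural cross-correlation over a ±1 ms lag window;
# its argmax recovers the imposed interaural time difference.
max_lag = 48
lags = np.arange(-max_lag, max_lag + 1)
iacc = [np.corrcoef(left, np.roll(right, -lag))[0, 1] for lag in lags]
est_itd = int(lags[int(np.argmax(iacc))])
```

In Ando's framework, the width and height of this cross-correlation peak are further related to perceived source width and diffuseness; the sketch covers only the delay estimate.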
Slevc, L Robert; Shell, Alison R
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.
Education and Technology Transfer Unit/ETT-EC
Friday 15.10.2004 CERN 50th Anniversary articles will be sold in the Main Building, ground floor on Friday 15th October from 10h00 to 16h00. T-shirt (S, M, L, XL) 20.- K-way (M, L, XL) 20.- Silk tie (2 models) 30.- Einstein tie 45.- Umbrella 20.- Caran d'Ache pen 5.- 50th Anniversary Pen 5.- Kit of Cartoon Album & Crayons 10.- All the articles are also available at the Reception Shop in Building 33 from Monday to Saturday between 08.30 and 17.00 hrs. Education and Technology Transfer Unit/ETT-EC
Dietz, Birte; Manahan-Vaughan, Denise
Long-term potentiation (LTP) and long-term depression (LTD) are key cellular processes that support memory formation. Whereas increases of synaptic strength by means of LTP may support the creation of a spatial memory 'engram', LTD appears to play an important role in refining and optimising experience-dependent encoding. A differentiation in the role of hippocampal subfields is apparent. For example, LTD in the dentate gyrus (DG) is enabled by novel learning about large visuospatial features, whereas in area CA1, it is enabled by learning about discrete aspects of spatial content, whereby both discrete visuospatial and olfactospatial cues trigger LTD in CA1. Here, we explored to what extent local audiospatial cues facilitate information encoding in the form of LTD in these subfields. Coupling of low frequency afferent stimulation (LFS) with discretely localised, novel auditory tones in the sonic hearing, or ultrasonic range, facilitated short-term depression (STD) into LTD (>24 h) in CA1, but not DG. Re-exposure to the now familiar audiospatial configuration ca. 1 week later failed to enhance STD. Reconfiguration of the same audiospatial cues resulted anew in LTD when ultrasound, but not non-ultrasound cues were used. LTD facilitation triggered by novel exposure to spatially arranged tones, or by spatial reconfiguration of the same tones, was prevented by antagonism of the metabotropic glutamate receptor mGlu5. These data indicate that, if behaviourally salient enough, the hippocampus can use audiospatial cues to facilitate LTD that contributes to the encoding and updating of spatial representations. Effects are subfield-specific, and require mGlu5 activation, as is the case for visuospatial information processing. These data reinforce the likelihood that LTD supports the encoding of spatial features, and that this occurs in a qualitative and subfield-specific manner. They also support that mGlu5 is essential for synaptic encoding of spatial
Fallon, James B; Irving, Sam; Pannu, Satinderpall S; Tooker, Angela C; Wise, Andrew K; Shepherd, Robert K; Irvine, Dexter R F
Current source density analysis of recordings from penetrating electrode arrays has traditionally been used to examine the layer- specific cortical activation and plastic changes associated with changed afferent input. We report on a related analysis, the second spatial derivative (SSD) of surface local field potentials (LFPs) recorded using custom designed thin-film polyimide substrate arrays. SSD analysis of tone- evoked LFPs generated from the auditory cortex under the recording array demonstrated a stereotypical single local minimum, often flanked by maxima on both the caudal and rostral sides. In contrast, tone-pips at frequencies not represented in the region under the array, but known (on the basis of normal tonotopic organization) to be represented caudal to the recording array, had a more complex pattern of many sources and sinks. Compared to traditional analysis of LFPs, SSD analysis produced a tonotopic map that was more similar to that obtained with multi-unit recordings in a normal-hearing animal. Additionally, the statistically significant decrease in the number of acoustically responsive cortical locations in partially deafened cats following 6 months of cochlear implant use compared to unstimulated cases observed with multi-unit data (p=0.04) was also observed with SSD analysis (p=0.02), but was not apparent using traditional analysis of LFPs (p=0.6). SSD analysis of surface LFPs from the thin-film array provides a rapid and robust method for examining the spatial distribution of cortical activity with improved spatial resolution compared to more traditional LFP recordings. Copyright © 2016 Elsevier B.V. All rights reserved.
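The SSD itself is a discrete second difference across electrode positions. A toy numpy illustration (hypothetical electrode spacing and a synthetic LFP profile, not the study's recordings): the activated region under the array shows up as a single local minimum of the SSD, as described above.

```python
import numpy as np

positions = np.linspace(0.0, 4.0, 17)            # electrode positions (mm), hypothetical
lfp = np.exp(-((positions - 2.0) ** 2) / 0.5)    # toy evoked surface-LFP profile
dx = positions[1] - positions[0]

# Second spatial derivative via the discrete [1, -2, 1] stencil,
# scaled by the electrode spacing squared.
ssd = np.convolve(lfp, [1.0, -2.0, 1.0], mode="valid") / dx**2
sink_index = int(np.argmin(ssd)) + 1             # +1 maps back to electrode indices
```

The minimum of the SSD coincides with the peak of the evoked profile, while flanking maxima appear on either side, matching the stereotypical pattern reported for tone-evoked responses.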
Bernstein, Joshua G. W.; Danielsson, Henrik; Hällgren, Mathias; Stenfelt, Stefan; Rönnberg, Jerker; Lunner, Thomas
The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function, spectrotemporal modulation (STM) sensitivity, and SRTs in noise was examined for 154 HI listeners fitted with...
To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform, Independent Component Analysis (ICA), trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.
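The linear efficient-coding transform used here, ICA, can be illustrated compactly. The sketch below implements symmetric FastICA with a tanh contrast in plain numpy and unmixes a toy 2×2 instantaneous mixture of non-Gaussian (Laplacian) sources; spectrogram inputs and the hierarchical extension are beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
s = rng.laplace(size=(2, 5000))                   # independent non-Gaussian sources
A = np.array([[1.0, 0.5], [0.3, 1.0]])            # arbitrary mixing matrix
x = A @ s                                         # observed mixtures

# Center and whiten the observations.
x = x - x.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x

# Symmetric FastICA fixed-point iteration with tanh nonlinearity.
W = rng.normal(size=(2, 2))
for _ in range(200):
    g = np.tanh(W @ z)
    W = (g @ z.T) / z.shape[1] - np.diag((1 - g**2).mean(axis=1)) @ W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt                                    # symmetric decorrelation
recovered = W @ z
```

Each row of `recovered` matches one of the original sources up to sign and permutation, which is the standard ICA identifiability guarantee.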
Devore, Sasha; Ihlefeld, Antje; Hancock, Kenneth; Shinn-Cunningham, Barbara; Delgutte, Bertrand
In reverberant environments, acoustic reflections interfere with the direct sound arriving at a listener’s ears, distorting the spatial cues for sound localization. Yet, human listeners have little difficulty localizing sounds in most settings. Because reverberant energy builds up over time, the source location is represented relatively faithfully during the early portion of a sound, but this representation becomes increasingly degraded later in the stimulus. We show that the directional sens...
Some might argue that reception analysis is a remnant of the past in an age where "people formerly known as the audience" (Rosen, 2006) are producing and circulating content on a diversity of interactive and participatory media platforms. Far from being the case, reception research must continue… on social media can help us better understand the participatory media culture that has established itself over the past decade. To properly address the question of meaning, however, reception research needs to be adapted to the current media landscape. Taking my point of departure in the multi… (motivation, comprehension, discrimination, position, implementation) for their relevance and explanatory power in today's media landscape, suggesting new interpretations and new formulations. A revision of reception research does not only concern the notion of reception itself, but also that of the text…
Temchin, Andrei N; Recio-Spinoso, Alberto; Cai, Hongxue; Ruggero, Mario A
Spatial magnitude and phase profiles for inner hair cell (IHC) depolarization throughout the chinchilla cochlea were inferred from responses of auditory-nerve fibers (ANFs) to threshold- and moderate-level tones and tone complexes. Firing-rate profiles for frequencies ≤2 kHz are bimodal, with the major peak at the characteristic place and a secondary peak at 3-5 mm from the extreme base. Response-phase trajectories are synchronous with peak outward stapes displacement at the extreme cochlear base and accumulate 1.5 period lags at the characteristic places. High-frequency phase trajectories are very similar to the trajectories of basilar-membrane peak velocity toward scala tympani. Low-frequency phase trajectories undergo a polarity flip in a region, 6.5-9 mm from the cochlear base, where traveling-wave phase velocity attains a local minimum and a local maximum and where the onset latencies of near-threshold impulse responses computed from responses to near-threshold white noise exhibit a local minimum. That region is the same where frequency-threshold tuning curves of ANFs undergo a shape transition. Since depolarization of IHCs presumably indicates the mechanical stimulus to their stereocilia, the present results suggest that distinct low-frequency forward waves of organ of Corti vibration are launched simultaneously at the extreme base of the cochlea and at the 6.5-9 mm transition region, from where antiphasic reflections arise.
Paulo, Ana; Zaal, Frank T. J. M.; Fonseca, Sofia; Araújo, Duarte
Serve and serve-reception performance have predicted success in volleyball. Given the impact of serve-reception on the game, we aimed at understanding what it is in the serve and receiver's actions that determines the selection of the type of pass used in serve-reception and its efficacy. Four high-level volleyball players received jump-float serves from four servers in two reception zones—zone 1 and 5. The ball and the receiver's head were tracked with two video cameras, allowing 3D world-coordinates reconstruction. Logistic-regression models were used to predict the type of pass used (overhand or underhand) and serve-reception efficacy (error, out, or effective) from variables related with the serve kinematics and related with the receiver's on-court positioning and movement. Receivers' initial position was different when in zone 1 and 5. This influenced the serve-related variables as well as the type of pass used. Strong predictors of using an underhand rather than overhand pass were higher ball contact of the server, reception in zone 1, receiver's initial position more to the back of the court and backward receiver movement. Receiver's larger longitudinal displacements and an initial position more to the back of the court had a strong relationship with the decreasing of the serve-reception efficacy. Receivers' positioning and movement were the factors with the largest impact on the type of pass used and the efficacy of the reception. Reception zone affected the variance in the ball's kinematics (with the exception of the ball's lateral displacement), as well as in the receivers' positioning (distances from the net and from the target). Also the reception zone was associated with the type of pass used by the receiver but not with reception efficacy. Given volleyball's rotation rule, the receiver needs to master receiving in the different reception zones; he/she needs to adapt to the diverse constraints of each zone to maintain performance efficacy. Thus
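The logistic-regression modelling described above can be sketched as follows. The predictors, effect directions, and data here are synthetic stand-ins chosen to mimic the reported findings (higher server contact, deeper initial position, and backward movement favouring an underhand pass); nothing below is the study's actual data or code.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Hypothetical predictors: server contact height (m), receiver's initial
# distance from the net (m), longitudinal displacement (m, positive = backward).
X = np.column_stack([
    rng.normal(2.9, 0.15, n),
    rng.normal(6.0, 1.0, n),
    rng.normal(0.0, 0.5, n),
])
# Toy generative rule: all three factors push towards an underhand pass (class 1).
logits = 4.0 * (X[:, 0] - 2.9) + 0.8 * (X[:, 1] - 6.0) + 1.5 * X[:, 2]
y = (logits + rng.logistic(size=n) > 0).astype(float)

# Standardize predictors and fit logistic regression by gradient descent
# on the average log-loss.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
Zb = np.column_stack([np.ones(n), Z])
w = np.zeros(4)
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-Zb @ w))
    w -= 0.1 * Zb.T @ (p - y) / n
acc = (((1.0 / (1.0 + np.exp(-Zb @ w))) > 0.5) == (y == 1)).mean()
```

The fitted coefficients recover the signs built into the generative rule, which is the kind of interpretation the study draws from its serve- and receiver-related predictors.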
Vedora, Joseph; Barry, Tiffany
The current study extended research on picture prompts by using them with a progressive prompt delay to teach receptive labeling of pictures to 2 teenagers with autism. The procedure differed from prior research because the auditory stimulus was not presented or was presented only once during the picture-prompt condition. The results indicated…
Petursdottir, Anna Ingeborg; Aguilar, Gabriella
Receptive identification is usually taught in matching-to-sample format, which entails the presentation of an auditory sample stimulus and several visual comparison stimuli in each trial. Conflicting recommendations exist regarding the order of stimulus presentation in matching-to-sample trials. The purpose of this study was to compare acquisition…
Simon, Jonathan Z
Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.
Cooper, Neil, fl 1983-2004, photographer
A slide showing Indira Gandhi, Indian Prime Minister and Chair of the Conference, Kenneth Kaunda, President of Zambia, and Robert and Sally Mugabe, President and First Lady of Zimbabwe, in discussion at the President's Reception.
At a reception on 28 January, the CERN management presented their best wishes for 2009 to politicians and representatives of the administrations in the local area, and diplomats representing CERN’s Member States, Observer States and other countries.
Martin, Stephanie; Mikutta, Christian; Leonard, Matthew K; Hungate, Dylan; Koelsch, Stefan; Shamma, Shihab; Chang, Edward F; Millán, José Del R; Knight, Robert T; Pasley, Brian N
Despite many behavioral and neuroimaging investigations, it remains unclear how the human cortex represents spectrotemporal sound features during auditory imagery, and how this representation compares to auditory perception. To assess this, we recorded electrocorticographic signals from an epileptic patient with proficient music ability in 2 conditions. First, the participant played 2 piano pieces on an electronic piano with the sound volume of the digital keyboard on. Second, the participant replayed the same piano pieces, but without auditory feedback, and the participant was asked to imagine hearing the music in his mind. In both conditions, the sound output of the keyboard was recorded, thus allowing precise time-locking between the neural activity and the spectrotemporal content of the music imagery. This novel task design provided a unique opportunity to apply receptive field modeling techniques to quantitatively study neural encoding during auditory mental imagery. In both conditions, we built encoding models to predict high gamma neural activity (70-150 Hz) from the spectrogram representation of the recorded sound. We found robust spectrotemporal receptive fields during auditory imagery with substantial, but not complete overlap in frequency tuning and cortical location compared to receptive fields measured during auditory perception. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: firstname.lastname@example.org.
Korkman, M; Granström, M L; Appelqvist, K; Liukkonen, E
The Landau-Kleffner Syndrome (LKS) is characterized by acquired receptive aphasia and EEG abnormality with onset between the ages of 3 and 8 years. This study presents neuropsychological assessments in 5 children with LKS. The aims were (1) to specify the neuropsychological deficits characteristic of these children; and (2) to clarify the nature of the receptive aphasia by comparing nonverbal and verbal auditory discrimination. Receptive aphasia was present in all children. Retardation, poor motor coordination, hyperkinesia, and conduct problems were frequent but variable. All children exhibited a dissociation between the discrimination of environmental sounds and phonological auditory discrimination, the latter being more impaired than the former. This suggests that the primary deficit of the receptive aphasia is an impairment of auditory phonological discrimination rather than a generalized auditory agnosia.
Members of the personnel are invited to take note that only parcels corresponding to official orders or contracts will be handled at CERN. Individuals are not authorised to have private merchandise delivered to them at CERN and private deliveries will not be accepted by the Goods Reception services. Thank you for your understanding.
Cortis Mack, Cathleen; Dent, Kevin; Ward, Geoff
Three experiments examined the immediate free recall (IFR) of auditory-verbal and visuospatial materials from single-modality and dual-modality lists. In Experiment 1, we presented participants with between 1 and 16 spoken words, with between 1 and 16 visuospatial dot locations, or with between 1 and 16 words "and" dots with synchronized…
Lankinen, Asa; Hellriegel, Barbara; Bernasconi, Giorgina
In flowering plants, the onset and duration of female receptivity vary among species. In several species the receptive structures wilt upon pollination. Here we explore the hypothesis that postpollination wilting may be influenced by pollen and serve as a general means to secure paternity of the pollen donor at the expense of female fitness. Taking a game-theoretical approach, we examine the potential for the evolution of a pollen-borne wilting substance, and for the coevolution of a defense strategy by the recipient plant. The model without defense predicts an evolutionarily stable strategy (ESS) for the production of wilting substance. The ESS value is highest when pollinator visiting rates are intermediate and when the probability that pollen from several donors arrives at the same time is low. This finding has general implications in that it shows that male traits to secure paternity also can evolve in species, such as plants, where mating is not strictly sequential. We further model coevolution of the wilting substance with the timing of stigma receptivity. We assume that pollen-receiving plants can reduce the costs induced by toxic pollen by delaying the onset of stigmatic receptivity. The model predicts a joint ESS, but no female counter-adaptation when the wilting substance is highly toxic. This indicates that toxicity affects the probability that a male manipulative trait stays beneficial (i.e., not countered by female defense) over evolutionary time. We discuss parallels to male induced changes in female receptivity known to occur in animals and the role of harm for the evolution of male manipulative adaptations.
Razak, Khaleel A; Yarrow, Stuart; Brewton, Dustin
The auditory cortex is necessary for sound localization. The mechanisms that shape bicoordinate spatial representation in the auditory cortex remain unclear. Here, we addressed this issue by quantifying spatial receptive fields (SRFs) in two functionally distinct cortical regions in the pallid bat. The pallid bat uses echolocation for obstacle avoidance and listens to prey-generated noise to localize prey. Its cortex contains two segregated regions of response selectivity that serve echolocation and localization of prey-generated noise. The main aim of this study was to compare 2D SRFs between neurons in the noise-selective region (NSR) and the echolocation region [frequency-modulated sweep-selective region (FMSR)]. The data reveal the following major differences between these two regions: (1) compared with NSR neurons, SRF properties of FMSR neurons were more strongly dependent on sound level; (2) as a population, NSR neurons represent a broad region of contralateral space, while FMSR selectivity was focused near the midline at sound levels near threshold and expanded considerably with increasing sound levels; and (3) the SRF size and centroid elevation were correlated with the characteristic frequency in the NSR, but not the FMSR. These data suggest different mechanisms of sound localization for two different behaviors. Previously, we reported that azimuth is represented by predictable changes in the extent of activated cortex. The present data indicate how elevation constrains this activity pattern. These data suggest a novel model for bicoordinate spatial representation that is based on the extent of activated cortex resulting from the overlap of binaural and tonotopic maps. Unlike the visual and somatosensory systems, spatial information is not directly represented at the sensory receptor epithelium in the auditory system. Spatial locations are computed by integrating neural binaural properties and frequency-dependent pinna filtering, providing a useful model
Tremblay, Sebastien; Parmentier, Fabrice B. R.; Guerard, Katherine; Nicholls, Alastair P.; Jones, Dylan M.
In 2 experiments, the authors tested whether the classical modality effect--that is, the stronger recency effect for auditory items relative to visual items--can be extended to the spatial domain. An order reconstruction task was undertaken with four types of material: visual-spatial, auditory-spatial, visual-verbal, and auditory-verbal.…
Lotfi, Yones; Moosavi, Abdollah; Abdollahi, Farzaneh Zamiri; BAKHSHI, Enayatollah; Sadjedi, Hamed
Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in auditory stimuli processing in nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulties which have been overlooked despite their significance. Localization is an auditory ability to detect sound sources in space and can help to differentiate between the desired speech from other simultaneous sound sources. Aim o...
Van Esch, T E M; Lutman, M E; Vormann, M; Lyzenga, J; Hällgren, M; Larsby, B; Athalye, S P; Houtgast, T; Kollmeier, B; Dreschler, W A
The aim of the present study was to investigate how well the virtual psychophysical measures of spatial hearing from the preliminary auditory profile predict self-reported spatial-hearing abilities. Virtual spatial-hearing tests (conducted unaided, via headphones) and a questionnaire were administered in five centres in Germany, the Netherlands, Sweden, and the UK. Correlations and stepwise linear regression models were calculated among a group of hearing-impaired listeners. Participants were 30 normal-hearing listeners aged 19-39 years and 72 hearing-impaired listeners aged 22-91 years with a broad range of hearing losses, including asymmetrical and mixed hearing losses. Several significant correlations (between 0.24 and 0.54) were found between results of virtual psychophysical spatial-hearing tests and self-reported localization abilities. Stepwise linear regression analyses showed that the minimum audible angle (MAA) test was a significant predictor for self-reported localization abilities (5% extra explained variance), and the spatial speech reception threshold (SRT) benefit test for self-reported listening to speech in spatial situations (6% extra explained variance). The MAA test and spatial SRT benefit test are indicative measures of everyday binaural functioning. The binaural SRT benefit test was not found to predict self-reported spatial-hearing abilities.
Hackett, Troy A; Rinaldi Barkat, Tania; O'Brien, Barbara M J
The mouse sensory neocortex is reported to lack several hallmark features of topographic organization such as ocular dominance and orientation columns in primary visual cortex or fine-scale tonotopy in primary auditory cortex (AI). Here, we re-examined the question of auditory functional topography by aligning ultra-dense receptive field maps from the auditory cortex and thalamus of the mouse in vivo with the neural circuitry contained in the auditory thalamocortical slice in vitro. We observed precisely organized tonotopic maps of best frequency (BF) in the middle layers of AI and the anterior auditory field as well as in the ventral and medial divisions of the medial geniculate body (MGBv and MGBm, respectively). Tracer injections into distinct zones of the BF map in AI retrogradely labeled topographically organized MGBv projections and weaker, mixed projections from MGBm. Stimulating MGBv along...
The present study investigates (i) English as Foreign Language (EFL) learners' receptive collocational knowledge growth in relation to their linguistic proficiency level; (ii) how much receptive collocational knowledge is acquired as linguistic proficiency develops; and (iii) the extent to which receptive knowledge of ...
Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar
Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB of the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. No fatigue was observed in terms of sensory receptors. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.
Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, the perception of a wrong stimulus or, better put, perception in the absence of a stimulus. Here we will discuss four definitions of hallucination: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, i.e. wrong perceptions that are not falsifications of a real perception, although they manifest as a new subject and occur alongside, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject; 4. in the stricter sense, hallucinations as perceptions in a conscious and awake state in the absence of external stimuli which have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.
Vercillo, Tiziana; Burr, David; Gori, Monica
A recent study has shown that congenitally blind adults, who have never had visual experience, are impaired on an auditory spatial bisection task (Gori, Sandini, Martinoli, & Burr, 2014). In this study we investigated how thresholds for auditory spatial bisection and auditory discrimination develop with age in sighted and congenitally blind…
Zeidman, Peter; Silson, Edward Harry; Schwarzkopf, Dietrich Samuel; Baker, Chris Ian; Penny, Will
We introduce a probabilistic (Bayesian) framework and associated software toolbox for mapping population receptive fields (pRFs) based on fMRI data. This generic approach is intended to work with stimuli of any dimension and is demonstrated and validated in the context of 2D retinotopic mapping. The framework enables the experimenter to specify generative (encoding) models of fMRI timeseries, in which experimental stimuli enter a pRF model of neural activity, which in turn drives a nonlinear model of neurovascular coupling and Blood Oxygenation Level Dependent (BOLD) response. The neuronal and haemodynamic parameters are estimated together on a voxel-by-voxel or region-of-interest basis using a Bayesian estimation algorithm (variational Laplace). This offers several novel contributions to receptive field modelling. The variance/covariance of parameters are estimated, enabling receptive fields to be plotted while properly representing uncertainty about pRF size and location. Variability in the haemodynamic response across the brain is accounted for. Furthermore, the framework introduces formal hypothesis testing to pRF analysis, enabling competing models to be evaluated based on their log model evidence (approximated by the variational free energy), which represents the optimal tradeoff between accuracy and complexity. Using simulations and empirical data, we found that parameters typically used to represent pRF size and neuronal scaling are strongly correlated, which is taken into account by the Bayesian methods we describe when making inferences. We used the framework to compare the evidence for six variants of pRF model using 7 T functional MRI data and we found a circular Difference of Gaussians (DoG) model to be the best explanation for our data overall. We hope this framework will prove useful for mapping stimulus spaces with any number of dimensions onto the anatomy of the brain. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
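The first stage of the encoding pipeline described in this abstract (stimulus → pRF → neural drive) can be sketched in a few lines. The following Python fragment is an illustrative toy, not the toolbox's implementation: it assumes a binary stimulus aperture and an isotropic 2D Gaussian pRF (the simplest of the model variants one might compare), and it omits the haemodynamic model and Bayesian estimation stages entirely; the names `gaussian_prf` and `predict_neural` are hypothetical.

```python
import numpy as np

def gaussian_prf(x0, y0, sigma, grid_x, grid_y):
    """Isotropic 2D Gaussian pRF over a stimulus grid (degrees of visual angle)."""
    return np.exp(-((grid_x - x0) ** 2 + (grid_y - y0) ** 2) / (2 * sigma ** 2))

def predict_neural(stimulus, prf):
    """Neural drive = overlap of the binary stimulus aperture with the pRF per frame.

    stimulus: (T, ny, nx) binary apertures; prf: (ny, nx).
    """
    return np.tensordot(stimulus, prf, axes=([1, 2], [0, 1]))

# Toy example: a vertical bar sweeping left to right across a 21x21 grid.
ny = nx = 21
xs = np.linspace(-10, 10, nx)
grid_x, grid_y = np.meshgrid(xs, xs)
T = nx
stimulus = np.zeros((T, ny, nx))
for t in range(T):
    stimulus[t, :, t] = 1.0  # bar occupies column t at time t

prf = gaussian_prf(x0=2.0, y0=0.0, sigma=1.5, grid_x=grid_x, grid_y=grid_y)
drive = predict_neural(stimulus, prf)
# The drive peaks when the bar crosses the pRF centre (x0 = 2.0 -> column 12).
print(int(np.argmax(drive)))
```

In the full framework this drive would then pass through a nonlinear neurovascular model before being compared with measured BOLD; here it simply demonstrates how a pRF turns a stimulus sequence into a predicted activity timeseries.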
Full Text Available It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency and the overall shape of tonal receptive field, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity.
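The level-dependent translation this abstract describes can be captured compactly: above the critical level, the whole tonal receptive field (TRF) shifts upward by the excess noise level, so the characteristic frequency and TRF shape are preserved while the frequency range responsive at a fixed probe level shrinks. The sketch below is a minimal model under assumed numbers: the V-shaped threshold curve, the frequencies, and the 30 dB critical level are all illustrative, not values fitted to the recordings.

```python
import numpy as np

# Hypothetical V-shaped tonal receptive field: threshold (dB SPL) vs frequency (kHz).
freqs = np.linspace(1, 32, 64)
cf = 8.0  # characteristic frequency
quiet_threshold = 20 + 15 * np.abs(np.log2(freqs / cf))

CRITICAL_LEVEL = 30.0  # assumed critical noise level, dB SPL

def shifted_threshold(noise_level):
    """Noise above the critical level translates the whole TRF upward linearly."""
    shift = max(0.0, noise_level - CRITICAL_LEVEL)
    return quiet_threshold + shift

def responding_bandwidth(noise_level, probe_level):
    """Width (kHz) of the frequency range driven by a tone at probe_level."""
    thr = shifted_threshold(noise_level)
    responsive = freqs[thr <= probe_level]
    return 0.0 if responsive.size == 0 else responsive[-1] - responsive[0]

# CF and TRF shape are preserved; bandwidth at a fixed 60 dB probe shrinks in noise,
# i.e. frequency selectivity at that probe level is enhanced.
bw_quiet = responding_bandwidth(noise_level=0, probe_level=60)
bw_noise = responding_bandwidth(noise_level=50, probe_level=60)
print(bw_quiet > bw_noise)  # True
```

Note that shifting the TRF up by a constant is mathematically equivalent to reducing the stimulus intensity by the same amount, which is the paper's closing point.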
Benito-Gonzalez, Ana; Doetzlhofer, Angelika
Mechano-sensory hair cells (HCs), housed in the inner ear cochlea, are critical for the perception of sound. In the mammalian cochlea, differentiation of HCs occurs in a striking basal-to-apical and medial-to-lateral gradient, which is thought to ensure correct patterning and proper function of the auditory sensory epithelium. Recent studies have revealed that Hedgehog signaling opposes HC differentiation and is critical for the establishment of the graded pattern of auditory HC differentiation. However, how Hedgehog signaling interferes with HC differentiation is unknown. Here, we provide evidence that in the murine cochlea, Hey1 and Hey2 control the spatiotemporal pattern of HC differentiation downstream of Hedgehog signaling. It has been recently shown that HEY1 and HEY2, two highly redundant HES-related transcriptional repressors, are highly expressed in supporting cell (SC) and HC progenitors (prosensory cells), but their prosensory function remained untested. Using a conditional double knock-out strategy, we demonstrate that prosensory cells form and proliferate properly in the absence of Hey1 and Hey2 but differentiate prematurely because of precocious upregulation of the pro-HC factor Atoh1. Moreover, we demonstrate that prosensory-specific expression of Hey1 and Hey2 and its subsequent graded downregulation is controlled by Hedgehog signaling in a largely FGFR-dependent manner. In summary, our study reveals a critical role for Hey1 and Hey2 in prosensory cell maintenance and identifies Hedgehog signaling as a novel upstream regulator of their prosensory function in the mammalian cochlea. The regulatory mechanism described here might be a broadly applied mechanism for controlling progenitor behavior in the central and peripheral nervous system. Copyright © 2014 the authors 0270-6474/14/3412865-12$15.00/0.
Shoumaker, R D; Ajax, E T; Schenkenberg, T
The selective inability to comprehend the spoken word, in the absence of aphasia or defective hearing, is defined as pure word deafness (auditory verbal agnosia). Reported cases of this rare disorder have suggested the site of involvement to be strategically placed, interrupting fibers from the left and right primary auditory receptive areas which project to Wernicke's area in the dominant hemisphere. Our patient is a 44-year-old male who suffered from an uncertain illness complicated by fever, jaundice and generalized seizures seven years previously. Following an apparent convulsion, the patient was noted to be unable to understand spoken language, without loss of the ability to recognize and respond to sounds or marked impairment of speech or reading. The evidence suggested bilateral cerebral hemisphere disease more marked on the right. The abrupt onset without progression is consistent with a vascular or ischemic etiology. Conclusions about the nature of the lesion and the areas involved must await further studies and, ultimately, tissue examination.
Full Text Available The object of this case study is to analyze the quality of the logistics department, focusing on the audit process. The purpose of this paper is to present the advantages resulting from systematic audit processes and from methods for analysing and correcting the nonconformities found. The case study was carried out at SC Miele Tehnica SRL Brasov, the twelfth production line and the fourth outside Germany. The specific objectives are: clarifying the concept of quality audit; emphasizing the requirements of ISO 19011:2003, "Guidelines for auditing quality management systems and/or environment", on audits; carrying out the quality audit and analysing performance; improving the performance of the materials goods-reception process; and ensuring compliance with the auditing legislation and standards applicable in the EU and Romania.
Procko, Carl; Lu, Yun; Shaham, Shai
Neuronal receptive endings, such as dendritic spines and sensory protrusions, are structurally remodeled by experience. How receptive endings acquire their remodeled shapes is not well understood. In response to environmental stressors, the nematode Caenorhabditis elegans enters a diapause state, termed dauer, which is accompanied by remodeling of sensory neuron receptive endings. Here, we demonstrate that sensory receptive endings of the AWC neurons in dauers remodel in the confines of a compartment defined by the amphid sheath (AMsh) glial cell that envelops these endings. AMsh glia remodel concomitantly with and independently of AWC receptive endings to delimit AWC receptive ending growth. Remodeling of AMsh glia requires the OTD/OTX transcription factor TTX-1, the fusogen AFF-1 and probably the vascular endothelial growth factor (VEGFR)-related protein VER-1, all acting within the glial cell. ver-1 expression requires direct binding of TTX-1 to ver-1 regulatory sequences, and is induced in dauers and at high temperatures. Our results demonstrate that stimulus-induced changes in glial compartment size provide spatial constraints on neuronal receptive ending growth.
Plakke, Bethany; Romanski, Lizabeth M.
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931
Members of the personnel are invited to take note that only parcels corresponding to official orders or contracts will be handled at CERN. Individuals are not authorised to have private merchandise delivered to them at CERN and private deliveries will not be accepted by the Goods Reception services. Goods Reception Services
Percy-Smith, Lone; Tønning, Tenna Lindbjerg; Josvassen, Jane Lignel
subjects, respectively. The two cohorts had different speech and language intervention following cochlear implantation, i.e. standard habilitation vs. auditory verbal (AV) intervention. Three tests of speech and language were applied covering language areas of receptive and productive vocabulary … and Social Affairs recommend basing the habilitation on principles from AV practice. It should be noted that a minority of children use spoken language with sign support. For this group it is, however, still important that educational services provide auditory skills training.
Full Text Available The aim of this work was to verify the effect of transporting females by car on advancing the state of receptivity in young broiler rabbit females. We used nulliparous females of the broiler hybrid HYCOLE (age 4-5 months, weight 3.5-3.8 kg). The experiment was carried out twice: first in mid-November (31 females) and again in mid-February (32 females). Females were placed individually in boxes and then transported by car for 1 hour (50 km). Before and after the experiment we determined the state of receptivity from the coloration of the vulva, scored from 1 to 4 (1 – anemic coloration of the vulva, 2 – pink, 3 – red, 4 – violet). Transport had a positive effect on receptivity. In November the average receptivity score was 1.87 before transport and 2.25 after transport. Receptivity improved in 12 females (38.71%): from 1 to 2 in 4 females and from 2 to 3 in 8 females; no improvement from 2 to 4 or from 3 to 4 was observed in this group. Receptivity was unchanged in 19 females (61.29%): 2 females remained in state 1, 15 in state 2, 2 in state 3, and none in state 4. In February, transport by car improved the average receptivity score from 2.19 to 2.65. Receptivity improved in 13 females (40.63%): from 1 to 2 in 1 female, from 2 to 3 in 8 females, from 2 to 4 in 2 females, and from 3 to 4 in 2 females. In 19 females (59.38%) no change in receptivity was observed: 2 females were in state 1, 11 in state 2, 5 in state 3, and 1 in state 4.
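The reported group means can be cross-checked against the per-female transitions: each one-step improvement in a single female raises the group mean by 1/N. A quick arithmetic check (the computation is ours, using only the figures stated in the abstract):

```python
# November: 31 females, mean 1.87 before transport.
# Improvements: 4 females moved 1->2 and 8 moved 2->3, each a single step.
nov_steps = 4 * 1 + 8 * 1
nov_after = 1.87 + nov_steps / 31   # each step raises the mean by 1/31
print(round(nov_after, 2))          # 2.26, within rounding of the reported 2.25

# February: 32 females, mean 2.19 before transport.
# Improvements: 1 moved 1->2, 8 moved 2->3, 2 moved 2->4 (two steps), 2 moved 3->4.
feb_steps = 1 * 1 + 8 * 1 + 2 * 2 + 2 * 1
feb_after = 2.19 + feb_steps / 32
print(round(feb_after, 2))          # 2.66, within rounding of the reported 2.65
```

The counts are also internally consistent: 12 improved + 19 unchanged = 31 in November, and 13 + 19 = 32 in February.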
Robert J Thoma
Full Text Available Functional MRI studies have identified a distributed set of brain activations to be associated with auditory verbal hallucinations (AVH). However, very little is known about how activated brain regions may be linked together into AVH-generating networks. Fifteen volunteers with schizophrenia or schizoaffective disorder pressed buttons to indicate onset and offset of AVH during fMRI scanning. When a general linear model (GLM) was used to compare BOLD signals during periods in which subjects indicated that they were versus were not experiencing AVH ('AVH-on' versus 'AVH-off'), it revealed AVH-related activity in bilateral inferior frontal and superior temporal regions; the right middle temporal gyrus; and the left insula, supramarginal gyrus, inferior parietal lobule and extra-nuclear white matter. In an effort to identify AVH-related networks, the raw data were also processed using independent component analyses (ICA). Four ICA components were spatially consistent with an a priori network framework based upon published meta-analyses of imaging correlates of AVH. Of these four components, only a network involving bilateral auditory cortices and posterior receptive language areas was significantly and positively correlated with the pattern of AVH-on versus AVH-off. The ICA also identified two additional networks (occipital-temporal and medial pre-frontal), not fully matching the meta-analysis framework, but nevertheless containing nodes reported as active in some studies of AVH. Both networks showed significant AVH-related profiles, but both were most active during AVH-off periods. Overall, the data suggest that AVH generation requires specific and selective activation of auditory cortical and posterior language regions, perhaps coupled to a release of indirect influence by occipital and medial frontal structures.
Zahorik, P.; Brungart, D.S.; Bronkhorst, A.W.
Although auditory distance perception is a critical component of spatial hearing, it has received substantially less scientific attention than the directional aspects of auditory localization. Here we summarize current knowledge on auditory distance perception, with special emphasis on recent
Christopher Engelhard; Ralph P Diensthuber; Andreas Möglich; Robert Bittl
.... Using electron-electron double resonance (ELDOR) spectroscopy and site-directed spin labelling, we chart the structural transitions facilitating blue-light reception in the engineered light-oxygen-voltage (LOV...
LANCELOT, CÉLINE; SAMSON, SÉVERINE; AHAD, PIERRE; BAULAC, MICHEL
Abstract: To investigate auditory spatial and nonspatial short‐term memory, a sound location discrimination task and an auditory object discrimination task were used in patients with medial temporal lobe resection...
Updike, C; Thornburg, J D
The effect of recurrent middle ear disease during the first 2 years of life on auditory perceptual skills and reading ability was examined in two groups of 6- and 7-year-old children who were pair-matched by age, gender, socioeconomic status, and receptive vocabulary. Group 1 consisted of children with documented chronic otitis media at an early age, and group 2 had no history of middle ear problems. Tests of auditory perceptual skills and reading ability were administered. Significant differences in performance on all tests of auditory processing ability and reading ability were noted.
Losada, Juan M; Herrero, María
Stigmatic receptivity plays a clear role in pollination dynamics; however, little is known about the factors that confer to a stigma the competence to be receptive for the germination of pollen grains. In this work, a developmental approach is used to evaluate the acquisition of stigmatic receptivity and its relationship with a possible change in arabinogalactan-proteins (AGPs). Flowers of the domestic apple, Malus × domestica, were assessed for their capacity to support pollen germination at different developmental stages. Stigmas from these same stages were characterized morphologically and different AGP epitopes detected by immunocytochemistry. Acquisition of stigmatic receptivity and the secretion of classical AGPs from stigmatic cells occurred concurrently and followed the same spatial distribution. While in unpollinated stigmas AGPs appeared unaltered, in cross-pollinated stigmas AGP epitopes vanished as pollen tubes passed by. The concurrent secretion of AGPs with the acquisition of stigmatic receptivity, together with the differential response in unpollinated and cross-pollinated pistils, points to a role of AGPs in supporting pollen germination and strongly suggests that secretion of AGPs is associated with the acquisition of stigma receptivity.
Full Text Available The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets.
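The contrast between the spike-triggered average and a prior-based (MAP) estimate can be sketched with a simple ridge shrinkage prior standing in for ALD's richer locality prior. This is an illustrative simulation under assumed parameters, not the paper's method.

```python
import numpy as np

# Minimal sketch: estimate a linear receptive field from white-noise stimuli
# and spikes, via (a) the spike-triggered average (STA) and (b) a ridge
# (Gaussian-prior MAP) regression -- a crude stand-in for ALD's locality prior.

rng = np.random.default_rng(1)
dim, n = 20, 500
true_rf = np.exp(-0.5 * ((np.arange(dim) - 10) / 2.0) ** 2)  # localized bump

stim = rng.normal(size=(n, dim))                   # white-noise stimuli
drive = stim @ true_rf
p_spike = 1.0 / (1.0 + np.exp(-(drive - 2.0)))     # sigmoid spiking nonlinearity
spikes = (rng.random(n) < p_spike).astype(float)

# STA: mean stimulus on spike trials (unbiased direction for white noise)
sta = spikes @ stim / spikes.sum()

# Ridge: MAP estimate under a zero-mean Gaussian prior, shrinking noisy bins
lam = 10.0
ridge = np.linalg.solve(stim.T @ stim + lam * np.eye(dim), stim.T @ spikes)

for name, est in [("STA", sta), ("ridge", ridge)]:
    r = np.corrcoef(est, true_rf)[0, 1]
    print(f"{name} correlation with true RF: {r:.2f}")
```

ALD replaces the fixed ridge penalty with a hierarchical prior whose locality hyperparameters are fit by empirical Bayes, which is what yields the reported gains on small datasets.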
Liang, Feixue; Bai, Lin; Tao, Huizhong W.; Zhang, Li I.; Xiao, Zhongju
It is generally thought that background noise can mask auditory information. However, how the noise specifically transforms neuronal auditory processing in a level-dependent manner remains to be carefully determined. Here, with in vivo loose-patch cell-attached recordings in layer 4 of the rat primary auditory cortex (A1), we systematically examined how continuous wideband noise of different levels affected receptive field properties of individual neurons. We found that the background noise, when above a certain critical/effective level, resulted in an elevation of intensity threshold for tone-evoked responses. This increase of threshold was linearly dependent on the noise intensity above the critical level. As such, the tonal receptive field (TRF) of individual neurons was translated upward as an entirety toward high intensities along the intensity domain. This resulted in preserved preferred characteristic frequency (CF) and the overall shape of TRF, but reduced frequency responding range and an enhanced frequency selectivity for the same stimulus intensity. Such translational effects on intensity threshold were observed in both excitatory and fast-spiking inhibitory neurons, as well as in both monotonic and nonmonotonic (intensity-tuned) A1 neurons. Our results suggest that in a noise background, fundamental auditory representations are modulated through a background level-dependent linear shifting along intensity domain, which is equivalent to reducing stimulus intensity. PMID:25426029
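The reported level-dependent effect can be summarized as a piecewise-linear rule: below a critical noise level the threshold is unchanged, and above it the threshold rises linearly with noise level. The sketch below is our illustration, with hypothetical parameter values, not the authors' fitted model.

```python
# Toy summary of the reported effect: above a critical noise level, a
# neuron's intensity threshold for tone-evoked responses rises linearly
# with noise level, translating its tonal receptive field upward along
# the intensity axis. All parameter values are hypothetical.

def tone_threshold(noise_db, quiet_threshold_db=20.0,
                   critical_db=30.0, slope=1.0):
    """Intensity threshold (dB SPL) for tone-evoked responses in noise."""
    if noise_db <= critical_db:
        return quiet_threshold_db
    return quiet_threshold_db + slope * (noise_db - critical_db)

print(tone_threshold(25.0))  # below the critical level: unchanged -> 20.0
print(tone_threshold(50.0))  # 20 dB above the critical level -> 40.0
```

With slope near 1, this upward translation of the threshold is equivalent to reducing the effective stimulus intensity, matching the paper's interpretation.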
Reel, Leigh Ann; Hicks, Candace Bourland
Purpose: The authors assessed adult selective auditory attention to determine effects of (a) differences between the vocal/speaking characteristics of different mixed-gender pairs of masking talkers and (b) the rhythmic structure of the language of the competing speech. Method: Reception thresholds for English sentences were measured for 50…
Meyer, Arne F; Diepenbrock, Jan-Philipp; Happel, Max F K; Ohl, Frank W; Anemüller, Jörn
Analysis of sensory neurons' processing characteristics requires simultaneous measurement of presented stimuli and concurrent spike responses. The functional transformation from high-dimensional stimulus space to the binary space of spike and non-spike responses is commonly described with linear-nonlinear models, whose linear filter component describes the neuron's receptive field. From a machine learning perspective, this corresponds to the binary classification problem of discriminating spike-eliciting from non-spike-eliciting stimulus examples. The classification-based receptive field (CbRF) estimation method proposed here adapts a linear large-margin classifier to optimally predict experimental stimulus-response data and subsequently interprets learned classifier weights as the neuron's receptive field filter. Computational learning theory provides a theoretical framework for learning from data and guarantees optimality in the sense that the risk of erroneously assigning a spike-eliciting stimulus example to the non-spike class (and vice versa) is minimized. Efficacy of the CbRF method is validated with simulations and for auditory spectro-temporal receptive field (STRF) estimation from experimental recordings in the auditory midbrain of Mongolian gerbils. Acoustic stimulation is performed with frequency-modulated tone complexes that mimic properties of natural stimuli, specifically non-Gaussian amplitude distribution and higher-order correlations. Results demonstrate that the proposed approach successfully identifies correct underlying STRFs, even in cases where second-order methods based on the spike-triggered average (STA) do not. Applied to small data samples, the method is shown to converge on smaller amounts of experimental recordings and with lower estimation variance than the generalized linear model and recent information theoretic methods. Thus, CbRF estimation may prove useful for investigation of neuronal processes in response to natural stimuli and
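The CbRF idea, learning a binary classifier that separates spike-eliciting from non-spike-eliciting stimuli and reading the learned weight vector as the receptive field filter, can be sketched as follows. Plain logistic regression stands in for the paper's large-margin classifier, and all names and values are illustrative.

```python
import numpy as np

# Sketch of classification-based RF estimation: label each stimulus by
# whether it elicited a spike, train a linear classifier, and interpret
# the weight vector as the neuron's RF filter.

rng = np.random.default_rng(2)
dim, n = 15, 2000
true_rf = np.sin(np.linspace(0, np.pi, dim))       # smooth ground-truth filter

stim = rng.normal(size=(n, dim))
# Spike label: filtered stimulus plus noise crosses a threshold
labels = (stim @ true_rf + rng.normal(0, 0.5, n) > 1.0).astype(float)

# Logistic regression by batch gradient ascent (stand-in for a large-margin
# classifier such as a linear SVM)
w, b, lr = np.zeros(dim), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(stim @ w + b)))
    grad = labels - p
    w += lr * stim.T @ grad / n
    b += lr * grad.mean()

r = np.corrcoef(w, true_rf)[0, 1]
print(f"correlation between learned weights and true filter: {r:.2f}")
```

The direction of the decision boundary's normal vector is what matters here; any monotone rescaling of `w` describes the same filter shape.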
M. M. Ghasemi
Full Text Available Background: The aim of this study was to determine the auditory performance of congenitally deaf children and the effect of cochlear implantation (CI) on speech intelligibility. Methods: A prospective study was undertaken on 47 children in a pediatric tertiary referral center for CI. All children were deaf prelingually and were younger than 8 years of age. They were followed up until 5 years after implantation. Auditory performance was assessed by using the Categories of Auditory Performance (CAP) scale and a speech intelligibility rating, which evaluated the spontaneous speech of each child before and at frequent intervals for five years after implantation. Results: Pre-lingually deaf children showed significant improvement in auditory performance after implantation. Six months after implantation, 91% of children had the ability to respond to speech sounds. At the end of year one, 96% of children could discriminate speech sounds, and 84% of children who reached the three-year interval could understand common phrases without lip-reading. After cochlear implantation, the speech intelligibility rating increased significantly each year for 3 years (p<0.05) and did not plateau up to 5 years after implantation. The changes in auditory performance and speech development were parallel. Conclusion: The results indicated the ability of cochlear implantation to significantly improve auditory receptive skills and subsequently speech development in young congenitally deaf children.
Karina S Cramer
Full Text Available Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of specialized auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes, and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits.
Seeberg, Marie Louise; Bagge, Cecilie; Enger, Truls Andre
Drawing on empirical material from fieldwork among young children living with their families in two Norwegian reception centres for asylum-seekers, this article compares their realities to the norms and realities for other children in Norway. Children's spatial and social situations within the centres stand out in stark contrast to Norwegian…
Mrsic-Flogel, Thomas D; King, Andrew J; Schnupp, Jan W H
Recent studies from our laboratory have indicated that the spatial response fields (SRFs) of neurons in the ferret primary auditory cortex (A1) with best frequencies > or =4 kHz may arise from a largely linear processing of binaural level and spectral localization cues. Here we extend this analysis to investigate how well the linear model can predict the SRFs of neurons with different binaural response properties and the manner in which SRFs change with increases in sound level. We also consider whether temporal features of the response (e.g., response latency) vary with sound direction and whether such variations can be explained by linear processing. In keeping with previous studies, we show that A1 SRFs, which we measured with individualized virtual acoustic space stimuli, expand and shift in direction with increasing sound level. We found that these changes are, in most cases, in good agreement with predictions from a linear threshold model. However, changes in spatial tuning with increasing sound level were generally less well predicted for neurons whose binaural frequency-time receptive field (FTRF) exhibited strong excitatory inputs from both ears than for those in which the binaural FTRF revealed either a predominantly inhibitory effect or no clear contribution from the ipsilateral ear. Finally, we found (in agreement with other authors) that many A1 neurons exhibit systematic response latency shifts as a function of sound-source direction, although these temporal details could usually not be predicted from the neuron's binaural FTRF.
Slee, Sean J.; Young, Eric D.
The spatial location of sounds is an important aspect of auditory perception, but the ways in which space is represented are not fully understood. No space map has been found within the primary auditory pathway. However, a space map has been found in the nucleus of the brachium of the inferior colliculus (BIN), which provides a major auditory projection to the superior colliculus. We measured the spectral processing underlying auditory spatial tuning in the BIN of unanesthetized marmoset monkeys. Because neurons in the BIN respond poorly to tones and are broadly tuned, we used a broadband stimulus with random spectral shapes (RSS) from which both spatial receptive fields and frequency sensitivity can be derived. Responses to virtual space (VS) stimuli, based on the animal’s own ear acoustics, were compared with the predictions of a weight-function model of responses to the RSS stimuli. First-order (linear) weight functions had broad spectral tuning (~3 octaves), were excitatory in the contralateral ear, inhibitory in the ipsilateral ear, and biased towards high frequencies. Responses to interaural time differences and spectral cues were relatively weak. In cross-validation tests, the first-order RSS model accurately predicted the measured VS tuning curves in the majority of neurons but was inaccurate in 25% of neurons. In some cases second-order weighting functions led to significant improvements. Finally, we found a significant correlation between the degree of binaural weight asymmetry and the best azimuth. Overall, the results suggest that linear processing of interaural level difference underlies spatial tuning in the BIN. PMID:23447600
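The first-order weight-function analysis can be sketched as a least-squares regression of spike rates on per-frequency-band levels, cross-validated on held-out spectra. This is an illustrative reconstruction with hypothetical numbers, not the recorded data.

```python
import numpy as np

# Sketch of a first-order (linear) weight-function model: responses to
# random-spectral-shape (RSS) stimuli are regressed on per-band levels,
# and the fitted weights then predict responses to other spectra.

rng = np.random.default_rng(3)
n_bins, n_stim = 12, 300
true_w = np.linspace(-1.0, 2.0, n_bins)             # high-frequency bias

rss = rng.normal(0.0, 5.0, size=(n_stim, n_bins))   # random spectral shapes (dB)
rate = rss @ true_w + rng.normal(0.0, 2.0, n_stim)  # linear response + noise

# Fit first-order weights by least squares
w_hat, *_ = np.linalg.lstsq(rss, rate, rcond=None)

# Cross-validate on held-out spectra (standing in for virtual-space stimuli)
test_spec = rng.normal(0.0, 5.0, size=(50, n_bins))
pred = test_spec @ w_hat
true = test_spec @ true_w
r = np.corrcoef(pred, true)[0, 1]
print(f"prediction correlation on held-out spectra: {r:.2f}")
```

In the study the held-out stimuli were virtual-space sounds built from the animal's own ear acoustics, and second-order (quadratic) weight terms were added where the linear fit fell short.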
Heard through the ears of the Canadian composer and music teacher R. Murray Schafer the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...
Hari M Bharadwaj
Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
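The frequency-tagging readout, quantifying the steady-state response as spectral power at each stream's modulation rate, can be sketched on a simulated sensor trace. The tag frequencies and amplitudes below are hypothetical, not those used in the study.

```python
import numpy as np

# Sketch of an ASSR frequency-tagging analysis: each speech stream is
# modulated at its own rate, and attention is read out as spectral
# amplitude at the two tag frequencies. Purely illustrative simulation.

rng = np.random.default_rng(4)
fs, dur = 500.0, 10.0                      # sample rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
f_attended, f_ignored = 37.0, 43.0         # hypothetical tag frequencies

# Simulated sensor trace: strong phase-locked response at the attended
# stream's rate, weaker at the ignored stream's rate, plus noise.
sig = (2.0 * np.sin(2 * np.pi * f_attended * t)
       + 0.5 * np.sin(2 * np.pi * f_ignored * t)
       + rng.normal(0, 1.0, t.size))

spec = np.abs(np.fft.rfft(sig)) / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power_at(f):
    """Spectral amplitude at the bin nearest frequency f."""
    return spec[np.argmin(np.abs(freqs - f))]

print(power_at(f_attended) > power_at(f_ignored))  # attended tag dominates
```

The study applies this readout not at a single sensor but per cortical location after distributed source modeling, which is what allows the contralateral-auditory-cortex and lPCS effects to be separated.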
Obara, Keitaro; O'Hashi, Kazunori; Tanifuji, Manabu
Visual object information is conveyed from V1 to area TE along the ventral visual pathway with increasing receptive field (RF) sizes. The RFs of TE neurons are known to be large, but how these large RFs are shaped along the ventral visual pathway is largely unknown. In this study, we addressed this question in two aspects, static and dynamic mechanisms, by recording neural responses from macaque area TE and V4 to object stimuli presented at various locations in the visual field. As a component related to static mechanisms, we found that in area TE, but not in V4, response latencies to objects presented at the fovea differed from those to objects in the periphery. As a component of the dynamic mechanisms, we examined effects of spatial attention on the RFs of TE neurons. Spatial attention did not affect response latency but modulated response magnitudes depending on the attended location, shifting the longitudinal axis of RFs toward the attended locations. In standard models of large RF formation, downstream neurons pool information from nearby RFs, and this process is repeated across the visual field and at each step along the ventral visual pathway. The present study revealed that this mechanism is not that simple: 1) different circuit mechanisms for foveal and peripheral visual fields may be situated between V4 and area TE, and 2) spatial attention dynamically changes the shape of RFs. NEW & NOTEWORTHY Receptive fields (RFs) of neurons increase progressively along the ventral visual pathway so that an RF at the final stage, area TE, covers a large area of the visual field. We explored the mechanism and suggest the involvement of parallel circuit mechanisms between V4 and TE for foveal and peripheral parts of the visual field. We also found a dynamic component of RF shape formation through attentional modulation of responses in a location-dependent manner. Copyright © 2017 the American Physiological Society.
Brown, Rachel M; Palmer, Caroline
In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.
Full Text Available With so much attention on the issue of voice in democratic theory, the inverse question of how people come to listen remains a marginal one. Recent scholarship in affect and neuroscience reveals that cognitive and verbal strategies, while privileged in democratic politics, are often insufficient to cultivate the receptivity that constitutes the most basic premise of democratic encounters. This article draws on this scholarship and a recent case of forum theatre to examine the conditions of receptivity and responsiveness, and identify specific strategies that foster such conditions. It argues that the forms of encounter most effective in cultivating receptivity are those that move us via affective intensity within pointedly mediated contexts. It is this constellation of strategies—this strange marriage of immersion and mediation—that enabled this performance to surface latent memory, affect and bias, unsettle entrenched patterns of thought and behaviour, and provide the conditions for revisability. This case makes clear that to lie beyond the domain of cognitive and verbal processes is not to lie beyond potential intervention, and offers insight to how such receptivity might be achieved in political processes more broadly.
van Besouw, J.; van Dongen, J.; Maas, A.; Schatz, H.
This article reviews the early academic and public reception of Albert Einstein's theory of relativity in the Netherlands, particularly after Arthur Eddington's eclipse experiments of 1919. Initially, not much attention was given to relativity, as it did not seem an improvement over Hendrik A.
Full Text Available The present study investigates (i) English as a Foreign Language (EFL) learners’ receptive collocational knowledge growth in relation to their linguistic proficiency level; (ii) how much receptive collocational knowledge is acquired as proficiency develops; and (iii) the extent to which receptive knowledge of collocations of EFL learners varies across word frequency bands. A proficiency measure and a collocation test were administered to English majors at the University of Burundi. Results of the study suggest that receptive collocational competence develops alongside EFL learners’ linguistic proficiency, which lends empirical support to Gyllstad (2007, 2009) and Author (2011), among others, who reported similar findings. Furthermore, EFL learners’ collocation growth seems to be quantifiable, wherein both linguistic proficiency level and word frequency play a crucial role. While more gains in terms of collocations that EFL learners could potentially add as a result of a change in proficiency are found at lower levels of proficiency, collocations of words from more frequent word bands seem to be mastered first, and more gains are found at more frequent word bands. These results confirm earlier findings on the non-linear nature of vocabulary growth (cf. Meara 1996) and the fundamental role played by frequency in word knowledge for vocabulary in general (Nation 1983, 1990, Nation and Beglar 2007), which are extended here to collocation knowledge.
Full Text Available This paper reviews the public reception of the Research Assessment Exercise 1996 (RAE) from its announcement in December 1996 to the decline of discussion at the end of May 1997. A model for the diffusion of the RAE is established which distinguishes extra-communal (or exoteric) from intra-communal (or esoteric) media. The different characteristics of each medium and the changing nature of the discussion over time are considered. Different themes are distinguished in the public reception of the RAE: the spatial distribution of research; the organisation of universities; disciplinary differences in understanding; a perceived conflict between research and teaching; the development of a culture of accountability; and analogies with the organisation of professional football. In conclusion, it is suggested that the RAE and its effects can be more fully considered from the perspective of scholarly communication and understandings of the development of knowledge than they have been by previous contributions in information science, which have concentrated on the possibility of more efficient implementation of existing processes. A fundamental responsibility for funding councils is also identified: to promote the overall health of university education and research, while establishing meaningful differentiations between units.
Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich
Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system has progressed at a considerably slower pace compared to other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between fMRI technique and audition. A well known difficulty arises from the high intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli both from a perceptual point of view and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by the auditory research. The novel methodological approaches can make the fMRI exploration of auditory processing much easier and more reliable, and thus may permit filling the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated in the auditory gestalt are beginning to be understood.
Oryadi Zanjani, Mohammad Majid; Hasanzadeh, Saeid; Rahgozar, Mehdi; Shemshadi, Hashem; Purdy, Suzanne C; Mahmudi Bakhtiari, Behrooz; Vahab, Maryam
Since the introduction of cochlear implantation, researchers have considered children's communication and educational success before and after implantation. Therefore, the present study aimed to compare auditory, speech, and language development scores following one-sided cochlear implantation between two groups of prelingual deaf children educated through either auditory-only (unisensory) or auditory-visual (bisensory) modes. A randomized controlled trial with a single-factor experimental design was used. The study was conducted in the Instruction and Rehabilitation Private Centre of Hearing Impaired Children and their Family, called Soroosh, in Shiraz, Iran. We assessed 30 Persian deaf children for eligibility, and 22 children qualified to enter the study. They were aged between 27 and 66 months old and had been implanted between the ages of 15 and 63 months. The sample of 22 children was randomly assigned to two groups: auditory-only mode and auditory-visual mode; 11 participants in each group were analyzed. In both groups, the development of auditory perception, receptive language, expressive language, speech, and speech intelligibility was assessed pre- and post-intervention by means of instruments that were validated and standardized in the Persian population. No significant differences were found between the two groups. The children with cochlear implants who had been instructed using either the auditory-only or auditory-visual modes acquired auditory, receptive language, expressive language, and speech skills at the same rate. Overall, spoken language developed significantly in both the unisensory group and the bisensory group. Thus, both the auditory-only mode and the auditory-visual mode were effective. Therefore, it is not essential to limit access to the visual modality and to rely solely on the auditory modality when instructing hearing, language, and speech in children with cochlear implants who are exposed to spoken language both at home and at school.
interaural level and interaural envelope timing (weak cues for left-right direction). This work was published in Acta Acustica united with Acustica 2005; 91:967-9. Durlach NI, Mason CR, Gallun FJ, Shinn-Cunningham BG, Colburn HS, and Kidd G Jr. Informational masking for
Yeatman, Jason D.; Ben-Shachar, Michal; Glover, Gary H.; Feldman, Heidi M.
The purpose of this study was to explore changes in activation of the cortical network that serves auditory sentence comprehension in children in response to increasing demands of complex sentences. A further goal was to study how individual differences in children's receptive language abilities are associated with such changes in cortical…
This study investigated the relationship between receptive and productive vocabulary size. The experimental design expanded upon earlier methodologies by using equivalent receptive and productive test formats with different receptive and productive target words to provide more accurate results. Translation tests were scored at two levels of…
Workshop on experiences with and the use of activating methods in teaching in lecture halls and with large classes. Which methods have worked well and which poorly? What considerations should one make?
Eric Olivier Boyer
Full Text Available Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in the acoustic cues due to changes in head orientation in order to process online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.
Full Text Available Accurate auditory localization relies on neural computations based on spatial cues present in the sound waves at each ear. The values of these cues depend on the size, shape, and separation of the two ears and can therefore vary from one individual to another. As with other perceptual skills, the neural circuits involved in spatial hearing are shaped by experience during development and retain some capacity for plasticity in later life. However, the factors that enable and promote plasticity of auditory localization in the adult brain are unknown. Here we show that mature ferrets can rapidly relearn to localize sounds after having their spatial cues altered by reversibly occluding one ear, but only if they are trained to use these cues in a behaviorally relevant task, with greater and more rapid improvement occurring with more frequent training. We also found that auditory adaptation is possible in the absence of vision or error feedback. Finally, we show that this process involves a shift in sensitivity away from the abnormal auditory spatial cues to other cues that are less affected by the earplug. The mature auditory system is therefore capable of adapting to abnormal spatial information by reweighting different localization cues. These results suggest that training should facilitate acclimatization to hearing aids in the hearing impaired.
Tosca, Susana; Klastrup, Lisbeth
Building upon ten years of empirical work, this paper reflects on how to study increasingly complex user engagement with transmedial worlds. We examine our own analytical evolution from an initial aesthetic orientation to our current effort to incorporate the user's own perspective through qualitative and quantitative studies. We argue that mapping user experience requires a sophisticated and holistic analytical approach, particularly due to the popularity of social media platforms. We conclude the article by developing the concept of "networked reception" to characterize new kinds of transmedial world experience afforded by social media, which allow users to distribute and communicate not only the content of media texts but also their own experience and reception of transmedial world “texts”.
This paper draws on two reception studies. One focuses on an American medical drama which respondents perceived as entertainment but also as a reliable source of information from which they collected medical and social data by using emotional and ludic strategies. The second compares parallel illness narratives in a soap opera and a documentary. Soap operas were described by informants as good pedagogic tools because they attracted large audiences and promoted identification and repetition which enhance learning. On the other hand, they criticised the documentary for being incomplete and artificial. The conclusion argues that viewers are media-literate, astute and insightful. They produce sophisticated, subtle interpretations which cannot be predicted by content analyses of programmes alone. More reception research is therefore needed, particularly since television is increasingly omnipresent and provides a considerable portion of the public's medical knowledge.
Jasiewicz, Marcin; Powałka, Bartosz
This paper addresses machining stability prediction for the dynamic "lathe - workpiece" system evaluated using the receptance coupling method. Dynamic properties of the lathe components (the spindle and the tailstock) are assumed to be constant and can be determined experimentally from the results of an impact test. The variable element of the "machine tool - holder - workpiece" system is therefore the machined part, which can easily be modelled analytically. Receptance coupling enables a synthesis of the experimental (spindle, tailstock) and analytical (machined part) models, so impact testing of the entire system becomes unnecessary. The paper presents the methodology for synthesising the analytical and experimental models, the evaluation of the stability lobes, and an experimental validation procedure involving both the determination of the dynamic properties of the system and cutting tests. Finally, the experimental verification results are presented and discussed.
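The receptance coupling idea in the abstract above can be illustrated numerically. The sketch below is a minimal, hypothetical example (not the authors' implementation): two single-DOF substructure receptances, one standing in for the measured spindle/tailstock and one for the analytically modelled workpiece, are rigidly coupled by adding their dynamic stiffnesses (inverse receptances). All parameter values are illustrative.

```python
import numpy as np

def sdof_receptance(m, c, k, w):
    """Receptance (displacement/force FRF) of a single-DOF oscillator."""
    return 1.0 / (k - m * w**2 + 1j * c * w)

def couple_rigid(h_a, h_b):
    """Rigidly couple two substructure receptances at a shared coordinate.
    Dynamic stiffnesses (inverse receptances) add at the joint."""
    return 1.0 / (1.0 / h_a + 1.0 / h_b)

w = np.linspace(1.0, 200.0, 400)                   # frequency grid, rad/s
h_spindle = sdof_receptance(2.0, 8.0, 4.0e4, w)    # "measured" substructure
h_part    = sdof_receptance(0.5, 2.0, 1.0e4, w)    # analytical workpiece model
h_asm = couple_rigid(h_spindle, h_part)

# Sanity check: coupling two SDOFs at one coordinate is equivalent to a
# single SDOF with summed mass, damping, and stiffness.
h_direct = sdof_receptance(2.5, 10.0, 5.0e4, w)
assert np.allclose(h_asm, h_direct)
```

Rigid coupling at one shared coordinate makes the check exact; a real receptance coupling analysis works with full FRF matrices at the joint coordinates, but the inversion pattern is the same.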
…by Walter Benjamin and Siegfried Kracauer, Berlin intellectuals from the interwar period, should be mentioned, too, along with Georges Perec and Michel de Certeau from Paris of the 1970s. They all are eminent representatives of a general intellectual concern for spatial matters – a concern that Michel… The notion of aesthetics (taken in the original signification of aisthesis: sensory perception) helped to map the relations between city, human experience, and various forms of art and culture. Delving into our simultaneously optical and tactile reception of space (a dialectics pointed out by Walter Benjamin), studies in urbanity and aesthetics may highlight multisensory everyday practices that pass unnoticed in the current era of visual domination. A humanistic approach to urban and spatial cultures should also learn from German sociologist and philosopher Georg Simmel’s hypothesis of a modern need…
Jensen, Frida Videbæk; Rozé, Caroline
This project is a reception analysis of how a target group receives online movie trailers. It utilized a focus group as its means of research, with participants from the dormitories Kollibrien and Korallen. We concluded that the group we investigated was not interested in online movie trailers as anything other than a preview of a movie. They preferred to experience movie trailers in the cinema. Their opinion of specific movie trailers was also determined by whether or not they identified with the...
This master’s degree thesis introduces the historical background of the Czech nation and the cultural contacts between Slovenes and Czechs. It outlines the development of Czech young adult literature. The thesis researches the reception of Czech young adult literature qualitatively and quantitatively. The qualitative research established how many young adult books have been translated from Czech into the Slovene language, how many in different periods, and which lit...
In my thesis I explore the potential of non-visual components of sculptural artworks. For that purpose I define reception and perception. I introduce the senses and sculptural artworks of the 20th century that address each specific sense. I examine the reasons for and consequences of the favoured treatment of vision and the neglect of the other senses, as well as the situation of people with blindness and visual impairment in today's visual culture. I committed my own artistic expression to create sculptural artwor...
Bjerre, Thomas Ærvold
The essay covers the critical reception of Mississippi writer Lewis Nordan from his debut in 1983 to the boost in scholarly attention in the new millennium. The essay covers newspaper reviews but pays particular attention to the many academic essays that have placed Nordan as a writer in the southern literary tradition and have highlighted themes such as magical realism, the grotesque, race relations, music, and gender.
Loveall, Susan J; Channell, Marie Moore; Phillips, B Allyson; Abbeduto, Leonard; Conners, Frances A
The present study is an in-depth examination of receptive vocabulary in individuals with Down syndrome (DS) in comparison to control groups of individuals of similar nonverbal ability with typical development (TD) and non-specific etiology intellectual disability (ID). Verb knowledge was of particular interest, as it is known to be a predictor of later syntactic development. Fifty participants with DS, aged 10-21 years, 29 participants with ID, 10-21 years, and 29 participants with TD, 4-9 years, completed measures of receptive vocabulary (PPVT-4), nonverbal ability (Leiter-R), and phonological memory (Nonword Repetition subtest of the CTOPP). Groups were compared on percentage correct of noun, verb and attribute items on the PPVT-4. Results revealed that on verb items, the participants with ID performed significantly better than both participants with DS and TD, even when overall receptive vocabulary ability and phonological memory were held constant. Groups with DS and TD showed the same pattern of lexical knowledge, performing better on nouns than both verbs and attributes. In contrast, the group with ID performed similarly on nouns and verbs, but worse on attributes. Copyright © 2016 Elsevier Ltd. All rights reserved.
Mottershead, John E.; Kyprianou, Andreas; Ouyang, Huajiang
The inverse problem of assigning natural frequencies and antiresonances by a modification to the stiffness, mass and damping of a structure is addressed. Very simple modifications such as the addition of masses and grounded springs can be easily accommodated and require the measurement of translational receptances at the connection coordinates. Realistic modifications of practical usefulness, such as a modification by an added beam, require the measurement of rotational as well as translational receptances. Such data are difficult to obtain because of the practical problems of applying a pure moment. One method, the so-called 'T-block' approach, has received considerable attention in the literature, but the accompanying problem of ill-conditioning has not been fully addressed until now. The T-block is attached to the structure at the modification point, so that a force applied to the T-block generates a moment together with a force at the connection point between the T-block and the parent structure. Forces and linear displacements measured on the T-block, together with a mass and stiffness model of the T-block itself, allow the problem to be cast as a special case of excitation by multiple inputs. The resulting equations are generally ill-conditioned, but can be regularized by using a small number of independent measurements. The methodology and signal processing techniques required to estimate the rotational receptances are described. An experimental example is used to demonstrate the practical application of the method.
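The "special case of excitation by multiple inputs" described above amounts to a regularized least-squares problem. The example below is a hypothetical illustration (names and values are invented, and the T-block kinematics are abstracted away): each excitation applies a known force/moment pair, and the translational and rotational receptances at one frequency are recovered from the measured responses, with Tikhonov regularization standing in for the regularization the abstract mentions.

```python
import numpy as np

def estimate_receptances(F, X, lam=1e-3):
    """Estimate translational and rotational receptances from multiple
    excitations. Each row of F holds a (force, moment) pair applied through
    the T-block; X holds the measured complex responses. The normal
    equations are regularized (Tikhonov) to tame ill-conditioning."""
    A = F.conj().T @ F + lam * np.eye(F.shape[1])
    return np.linalg.solve(A, F.conj().T @ X)

# Synthetic check: recover known receptances from noisy multi-input data.
rng = np.random.default_rng(1)
h_true = np.array([2.0 + 1.0j, -0.5 + 0.3j])   # [translational, rotational]
F = rng.normal(size=(20, 2)) + 1j * rng.normal(size=(20, 2))
X = F @ h_true + 1e-6 * rng.normal(size=20)
h_est = estimate_receptances(F, X)
assert np.allclose(h_est, h_true, atol=1e-2)
```

With well-spread independent excitations the normal-equation matrix is well conditioned and the regularization bias is negligible; the point of the regularizer is the realistic case where the force/moment rows are nearly collinear.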
Mary-Anne Plaatjies van Huffel
Full Text Available This article attends to ecumenicity as the second reformation. The ecumenical organisations and agencies hugely influenced the theological praxis and reflection of the church during the past century. The First World Council of Churches (WCC) Assembly in Amsterdam, the Netherlands, has been described as the most significant event in church history since the Reformation. During the past decade, we saw the emergence of two initiatives that are going to influence ecumenical theology and practice in future, namely the Receptive Ecumenism and Catholic Learning research project, based in Durham, United Kingdom, and the International Theological Colloquium for Transformative Ecumenism of the WCC. Both initiatives constitute a fresh approach in methodology to ecumenical theology and practice. Attention will be given in this article to conciliar ecumenism, receptive ecumenism, transformative ecumenism and its implications for the development of an African transformative receptive ecumenism. In doing so, we should take cognisance of what Küng says about a confessionalist ghetto mentality: ‘We must avoid a confessionalistic ghetto mentality. Instead we should espouse an ecumenical vision that takes into consideration the world religions as well as contemporary ideologies: as much tolerance as possible toward those things outside the Church, toward the religious in general, and the human in general, and the development of that which is specifically Christian belong together!’
Fan, Bin; Kong, Qingqun; Trzcinski, Tomasz; Wang, Zhiheng; Pan, Chunhong; Fua, Pascal
Feature description for local image patches is widely used in computer vision. While the conventional way to design a local descriptor is based on expert experience and knowledge, learning-based methods for designing local descriptors have become more and more popular because of their good performance and data-driven property. This paper proposes a novel data-driven method for designing binary feature descriptors, which we call the receptive fields descriptor (RFD). Technically, RFD is constructed by thresholding responses of a set of receptive fields, which are selected from a large number of candidates according to their distinctiveness and correlations in a greedy way. Using two different kinds of receptive fields (namely rectangular pooling areas and Gaussian pooling areas) for selection, we obtain two binary descriptors, RFDR and RFDG, accordingly. Image matching experiments on the well-known patch data set and Oxford data set demonstrate that RFD significantly outperforms the state-of-the-art binary descriptors and is comparable with the best float-valued descriptors at a fraction of the processing time. Finally, experiments on object recognition tasks confirm that both RFDR and RFDG successfully bridge the performance gap between binary descriptors and their floating-point competitors.
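The greedy selection described for RFD can be sketched as follows. This is a simplified, hypothetical reading of the procedure (the paper's actual distinctiveness and correlation criteria may differ): candidate fields are binarized by thresholding their pooled responses, ranked by bit variance as a distinctiveness proxy, and accepted only if their bit is weakly correlated with every bit already chosen.

```python
import numpy as np

def select_receptive_fields(responses, n_bits, corr_max=0.5):
    """Greedily pick candidate receptive fields for a binary descriptor.
    responses: (n_patches, n_candidates) pooled responses.
    A candidate is kept if its thresholded bit is distinctive (high
    variance over patches) and not too correlated with bits already
    selected."""
    bits = (responses > np.median(responses, axis=0)).astype(float)
    order = np.argsort(-bits.var(axis=0))    # most distinctive first
    chosen = []
    for j in order:
        ok = all(abs(np.corrcoef(bits[:, j], bits[:, k])[0, 1]) < corr_max
                 for k in chosen)
        if ok:
            chosen.append(j)
        if len(chosen) == n_bits:
            break
    return chosen

rng = np.random.default_rng(0)
resp = rng.normal(size=(200, 50))            # toy patch responses
fields = select_receptive_fields(resp, n_bits=16)
assert len(fields) == 16
```

Describing a patch then reduces to evaluating the chosen fields and thresholding, which yields a compact bit string comparable with Hamming distance.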
Zhang, Lei; An, Xiao-Peng; Liu, Xiao-Rui; Fu, Ming-Zhe; Han, Peng; Peng, Jia-Yin; Hou, Jing-Xing; Zhou, Zhan-Qin; Cao, Bin-Yun; Song, Yu-Xuan
Endometrium receptivity is essential for successful embryo implantation in mammals. However, the lack of genetic information remains an obstacle to understanding the mechanisms underlying the development of a receptive endometrium from the pre-receptive phase in dairy goats. In this study, more than 4 billion high-quality reads were generated and de novo assembled into 102,441 unigenes; these unigenes were annotated using published databases. A total of 3,255 unigenes that were differentially expressed (DEGs) between the PE and RE were discovered in this study (P-values < 0.05). In addition, 76,729-77,102 putative SNPs and 12,837 SSRs were discovered in this study. Bioinformatics analysis of the DEGs revealed a number of biological processes and pathways that are potentially involved in the establishment of the RE, notably including the GO terms proteolysis, apoptosis, and cell adhesion and the KEGG pathways Cell cycle and extracellular matrix (ECM)-receptor interaction. We speculated that ADCY8, VCAN, SPOCK1, THBS1, and THBS2 may play important roles in the development of endometrial receptivity. The de novo assembly provided a good starting point and will serve as a valuable resource for further investigations into endometrium receptivity in dairy goats and future studies on the genomes of goats and other related mammals.
Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji
A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048
Christopher I Petkov
Full Text Available Anatomical studies propose that the primate auditory cortex contains more fields than have actually been functionally confirmed or described. Spatially resolved functional magnetic resonance imaging (fMRI) with carefully designed acoustical stimulation could be ideally suited to extend our understanding of the processing within these fields. However, after numerous experiments in humans, many auditory fields remain poorly characterized. Imaging the macaque monkey is of particular interest as this species has a richer set of anatomical and neurophysiological data to clarify the source of the imaged activity. We functionally mapped the auditory cortex of behaving and of anesthetized macaque monkeys with high-resolution fMRI. By optimizing our imaging and stimulation procedures, we obtained robust activity throughout auditory cortex using tonal and band-passed noise sounds. Then, by varying the frequency content of the sounds, spatially specific activity patterns were observed over this region. As a result, the activity patterns could be assigned to many auditory cortical fields, including those whose functional properties were previously undescribed. The results provide an extensive functional tessellation of the macaque auditory cortex and suggest that 11 fields contain neurons tuned for the frequency of sounds. This study provides functional support for a model where three fields in primary auditory cortex are surrounded by eight neighboring "belt" fields in non-primary auditory cortex. The findings can now guide neurophysiological recordings in the monkey to expand our understanding of the processing within these fields. Additionally, this work will improve fMRI investigations of the human auditory cortex.
Buchholz, Jörg; Favrot, Sylvain Emmanuel
…This system provides a flexible research platform for conducting auditory experiments with normal-hearing, hearing-impaired, and aided hearing-impaired listeners in a fully controlled and realistic environment. This includes measures of basic auditory function (e.g., signal detection, distance perception) and measures of speech intelligibility. A battery of objective tests (e.g., reverberation time, clarity, interaural correlation coefficient) and subjective tests (e.g., speech reception thresholds) is presented that demonstrates the applicability of the LoRA system.
Hall, M.; Smeele, P.M.T.; Kuhl, P.K.
The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual
Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee
... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...
Bennequin, D; Berthoz, A
We present a set of formulas for the receptive fields of the vestibular neurons that are motivated by Galilean invariance. We show that these formulas explain non-trivial data in neurophysiology, and suggest new hypotheses to be tested in dynamic 3D conditions. Moreover, our model offers a way for neuronal computing with 3D displacements, a computation reputed to be hard, which underlies the vestibular reflexes. This computation is presented in a Bayesian framework. The basis of the model is the necessity of living bodies to work invariantly in space-time, allied to the necessary discreteness of neuronal transmission.
Slabu, Lavinia Mihaela
Functional magnetic resonance imaging (fMRI) is a powerful technique because of its high spatial resolution and noninvasiveness. Applications of fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system
Full Text Available Receptive fields of sensory neurons are considered to be dynamic and depend on the stimulus history. In the auditory system, evidence of dynamic frequency-receptive fields has been found following stimulus-specific adaptation (SSA). However, the underlying mechanism and circuitry of SSA have not been fully elucidated. Here, we studied how frequency-receptive fields of neurons in rat inferior colliculus (IC) changed when exposed to a biased tone sequence. A pure tone with one specific frequency (the adaptor) was presented markedly more often than others. The adapted tuning was compared with the original tuning measured with an unbiased sequence. We found inhomogeneous changes in frequency tuning in IC, exhibiting a center-surround pattern with respect to the neuron’s best frequency. Central adaptors elicited strong suppressive and repulsive changes, while flank adaptors induced facilitative and attractive changes. Moreover, we proposed a two-layer model of the underlying network, which not only reproduced the adaptive changes in the receptive fields but also predicted novelty responses to oddball sequences. These results suggest that frequency-specific adaptation in the auditory midbrain can be accounted for by an adapted frequency channel and its lateral spreading of adaptation, which sheds light on the organization of the underlying circuitry.
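The proposed two-layer account, an adapted frequency channel plus lateral spread of adaptation, can be caricatured in a few lines. The sketch below is not the authors' model; the divisive form of adaptation and all parameter values are illustrative assumptions. A bank of Gaussian frequency channels feeds a summing output neuron, and each presented tone deposits adaptation that spreads to neighboring channels through a Gaussian kernel.

```python
import numpy as np

def respond(tone_idx, adapt, tuning_sigma=2.0, n_ch=41):
    """First layer: Gaussian frequency channels with divisive adaptation."""
    ch = np.arange(n_ch)
    drive = np.exp(-0.5 * ((ch - tone_idx) / tuning_sigma) ** 2)
    return drive / (1.0 + adapt)

def run_sequence(tones, n_ch=41, gain=0.3, decay=0.95, spread_sigma=3.0):
    """Present a tone sequence. Adaptation builds in the driven channels
    and spreads laterally via a Gaussian kernel; the second-layer neuron
    sums all channels (uniform weights here)."""
    ch = np.arange(n_ch)
    kernel = np.exp(-0.5 * (np.subtract.outer(ch, ch) / spread_sigma) ** 2)
    adapt = np.zeros(n_ch)
    out = []
    for t in tones:
        r = respond(t, adapt, n_ch=n_ch)
        out.append(r.sum())                       # second-layer response
        adapt = decay * adapt + gain * kernel @ r  # lateral spread of adaptation
    return np.array(out)

# A biased sequence repeating one adaptor suppresses the response to that
# frequency relative to its unadapted level.
biased = run_sequence([20] * 30 + [20])
assert biased[-1] < biased[0]
```

Probing the adapted bank with tones at other frequencies reproduces the center-surround flavor of the result: channels near the adaptor are most suppressed, while distant channels are barely affected.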
Loo, Jenny Hooi Yin; Rosen, Stuart; Bamiou, Doris-Eva
Children with auditory processing disorder (APD) typically present with "listening difficulties," including problems understanding speech in noisy environments. The authors examined, in a group of such children, whether a 12-week computer-based auditory training program with speech material improved speech-in-noise test performance and functional listening skills as assessed by parental and teacher listening and communication questionnaires. The authors hypothesized that after the intervention, (1) trained children would show greater improvements in speech-in-noise perception than untrained controls; (2) this improvement would correlate with improvements in observer-rated behaviors; and (3) the improvement would be maintained for at least 3 months after the end of training. This was a prospective randomized controlled trial of 39 children with normal nonverbal intelligence, ages 7 to 11 years, all diagnosed with APD. This diagnosis required a normal pure-tone audiogram and deficits in at least two clinical auditory processing tests. The APD children were randomly assigned to (1) a control group that received only the current standard treatment for children diagnosed with APD, employing various listening/educational strategies at school (N = 19); or (2) an intervention group that undertook a 3-month, 5-day/week computer-based auditory training program at home, consisting of a wide variety of speech-based listening tasks with competing sounds, in addition to the current standard treatment. All 39 children were assessed for language and cognitive skills at baseline and on three outcome measures at baseline and immediately postintervention. Outcome measures were repeated 3 months postintervention in the intervention group only, to assess the sustainability of treatment effects. The outcome measures were (1) the mean speech reception threshold obtained from the four subtests of the Listening in Spatialized Noise test that assesses sentence perception in
…binary flutter in a wind-tunnel aerofoil rig. The aerofoil and its suspension were designed as part of the project. The advantage of the receptance method over… …and determination of control gains. This report describes the theory of the method of receptances and its implementation on a wind-tunnel aerofoil…
Full Text Available Adults integrate multisensory information optimally (e.g. Ernst & Banks, 2002), while children are not able to integrate multisensory visual-haptic cues until 8-10 years of age (e.g. Gori, Del Viva, Sandini, & Burr, 2008). Before that age, strong unisensory dominance is present for size and orientation visual-haptic judgments, possibly reflecting a process of cross-sensory calibration between modalities. It is widely recognized that audition dominates time perception, while vision dominates space perception. If the cross-sensory calibration process is necessary for development, then the auditory modality should calibrate vision in a bimodal temporal task, and the visual modality should calibrate audition in a bimodal spatial task. Here we measured visual-auditory integration in both the temporal and the spatial domains, reproducing for the spatial task a child-friendly version of the ventriloquist stimuli used by Alais and Burr (2004) and for the temporal task a child-friendly version of the stimulus used by Burr, Banks and Morrone (2009). Unimodal and bimodal (conflictual or not conflictual) audio-visual thresholds and PSEs were measured and compared with the Bayesian predictions. In the temporal domain, we found that both in children and adults, audition dominates the bimodal visuo-auditory task in both perceived time and precision thresholds. Conversely, in the visual-auditory spatial task, children younger than 12 years of age show clear visual dominance (on PSEs) and bimodal thresholds higher than the Bayesian prediction. Only in the adult group do bimodal thresholds become optimal. In agreement with previous studies, our results suggest that visual-auditory adult-like behaviour also develops late. Interestingly, the visual dominance for space and the auditory dominance for time that we found might suggest a cross-sensory comparison of vision in a spatial visuo-audio task and a cross-sensory comparison of audition in a temporal visuo-audio task.
Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L
Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning. (c) 2015 APA, all rights reserved.
Kaya, Emine Merve; Elhilali, Mounya
Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information-a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape; with relevance being dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listen to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes.This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.
Mann, Philip H.; Suiter, Patricia A.
This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…
Higgins, Nathan C.; Storace, Douglas A.; Escabí, Monty A.
Accurate orientation to sound under challenging conditions requires auditory cortex, but it is unclear how spatial attributes of the auditory scene are represented at this level. Current organization schemes follow a functional division whereby dorsal and ventral auditory cortices specialize to encode spatial and object features of a sound source, respectively. However, few studies have examined spatial cue sensitivities in ventral cortices to support or reject such schemes. Here, Fourier optical imaging was used to quantify best frequency responses and corresponding gradient organization in primary (A1), anterior, posterior, ventral (VAF), and suprarhinal (SRAF) auditory fields of the rat. Spike rate sensitivities to binaural interaural level difference (ILD) and average binaural level cues were probed in A1 and two ventral cortices, VAF and SRAF. Continuous distributions of best ILDs and ILD tuning metrics were observed in all cortices, suggesting that this horizontal position cue is well covered. VAF and caudal SRAF in the right cerebral hemisphere responded maximally to midline horizontal position cues, whereas A1 and rostral SRAF responded maximally to ILD cues favoring more eccentric positions in the contralateral sound hemifield. SRAF had the highest incidence of binaural facilitation for ILD cues corresponding to midline positions, supporting current theories that auditory cortices have specialized and hierarchical functional organization. PMID:20980610
Suresh, Vandana; Çiftçioğlu, Ulaş M; Wang, Xin; Lala, Brittany M; Ding, Kimberly R; Smith, William A; Sommer, Friedrich T; Hirsch, Judith A
Comparative physiological and anatomical studies have greatly advanced our understanding of sensory systems. Many lines of evidence show that the murine lateral geniculate nucleus (LGN) has unique attributes, compared with other species such as cat and monkey. For example, in rodent, thalamic receptive field structure is markedly diverse, and many cells are sensitive to stimulus orientation and direction. To explore shared and different strategies of synaptic integration across species, we made whole-cell recordings in vivo from the murine LGN during the presentation of visual stimuli, analyzed the results with different computational approaches, and compared our findings with those from cat. As for carnivores, murine cells with classical center-surround receptive fields had a "push-pull" structure of excitation and inhibition within a given On or Off subregion. These cells compose the largest single population in the murine LGN (∼40%), indicating that push-pull is key in the form vision pathway across species. For two cell types with overlapping On and Off responses, which recalled either W3 or suppressed-by-contrast ganglion cells in murine retina, inhibition took a different form and was most pronounced for spatially extensive stimuli. Other On-Off cells were selective for stimulus orientation and direction. In these cases, retinal inputs were tuned and, for oriented cells, the second-order subunit of the receptive field predicted the preferred angle. By contrast, suppression was not tuned and appeared to sharpen stimulus selectivity. Together, our results provide new perspectives on the role of excitation and inhibition in retinothalamic processing. We explored the murine lateral geniculate nucleus from a comparative physiological perspective. In cat, most retinal cells have center-surround receptive fields and push-pull excitation and inhibition, including neurons with the smallest (highest acuity) receptive fields. The same is true for thalamic relay cells
Full Text Available Information received from different sensory modalities profoundly influences human perception. For example, changes in the auditory flutter rate induce changes in the apparent flicker rate of a flashing light (Shipley, 1964). In the present study, we investigated whether auditory information would affect the perceived offset position of a moving object. In Experiment 1, a visual object moved toward the center of the computer screen and disappeared abruptly. A transient auditory signal was presented at different times relative to the moment when the object disappeared. The results showed that if the auditory signal was presented before the abrupt offset of the moving object, the perceived final position was shifted backward, implying that the perceived offset position was affected by the transient auditory information. In Experiment 2, we presented the transient auditory signal to either the left or the right ear. The results showed that the perceived offset shifted backward more strongly when the auditory signal was presented to the same side from which the moving object originated. In Experiment 3, we found that the perceived timing of the visual offset was not affected by the spatial relation between the auditory signal and the visual offset. The present results are interpreted as indicating that an auditory signal may influence the offset position of a moving object through both spatial and temporal processes.
Rohrer, Jonathan D.; Sauter, Disa; Scott, Sophie; Rossor, Martin N.; Warren, Jason D.
Introduction Prosody has been little studied in the primary progressive aphasias (PPAs), a group of neurodegenerative disorders presenting with progressive language impairment. Methods Here we conducted a systematic investigation of different dimensions of prosody processing (acoustic, linguistic and emotional) in a cohort of 19 patients with nonfluent PPA syndromes (11 with progressive nonfluent aphasia, PNFA; five with progressive logopenic/phonological aphasia, LPA; three with progranulin-associated aphasia, GRN-PPA) compared with a group of healthy older controls. Voxel-based morphometry (VBM) was used to identify neuroanatomical associations of prosodic functions. Results Broadly comparable receptive prosodic deficits were exhibited by the PNFA, LPA and GRN-PPA subgroups, for acoustic, linguistic and affective dimensions of prosodic analysis. Discrimination of prosodic contours was significantly more impaired than discrimination of simple acoustic cues, and discrimination of intonation was significantly more impaired than discrimination of stress at phrasal level. Recognition of vocal emotions was more impaired than recognition of facial expressions for the PPA cohort, and recognition of certain emotions (in particular, disgust and fear) was relatively more impaired than others (sadness, surprise). VBM revealed atrophy associated with acoustic and linguistic prosody impairments in a distributed cortical network including areas likely to be involved in perceptual analysis of vocalisations (posterior temporal and inferior parietal cortices) and working memory (fronto-parietal circuitry). Grey matter associations of emotional prosody processing were identified for negative emotions (disgust, fear, sadness) in a broadly overlapping network of frontal, temporal, limbic and parietal areas. Conclusions Taken together, the findings show that receptive prosody is impaired in nonfluent PPA syndromes, and suggest a generic early perceptual deficit of prosodic signal
Pickles, James O
This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external ear, middle ear, and cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.
Milner, Rafał; Rusiniak, Mateusz; Wolak, Tomasz; Piatkowska-Janko, Ewa; Naumczyk, Patrycja; Bogorodzki, Piotr; Senderski, Andrzej; Ganc, Małgorzata; Skarzyński, Henryk
Processing of auditory information in the central nervous system is based on a series of rapidly occurring neural processes that cannot be monitored separately using fMRI registration alone. Simultaneous recording of auditory evoked potentials, characterized by good temporal resolution, and functional magnetic resonance imaging, with its excellent spatial resolution, allows higher auditory functions to be studied with precision in both time and space. The aim was to implement the simultaneous AEP-fMRI recording method for the investigation of information processing at different levels of the central auditory system. Five healthy volunteers, aged 22-35 years, participated in the experiment. The study was performed using a high-field (3T) MR scanner from Siemens and a 64-channel Neuroscan electrophysiological system from Compumedics. Auditory evoked potentials generated by acoustic stimuli (standard and deviant tones) were registered using a modified odd-ball procedure. Functional magnetic resonance recordings were performed using a sparse acquisition paradigm. The results of the electrophysiological registrations were analyzed by determining the voltage distributions of the AEPs on the skull and modeling their intracerebral bioelectrical generators (dipoles). fMRI activations were determined on the basis of deviant-to-standard and standard-to-deviant functional contrasts. Results obtained from the electrophysiological studies were integrated with the functional outcomes. The morphology, amplitude, latency and voltage distribution of auditory evoked potentials (P1, N1, P2) to standard stimuli presented during simultaneous AEP-fMRI registrations were very similar to the responses obtained outside the scanner room. Significant fMRI activations to standard stimuli were found mainly in the auditory cortex. Activations in these regions corresponded with N1-wave dipoles modeled on the basis of auditory potentials generated by standard tones. Auditory evoked potentials to deviant stimuli were recorded only outside the MRI
Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.
The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671
Miconi, Thomas; VanRullen, Rufin
Visual attention has many effects on neural responses, producing complex changes in firing rates, as well as modifying the structure and size of receptive fields, both in topological and feature space. Several existing models of attention suggest that these effects arise from selective modulation of neural inputs. However, anatomical and physiological observations suggest that attentional modulation targets higher levels of the visual system (such as V4 or MT) rather than input areas (such as V1). Here we propose a simple mechanism that explains how a top-down attentional modulation, falling on higher visual areas, can produce the observed effects of attention on neural responses. Our model requires only the existence of modulatory feedback connections between areas, and short-range lateral inhibition within each area. Feedback connections redistribute the top-down modulation to lower areas, which in turn alters the inputs of other higher-area cells, including those that did not receive the initial modulation. This produces firing rate modulations and receptive field shifts. Simultaneously, short-range lateral inhibition between neighboring cells produce competitive effects that are automatically scaled to receptive field size in any given area. Our model reproduces the observed attentional effects on response rates (response gain, input gain, biased competition automatically scaled to receptive field size) and receptive field structure (shifts and resizing of receptive fields both spatially and in complex feature space), without modifying model parameters. Our model also makes the novel prediction that attentional effects on response curves should shift from response gain to contrast gain as the spatial focus of attention drifts away from the studied cell.
Buyl, Aafke; Housen, Alex
This study takes a new look at the topic of developmental stages in the second language (L2) acquisition of morphosyntax by analysing receptive learner data, a language mode that has hitherto received very little attention within this strand of research (for a recent and rare study, see Spinner, 2013). Looking at both the receptive and productive…
The present study explores academic vocabulary knowledge, operationalised through the Academic Word List, among first-year higher education students. Both receptive and productive knowledge and the proportion between the two are examined. Results show that while receptive knowledge is readily acquired by ...
To provide theoretical basis for artificial pollination in Lagerstroemia indica L., pollen viability and stigma receptivity were tested and the morphological change of stigma was observed. Pollen viability tested by in vitro culture, stigma receptivity examined by benzidine-H2O2 testing and fruit set estimated by field artificial ...
Glick, Thomas F.
The subfield of Darwin studies devoted to comparative reception coalesced around 1971 with the planning of a conference on the subject, at the University of Texas at Austin held in April 1972. The original focus was western Europe, Russia and the United States. Subsequently a spate of studies on the Italian reception added to the Eurocentric…
Academic Word List (AWL)) and students from the same class, who are thus of comparable linguistic ... Language (EFL) students, which it complements by making estimates of the ratio between receptive and ... students' receptive vocabulary size provides teachers with a gauge as to whether those students will be able to ...
The article examines two key concepts in research on policy borrowing and lending that are often used to explain why and how educational reforms travel across national boundaries: reception and translation. The studies on reception analyse the political, economic, and cultural reasons that account for the attractiveness of a reform from elsewhere.…
Lee, John Chi-Kin; Yin, Hong-Biao; Zhang, Zhong-Hua; Jin, Yu-Le
This study explores the relationships between teacher empowerment, teacher receptivity toward, and perceived outcomes of, a system-wide curriculum change, particularly national curriculum reform in basic education in China. The results of a survey of 1,646 teachers from six provinces indicate that teachers were positive in their receptivity and…
Skoe, Erika; Kraus, Nina
Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...
Ocklenburg, Sebastian; Hirnstein, Marco; Hausmann, Markus; Lewald, Jorg
Several studies have shown that handedness has an impact on visual spatial abilities. Here we investigated the effect of laterality on auditory space perception. Participants (33 right-handers, 20 left-handers) completed two tasks of sound localization. In a dark, anechoic, and sound-proof room, sound stimuli (broadband noise) were presented via…
Kizkin, Sibel; Karlidag, Rifat; Ozcan, Cemal; Ozisik, Handan Isin
Evoked potential studies have demonstrated that musicians have the ability to distinguish musical sounds preattentively and automatically at the temporal, spectral, and spatial levels in more detail. It is however not known whether there is a difference in the early processes of auditory data processing of musicians. The most emphasized and…
Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis
Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.
Weis, Tina; Brechmann, André; Puschmann, Sebastian; Thiel, Christiane M
Associative learning studies have shown that the anticipation of reward and punishment shapes the representation of sensory stimuli, which is further modulated by dopamine. Less is known about whether and how reward delivery activates sensory cortices and the role of dopamine at that time point of learning. We used an appetitive instrumental learning task in which participants had to learn that a specific class of frequency-modulated tones predicted a monetary reward following fast and correct responses in a succeeding reaction time task. These fMRI data were previously analyzed regarding the effect of reward anticipation, but here we focused on neural activity to the reward outcome relative to the reward expectation and tested whether such activation in the reward reception phase is modulated by L-DOPA. We analyzed neural responses at the time point of reward outcome under three different conditions: 1) when a reward was expected and received, 2) when a reward was expected but not received, and 3) when a reward was not expected and not received. Neural activity in auditory cortex was enhanced during feedback delivery either when an expected reward was received or when the expectation of obtaining no reward was correct. This differential neural activity in auditory cortex was only seen in subjects who learned the reward association and not under dopaminergic modulation. Our data provide evidence that auditory cortices are active at the time point of reward outcome. However, responses are not dependent on the reward itself but on whether the outcome confirmed the subject's expectations.
Full Text Available The brain is able to maintain a stable perception although the visual stimuli vary substantially on the retina due to geometric transformations and lighting variations in the environment. This paper presents a theory for achieving basic invariance properties already at the level of receptive fields. Specifically, the presented framework comprises (i) local scaling transformations caused by objects of different size and at different distances to the observer, (ii) locally linearized image deformations caused by variations in the viewing direction in relation to the object, (iii) locally linearized relative motions between the object and the observer, and (iv) local multiplicative intensity transformations caused by illumination variations. The receptive field model can be derived by necessity from symmetry properties of the environment and leads to predictions about receptive field profiles in good agreement with receptive field profiles measured by cell recordings in mammalian vision. Indeed, the receptive field profiles in the retina, LGN and V1 are close to ideal with respect to what is motivated by the idealized requirements. By complementing receptive field measurements with selection mechanisms over the parameters in the receptive field families, it is shown how true invariance of receptive field responses can be obtained under scaling transformations, affine transformations and Galilean transformations. Thereby, the framework provides a mathematically well-founded and biologically plausible model for how basic invariance properties can be achieved already at the level of receptive fields and support invariant recognition of objects and events under variations in viewpoint, retinal size, object motion and illumination. The theory can explain the different shapes of receptive field profiles found in biological vision, which are tuned to different sizes and orientations in the image domain as well as to different image velocities in space-time, from a
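The scale-invariance property described above can be illustrated numerically. For a scale-normalized Gaussian first derivative (normalization power gamma = 1) applied to a sine grating sin(omega*x), Gaussian smoothing gives a closed-form response amplitude sigma * omega * exp(-sigma^2 * omega^2 / 2), and its maximum over scales is the same for every grating frequency. This is an illustrative sketch, not code from the paper:

```python
import numpy as np

def norm_deriv_amplitude(sigma, omega):
    """Amplitude of the scale-normalized first derivative (gamma = 1)
    of a Gaussian-smoothed sinusoid sin(omega * x):
    sigma * omega * exp(-sigma**2 * omega**2 / 2)."""
    return sigma * omega * np.exp(-sigma**2 * omega**2 / 2)

sigmas = np.logspace(-2, 2, 2001)  # scan receptive-field scales

# Maximum response over scales for two gratings differing 4x in size.
peak_fine = norm_deriv_amplitude(sigmas, omega=4.0).max()
peak_coarse = norm_deriv_amplitude(sigmas, omega=1.0).max()

# The peak over scales is identical (= exp(-1/2)) for both frequencies:
# selecting the maximizing scale yields a scale-invariant response.
assert abs(peak_fine - peak_coarse) < 1e-3
```

This is the mechanism behind "selection mechanisms over the parameters in the receptive field families": maximizing over the scale parameter removes the dependence on object size.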
Joshua G. W. Bernstein
Full Text Available The audiogram predicts <30% of the variance in speech-reception thresholds (SRTs) for hearing-impaired (HI) listeners fitted with individualized frequency-dependent gain. The remaining variance could reflect suprathreshold distortion in the auditory pathways or nonauditory factors such as cognitive processing. The relationship between a measure of suprathreshold auditory function—spectrotemporal modulation (STM) sensitivity—and SRTs in noise was examined for 154 HI listeners fitted with individualized frequency-specific gain. SRTs were measured for 65-dB SPL sentences presented in speech-weighted noise or four-talker babble to an individually programmed master hearing aid, with the output of an ear-simulating coupler played through insert earphones. Modulation-depth detection thresholds were measured over headphones for STM (2 cycles/octave density, 4-Hz rate) applied to an 85-dB SPL, 2-kHz lowpass-filtered pink-noise carrier. SRTs were correlated with both the high-frequency (2–6 kHz) pure-tone average (HFA; R2 = .31) and STM sensitivity (R2 = .28). Combined with the HFA, STM sensitivity significantly improved the SRT prediction (ΔR2 = .13; total R2 = .44). The remaining unaccounted variance might be attributable to variability in cognitive function and other dimensions of suprathreshold distortion. STM sensitivity was most critical in predicting SRTs for listeners <65 years old or with HFA <53 dB HL. Results are discussed in the context of previous work suggesting that STM sensitivity for low rates and low-frequency carriers is impaired by a reduced ability to use temporal fine-structure information to detect dynamic spectra. STM detection is a fast test of suprathreshold auditory function for frequencies <2 kHz that complements the HFA to predict variability in hearing-aid outcomes for speech perception in noise.
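The incremental-variance logic above (HFA alone explains R2 = .31; adding STM raises the total by ΔR2 = .13 to .44) is hierarchical linear regression. A sketch with synthetic data; the effect sizes and generated variables are hypothetical, only the sample size matches the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 154  # sample size as in the study; the data themselves are synthetic

# Hypothetical predictors: high-frequency average (HFA) and STM
# sensitivity, partially correlated, each contributing to the SRT.
hfa = rng.normal(size=n)
stm = 0.4 * hfa + rng.normal(size=n)
srt = 0.5 * hfa + 0.5 * stm + rng.normal(size=n)  # simulated SRTs

def r_squared(X, y):
    """Ordinary least-squares R^2, with an intercept column added."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_hfa = r_squared(hfa[:, None], srt)                  # audiogram alone
r2_full = r_squared(np.column_stack([hfa, stm]), srt)  # audiogram + STM
delta_r2 = r2_full - r2_hfa  # variance STM explains beyond the audiogram
assert delta_r2 > 0
```

Because the two predictors are correlated, ΔR2 is smaller than the R2 STM achieves on its own, exactly as in the reported values (.13 vs. .28).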
Reijneveld, S.A.; de Boer, J.B.; Bean, T.; Korfker, D.G.
We assessed the effects of a stringent reception policy on the mental health of unaccompanied adolescent asylum seekers by comparing the mental health of adolescents in a restricted campus reception setting and in a setting offering more autonomy (numbers [response rates]: 69 [93%] and 53 [69%],
Full Text Available Self-perception of body posture and movement is achieved through multi-sensory integration, particularly the utilisation of vision and proprioceptive information derived from muscles and joints. Disruption to these processes can occur following a neurological accident, such as stroke, leading to sensory and physical impairment. Rehabilitation can be helped through the use of augmented visual and auditory biofeedback to stimulate neuro-plasticity, but the effective design and application of feedback, particularly in the auditory domain, is non-trivial. Simple auditory feedback was tested by comparing the stepping accuracy of normal subjects when given a visual spatial target (step length) and an auditory temporal target (step duration). A baseline measurement of step length and duration was taken using optical motion capture. Subjects (n=20) took 20 ‘training’ steps (baseline ±25%) using either an auditory target (950 Hz tone, bell-shaped gain envelope) or a visual target (spot marked on the floor) and were then asked to replicate the target step (length or duration, corresponding to training) with all feedback removed. Visual cues gave a mean percentage error of 11.5% (SD ±7.0%); auditory cues, 12.9% (SD ±11.8%). Visual cues elicit a high degree of accuracy both in training and follow-up un-cued tasks; despite the novelty of the auditory cues for subjects, their mean accuracy approached that for visual cues, and initial results suggest a limited amount of practice using auditory cues can improve performance.
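The accuracy figures above can be read as a mean absolute error of the reproduced step, expressed as a percentage of the target. A minimal sketch of that metric; the step-length data below are hypothetical:

```python
def mean_percentage_error(responses, target):
    """Mean absolute error of reproduced steps, as a percentage of the target."""
    return 100.0 * sum(abs(r - target) for r in responses) / (len(responses) * target)

# Hypothetical reproduced step lengths (m) for a 0.70 m visual target.
steps = [0.63, 0.78, 0.66, 0.74, 0.71]
err = mean_percentage_error(steps, target=0.70)
```

The same formula applies to the auditory condition with step durations in place of lengths.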
Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in the native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant
Chiaradia, Enrico Antonio; Weber, Enrico; Masseroni, Daniele; Battista Bischetti, Gian; Gandolfi, Claudio
Stormwater is the main cause of urban floods in many urbanized areas. Historically, stormwater management practices have focused on building infrastructures that achieve runoff attenuation through the storage of water volumes in large detention basins. However, this approach has proven insufficient to resolve the problem, and it is difficult to implement in areas with a dense urban fabric. Nowadays, water managers around the world are increasingly embracing "soft path" approaches that aim to manage excess urban runoff through Green Infrastructures, where detention capacity is provided by the retention properties of soil and vegetation elements. In line with these new sustainable stormwater management practices, the aim of this study is to promote a further paradigm shift with respect to traditional practice, i.e. to investigate the possibility of using the already existing green infrastructures of peri-urban rural areas as reception elements for the surplus of urban runoff. Many territories in Northern Italy, for example, are characterized by a high density of irrigation canals and agricultural fields that, in some cases, are isolated or pent-up inside urbanized areas. Both these elements may provide storage volumes for accumulating stormwater from urban areas. In this work, we implemented a holistic framework based on the Self-Organizing Map (SOM) technique, with the objective of producing a spatial map of the stormwater reception level that the rural environment can provide. We processed the physiographic characteristics of irrigation canals and agricultural fields through the SOM algorithm, obtaining as output a series of cluster groups with the same level of receptivity. This procedure was applied to an area of 1933 km2 around the city of Milan, and a map at 250 × 250 m resolution was obtained with three different levels of stormwater reception capacity. About 50% of the rural environment has a good level of reception and only 30
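The SOM clustering step described above can be illustrated with a minimal sketch. This is a toy under stated assumptions, not the authors' implementation: the grid size, decay schedules, and two-feature input are invented for illustration.

```python
import numpy as np

def train_som(data, grid=(3, 3), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal Self-Organizing Map: map feature vectors onto a small grid
    of prototype units, so that nearby units represent similar inputs."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    # grid coordinates, used by the neighbourhood function
    yy, xx = np.mgrid[0:h, 0:w]
    coords = np.stack([yy, xx], axis=-1).astype(float)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                 # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5     # shrinking neighbourhood
        for x in data:
            # best-matching unit (BMU): prototype closest to the input
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood centred on the BMU
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            weights += lr * g[..., None] * (x - weights)
    return weights

def assign_cluster(weights, x):
    """Cluster label = grid position of the best-matching unit."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

In the study's setting, each input vector would hold physiographic descriptors of a canal or field, and units of the trained grid would correspond to receptivity classes.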
Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony
It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ- and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…
Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.
Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular
The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.
Mokhemar, Mary Ann
This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…
Pierce, John P; Sargent, James D; White, Martha M; Borek, Nicolette; Portnoy, David B; Green, Victoria R; Kaufman, Annette R; Stanton, Cassandra A; Bansal-Travers, Maansi; Strong, David R; Pearson, Jennifer L; Coleman, Blair N; Leas, Eric; Noble, Madison L; Trinidad, Dennis R; Moran, Meghan B; Carusi, Charles; Hyland, Andrew; Messer, Karen
Non-cigarette tobacco marketing is less regulated and may promote cigarette smoking among adolescents. We quantified receptivity to advertising for multiple tobacco products and hypothesized associations with susceptibility to cigarette smoking. Wave 1 of the nationally representative PATH (Population Assessment of Tobacco and Health) study interviewed 10 751 adolescents who had never used tobacco. A stratified random selection of 5 advertisements for each of cigarettes, e-cigarettes, smokeless products, and cigars were shown from 959 recent tobacco advertisements. Aided recall was classified as low receptivity, and image-liking or favorite ad as higher receptivity. The main dependent variable was susceptibility to cigarette smoking. Among US youth, 41% of 12 to 13 year olds and half of older adolescents were receptive to at least 1 tobacco advertisement. Across each age group, receptivity to advertising was highest for e-cigarettes (28%-33%) followed by cigarettes (22%-25%), smokeless tobacco (15%-21%), and cigars (8%-13%). E-cigarette ads shown on television had the highest recall. Among cigarette-susceptible adolescents, receptivity to e-cigarette advertising (39.7%; 95% confidence interval [CI]: 37.9%-41.6%) was higher than for cigarette advertising (31.7%; 95% CI: 29.9%-33.6%). Receptivity to advertising for each tobacco product was associated with increased susceptibility to cigarette smoking, with no significant difference across products (similar odds for both cigarette and e-cigarette advertising; adjusted odds ratio = 1.22; 95% CI: 1.09-1.37). A large proportion of US adolescent never tobacco users are receptive to tobacco advertising, with television advertising for e-cigarettes having the highest recall. Receptivity to advertising for each non-cigarette tobacco product was associated with susceptibility to smoke cigarettes. Copyright © 2017 by the American Academy of Pediatrics.
J Gordon Millichap
The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.
Kazak Berument, Sibel; Güven, Ayşe Gül
A reliable, valid and original test to assess the receptive vocabulary skills of children in Turkey was not available. Thus, the purpose of the current study was to develop a receptive vocabulary test for Turkish children based on the Turkish language. For the Receptive Vocabulary Sub-Scale (TIFALDI-RT), 242 concrete and abstract words were chosen from word frequency lists and a comprehensive Turkish dictionary. Pilot data were collected from 648 children aged 2 to 13 from Ankara, and norm data were collected from a nationally representative sample of 3755 children. Item analysis (item difficulty, discrimination and distractor) was carried out on the pilot data and, based on the results, the total item number was reduced to 157. Further, three-parameter item response theory (IRT) analyses were carried out on the norm data using BILOG-MG (SSI, 2002), and the results indicated that the TIFALDI Receptive Vocabulary Sub-Scale could be reduced to 104 items to assess 2 to 12 year-old children's receptive vocabulary. Test-retest and internal consistency reliabilities were calculated for the whole sample and for age groups separately, and all the coefficients were high. For validity, the relationships between the Receptive Vocabulary Sub-Scale and both the WISC-R and the Ankara Developmental Screening Inventory (AGTE) were investigated. Once again, the TIFALDI Receptive Vocabulary Sub-Scale scores were found to be significantly related to WISC-R and AGTE scores. The TIFALDI Receptive Vocabulary Sub-Scale was developed on the basis of the Turkish language, and norm data were collected from a nationally representative sample. The TIFALDI-RT also had high reliability and validity. Thus, the TIFALDI-RT can be used to assess 2 to 12 year-old children's receptive vocabulary skills.
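The three-parameter logistic (3PL) model underlying the BILOG-MG analyses can be written as a one-line function. The parameter values in the test below are hypothetical, not those estimated for the TIFALDI items:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability that a child with
    ability `theta` answers an item correctly.
    a: discrimination, b: difficulty, c: pseudo-guessing (lower asymptote)."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

At `theta == b` the probability is exactly halfway between the guessing floor `c` and 1, which is why `b` is read as the item's difficulty.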
The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotations and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029
Bernstein, Joshua G W; Summers, Van; Grassi, Elena; Grant, Ken W
Hearing-impaired (HI) individuals with similar ages and audiograms often demonstrate substantial differences in speech-reception performance in noise. Traditional models of speech intelligibility focus primarily on average performance for a given audiogram, failing to account for differences between listeners with similar audiograms. Improved prediction accuracy might be achieved by simulating differences in the distortion that speech may undergo when processed through an impaired ear. Although some attempts to model particular suprathreshold distortions can explain general speech-reception deficits not accounted for by audibility limitations, little has been done to model suprathreshold distortion and predict speech-reception performance for individual HI listeners. Auditory-processing models incorporating individualized measures of auditory distortion, along with audiometric thresholds, could provide a more complete understanding of speech-reception deficits by HI individuals. A computational model capable of predicting individual differences in speech-recognition performance would be a valuable tool in the development and evaluation of hearing-aid signal-processing algorithms for enhancing speech intelligibility. This study investigated whether biologically inspired models simulating peripheral auditory processing for individual HI listeners produce more accurate predictions of speech-recognition performance than audiogram-based models. Psychophysical data on spectral and temporal acuity were incorporated into individualized auditory-processing models consisting of three stages: a peripheral stage, customized to reflect individual audiograms and spectral and temporal acuity; a cortical stage, which extracts spectral and temporal modulations relevant to speech; and an evaluation stage, which predicts speech-recognition performance by comparing the modulation content of clean and noisy speech. To investigate the impact of different aspects of peripheral processing
Hall, J; Hubbard, A; Neely, S; Tubis, A
How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...
This paper discusses the depiction of engravings taken from Vesalius's, Valverde de Hamusco's and Casserio's treatises in portraits during the 16th and the 17th centuries to understand better the reception of the Fabrica in Spain and England.
National Aeronautics and Space Administration — Invocon proposes the Surface-borne Time-Of-Reception Measurements (STORM) system as a method to locate the position of lightning strikes on aerospace vehicles....
Henriksen, Lisa; Feighery, Ellen C.; Schleicher, Nina C.; Fortmann, Stephen P.
Purpose This longitudinal study examined the influence of alcohol advertising and promotions on the initiation of alcohol use. A measure of receptivity to alcohol marketing was developed from research about tobacco marketing. Recall and recognition of alcohol brand names were also examined. Methods Data were obtained from in-class surveys of 6th, 7th, and 8th graders at baseline and 12-month follow-up. Participants who were classified as never drinkers at baseline (n=1,080) comprised the analysis sample. Logistic regression models examined the association of advertising receptivity at baseline with any alcohol use and current drinking at follow-up, adjusting for multiple risk factors, including peer alcohol use, school performance, risk taking, and demographics. Results At baseline, 29% of never drinkers either owned or wanted to use an alcohol-branded promotional item (high receptivity), 12% of students named the brand of their favorite alcohol ad (moderate receptivity), and 59% were not receptive to alcohol marketing. Approximately 29% of adolescents reported any alcohol use at follow-up; 13% reported drinking at least 1 or 2 days in the past month. Never drinkers who reported high receptivity to alcohol marketing at baseline were 77% more likely to initiate drinking by follow-up than those who were not receptive. Smaller increases in the odds of alcohol use at follow-up were associated with better recall and recognition of alcohol brand names at baseline. Conclusions Alcohol advertising and promotions are associated with the uptake of drinking. Prevention programs may reduce adolescents' receptivity to alcohol marketing by limiting their exposure to alcohol ads and promotions and by increasing their skepticism about the sponsors' marketing tactics. PMID:18155027
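As a hedged illustration of the kind of effect measure reported above, an unadjusted odds ratio with a Woolf-method confidence interval can be computed from a 2×2 table. The counts below are invented for illustration; the study's own estimates came from adjusted logistic regression models:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed & event, b = exposed & no event,
    c = unexposed & event, d = unexposed & no event."""
    or_ = (a / b) / (c / d)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi
```

With hypothetical counts such as `odds_ratio_ci(30, 70, 20, 80)`, the point estimate is (30/70)/(20/80) ≈ 1.71, i.e. roughly 71% higher odds in the exposed group.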
Joshua R Gold
The brain displays a remarkable capacity for both widespread and region-specific modifications in response to environmental challenges, with adaptive processes bringing about the reweighting of connections in neural networks putatively required for optimising performance and behaviour. As an avenue for investigation, studies centred on changes in the mammalian auditory system, extending from the brainstem to the cortex, have revealed a plethora of mechanisms that operate in the context of sensory disruption after insult, be it lesion-, noise-trauma-, drug-, or age-related. Of particular interest in recent work are those aspects of auditory processing which, after sensory disruption, change at multiple – if not all – levels of the auditory hierarchy. These include changes in excitatory, inhibitory and neuromodulatory networks, consistent with theories of homeostatic plasticity; functional alterations in gene expression and in protein levels; as well as broader network processing effects with cognitive and behavioural implications. Nevertheless, substantial debate remains regarding which of these processes may be mere sequelae of the original insult, and which may in fact be maladaptive, compelling further degradation of the organism's capacity to cope with its disrupted sensory context. In this review, we aim to examine how the mammalian auditory system responds in the wake of particular insults, and to disambiguate how the changes that develop might underlie a correlated class of phantom disorders, including tinnitus and hyperacusis, which are putatively brought about through maladaptive neuroplastic disruptions of the auditory networks governing the spatial and temporal processing of acoustic sensory information.
In many natural audiovisual events (e.g., a clap of the two hands), the visual signal precedes the sound and thus allows observers to predict when, where, and which sound will occur. Previous studies have already reported distinct neural correlates of temporal (when) versus phonetic/semantic (which) content in audiovisual integration. Here we examined the effect of visual prediction of auditory location (where) in audiovisual biological-motion stimuli by varying the spatial congruency between the auditory and visual parts of the audiovisual stimulus. Visual stimuli were presented centrally, whereas auditory stimuli were presented either centrally or at 90° azimuth. Typical subadditive amplitude reductions (AV − V < A) were found for the auditory N1 and P2 in both spatially congruent and incongruent conditions. The new finding is that the N1 suppression was larger for spatially congruent stimuli. A very early audiovisual interaction was also found at 30-50 ms in the spatially congruent condition, while no effect of congruency was found on the suppression of the P2. This indicates that visual prediction of auditory location can be coded very early in auditory processing.
King, Andrew J.; Parsons, Carl H.; Moore, David R.
Sound localization relies on the neural processing of monaural and binaural spatial cues that arise from the way sounds interact with the head and external ears. Neurophysiological studies of animals raised with abnormal sensory inputs show that the map of auditory space in the superior colliculus is shaped during development by both auditory and visual experience. An example of this plasticity is provided by monaural occlusion during infancy, which leads to compensatory changes in auditory spatial tuning that tend to preserve the alignment between the neural representations of visual and auditory space. Adaptive changes also take place in sound localization behavior, as demonstrated by the fact that ferrets raised and tested with one ear plugged learn to localize as accurately as control animals. In both cases, these adjustments may involve greater use of monaural spectral cues provided by the other ear. Although plasticity in the auditory space map seems to be restricted to development, adult ferrets show some recovery of sound localization behavior after long-term monaural occlusion. The capacity for behavioral adaptation is, however, task dependent, because auditory spatial acuity and binaural unmasking (a measure of the spatial contribution to the "cocktail party effect") are permanently impaired by chronically plugging one ear, both in infancy but especially in adulthood. Experience-induced plasticity allows the neural circuitry underlying sound localization to be customized to individual characteristics, such as the size and shape of the head and ears, and to compensate for natural conductive hearing losses, including those associated with middle ear disease in infancy.
Hess, Christi; Zettler-Greeley, Cynthia; Godar, Shelly P; Ellis-Weismer, Susan; Litovsky, Ruth Y
Growing evidence suggests that children who are deaf and use cochlear implants (CIs) can communicate effectively using spoken language. Research has reported that age of implantation and length of experience with the CI play an important role in predicting a child's linguistic development. In recent years, the increase in the number of children receiving bilateral CIs (BiCIs) has led to interest in new variables that may also influence the development of hearing, speech, and language abilities, such as length of bilateral listening experience and the length of time between the implantation of the two CIs. One goal of the present study was to determine how a cohort of children with BiCIs performed on standardized measures of language and nonverbal cognition. This study examined the relationship between performance on language and nonverbal intelligence quotient (IQ) tests and the ages at implantation of the first CI and second CI. This study also examined whether early bilateral activation is related to better language scores. Children with BiCIs (n = 39; ages 4 to 9 years) were tested on two standardized measures, the Test of Language Development and the Leiter International Performance Scale-Revised, to evaluate their expressive/receptive language skills and nonverbal IQ/memory. Hierarchical regression analyses were used to evaluate whether BiCI hearing experience predicts language performance. While large intersubject variability existed, on average, almost all the children with BiCIs scored within or above normal limits on measures of nonverbal cognition. Expressive and receptive language scores were highly variable, less likely to be above the normative mean, and did not correlate with Length of first CI Use, defined as length of auditory experience with one cochlear implant, or Length of second CI Use, defined as length of auditory experience with two cochlear implants. All children in the present study had BiCIs. Most IQ scores were either at or above that
Cameron, Sharon; Dillon, Harvey
The LiSN & Learn auditory training software was developed specifically to improve binaural processing skills in children with suspected central auditory processing disorder who were diagnosed as having a spatial processing disorder (SPD). SPD is defined here as a condition whereby individuals are deficient in their ability to use binaural cues to selectively attend to sounds arriving from one direction while simultaneously suppressing sounds arriving from another. As a result, children with SPD have difficulty understanding speech in noisy environments, such as in the classroom. To develop and evaluate the LiSN & Learn auditory training software for children diagnosed with the Listening in Spatialized Noise-Sentences Test (LiSN-S) as having an SPD. The LiSN-S is an adaptive speech-in-noise test designed to differentially diagnose spatial and pitch-processing deficits in children with suspected central auditory processing disorder. Participants were nine children (aged between 6 yr, 9 mo, and 11 yr, 4 mo) who performed outside normal limits on the LiSN-S. In a pre-post study of treatment outcomes, participants trained on the LiSN & Learn for 15 min per day for 12 weeks. Participants acted as their own control. Participants were assessed on the LiSN-S, as well as tests of attention and memory and a self-report questionnaire of listening ability. Performance on all tasks was reassessed after 3 mo where no further training occurred. The LiSN & Learn produces a three-dimensional auditory environment under headphones on the user's home computer. The child's task was to identify a word from a target sentence presented in background noise. A weighted up-down adaptive procedure was used to adjust the signal level of the target based on the participant's response. On average, speech reception thresholds on the LiSN & Learn improved by 10 dB over the course of training. As hypothesized, there were significant improvements in posttraining performance on the LiSN-S conditions
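The weighted up-down adaptive procedure mentioned above can be sketched in a few lines. This is a generic Kaernbach-style track; the step sizes and starting level are illustrative, not the LiSN & Learn settings:

```python
def weighted_up_down(responses, start=0.0, step_down=1.0, step_up=3.0):
    """Weighted up-down adaptive track: the target level drops after a
    correct response and rises after an error. Unequal steps set the
    convergence point (Kaernbach, 1991): the track settles where
    p_correct * step_down == (1 - p_correct) * step_up, i.e.
    p_correct = step_up / (step_up + step_down)  ->  75% here."""
    level = start
    track = [level]
    for correct in responses:
        level += -step_down if correct else step_up
        track.append(level)
    return track
```

The speech reception threshold would then be estimated from the levels visited late in the track, e.g. the mean of the last several reversals.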
A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.
Barrow, Jane H; Baldwin, Carryl L
The impact of interference from irrelevant spatial versus verbal cues is investigated in an auditory spatial Stroop task, and individual differences in navigation strategy are examined as a moderating factor. Verbal-spatial cue conflict in the auditory modality has not been extensively studied, and yet the potential for such conflict can be high in certain settings, such as modern aircraft and automobile cockpits, where multiple warning systems and verbally delivered instructions may compete for the operator's spatial attention. Two studies are presented in which participants responded to either the semantic meaning or the spatial location of directional words, which were presented from congruent and incongruent locations. A subset was selected from the larger sample for additional analyses based on their navigation strategy. Results demonstrated greater interference when participants were responding to the spatial location and thus attempting to ignore conflicting semantic information. Participants with a verbal navigation strategy paralleled this finding. Conversely, highly spatial navigators responded faster to spatially relevant information but did not show corresponding interference when trying to ignore spatial information. The findings suggest that people have fundamentally different approaches to the use of auditory spatial information that manifest at the early level of orienting toward a single word or sound. When designing spatial information displays and warning systems, particularly those with an auditory component, designers should ensure that either verbal-directional or nonverbal-spatial information is utilized by all alerts to reduce interference. © 2014, Human Factors and Ergonomics Society.
Localization of objects and events in the environment is critical for survival, as many perceptual and motor tasks rely on estimation of spatial location. Therefore, it seems reasonable to assume that spatial localizations should generally be accurate. Curiously, some previous studies have reported biases in visual and auditory localizations, but these studies have used small sample sizes and the results have been mixed. Therefore, it is not clear (1) whether the reported biases in localization responses are real (or due to outliers, sampling bias, or other factors), and (2) whether these putative biases reflect a bias in sensory representations of space or a priori expectations (which may be due to the experimental setup, instructions, or distribution of stimuli). Here, to address these questions, a dataset of unprecedented size (obtained from 384 observers) was analyzed to examine the presence, direction, and magnitude of sensory biases, and quantitative computational modeling was used to probe the underlying mechanism(s) driving these effects. The data revealed that, on average, observers were biased towards the center when localizing visual stimuli, and biased towards the periphery when localizing auditory stimuli. Moreover, quantitative analysis using a Bayesian causal inference framework suggests that while pre-existing spatial biases for central locations exert some influence, biases in the sensory representations of both visual and auditory space are necessary to fully explain the behavioral data. How are these opposing visual and auditory biases reconciled in conditions in which both auditory and visual stimuli are produced by a single event? Potentially, the bias in one modality could dominate, or the biases could interact or cancel out. The data revealed that when integration occurred in these conditions, the visual bias dominated, but the magnitude of this bias was reduced compared to unisensory conditions. Therefore, multisensory integration not only
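For Gaussian sensory likelihoods and a zero-mean Gaussian spatial prior, the Bayesian causal inference framework referenced above (in the style of Körding et al., 2007) yields a closed-form posterior probability that the two cues share a common cause. The noise and prior widths below are illustrative, not the study's fitted values:

```python
import math

def posterior_common_cause(xv, xa, sv=2.0, sa=8.0, sp=15.0, p_common=0.5):
    """Posterior probability that visual estimate xv and auditory estimate xa
    arose from a single source, given Gaussian sensory noise (sv, sa) and a
    zero-mean Gaussian prior over source location (sp)."""
    vv, va, vp = sv * sv, sa * sa, sp * sp
    # likelihood of (xv, xa) under one common source, location integrated out
    d1 = vv * va + vv * vp + va * vp
    l1 = math.exp(-0.5 * ((xv - xa) ** 2 * vp + xv ** 2 * va + xa ** 2 * vv) / d1) \
         / (2 * math.pi * math.sqrt(d1))
    # likelihood under two independent sources (each marginalized separately)
    l2 = math.exp(-0.5 * (xv ** 2 / (vv + vp) + xa ** 2 / (va + vp))) \
         / (2 * math.pi * math.sqrt((vv + vp) * (va + vp)))
    return l1 * p_common / (l1 * p_common + l2 * (1 - p_common))
```

As expected, nearby cues favor the common-cause hypothesis and widely separated cues favor independent sources, which is what lets the model predict when integration (and hence visual dominance of the bias) occurs.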
Scott, Brian H; Mishkin, Mortimer
Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.
Jackson, Thomas E; Sandramouli, Soupramanien
Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.
Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr
The aim of this case report was to promote reflection on the importance of speech therapy for the stimulation of a person with a learning disability associated with language and auditory processing disorders. Data analysis considered the auditory abilities deficits identified in the first auditory processing test, held on April 30, 2002, compared with the new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the disordered auditory abilities, in acco...
Sandro Franceschini; Piergiorgio Trevisan; Luca Ronconi; Sara Bertoni; Susan Colmar; Kit Double; Andrea Facoetti; Simone Gori
.... In our study, we tested reading skills and phonological working memory, visuo-spatial attention, auditory, visual and audio-visual stimuli localization, and cross-sensory attentional shifting in two...
Moerel, Michelle; De Martino, Federico; Santoro, Roberta; Ugurbil, Kamil; Goebel, Rainer; Yacoub, Essa; Formisano, Elia
We examine the mechanisms by which the human auditory cortex processes the frequency content of natural sounds. Through mathematical modeling of ultra-high field (7 T) functional magnetic resonance imaging responses to natural sounds, we derive frequency-tuning curves of cortical neuronal populations. With a data-driven analysis, we divide the auditory cortex into five spatially distributed clusters, each characterized by a spectral tuning profile. Beyond neuronal populations with simple sing...
Ghoul, Asila; Reichmuth, Colleen
Sea otters are threatened marine mammals that may be negatively impacted by human-generated coastal noise, yet information about sound reception in this species is surprisingly scarce. We investigated amphibious hearing in sea otters by obtaining the first measurements of absolute sensitivity and critical masking ratios. Auditory thresholds were measured in air and underwater from 0.125 to 40 kHz. Critical ratios derived from aerial masked thresholds from 0.25 to 22.6 kHz were also obtained. These data indicate that although sea otters can detect underwater sounds, their hearing appears to be primarily air adapted and not specialized for detecting signals in background noise.
Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko
In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, whether theta phase synchronization is able to select the to-be-manipulated modality remains uncertain. To address this issue, we recorded EEG data from subjects performing auditory-verbal and visual WM tasks; we compared theta synchronization when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both the auditory-verbal and the auditory-verbal/visual WM tasks, but not during visual manipulation. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation, but not during auditory-verbal manipulation. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization flexibly connects the brain areas that manipulate WM.
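Cross-regional theta synchronization of the sort reported here is commonly quantified with a phase-locking value (PLV) computed across trials; whether the authors used exactly this measure is an assumption. A minimal sketch on synthetic phase data:

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """PLV across trials: ~1 for a consistent phase lag, ~0 for random lags."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(0)
n_trials = 200

# Synchronized pair: constant theta phase lag plus small jitter
frontal = rng.uniform(0, 2 * np.pi, n_trials)
temporal = frontal - 0.5 + rng.normal(0, 0.1, n_trials)

# Unsynchronized pair: independent phases
parietal = rng.uniform(0, 2 * np.pi, n_trials)

plv_sync = phase_locking_value(frontal, temporal)   # near 1
plv_rand = phase_locking_value(frontal, parietal)   # near 0
```

In a real analysis the phases would come from a time-frequency decomposition of the EEG (e.g., a wavelet transform in the 4-8 Hz band) at each electrode pair.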
Santos, Maria de Fátima Oliveira dos; Fernandes, Maria das Graças Melo; Oliveira, Harison José de
Receptiveness is a tool that contributes to the humanization of health care, especially with regard to its practice in the field of anesthesiology. The aim was to investigate, through the reports of anesthesiologists, how these professionals understand the phenomenon of receptiveness. This is a descriptive qualitative study conducted at the Hospital Universitário Lauro Wanderley (HULW), in the city of João Pessoa, PB. The sample consisted of 16 attending anesthesiologists, 25% female and 75% male. Data were collected through interviews guided by two semi-structured questions from September to October 2010. Data analysis was performed using the technique of Collective Subject Discourse (CSD). The anesthesiologists' answer to the following question was considered as this study's result: "What do you think about the practice of receptiveness as a strategy to humanize the doctor-patient relationship?" The doctors' CSD presented two central ideas: 1) a holistic approach to the patient; 2) a strategy that improves the doctor-patient relationship. When asked about the strategies they adopt to humanize the relationship with the patient at the time of reception, their reports were organized around three central ideas: 1) observation of patients' rights; 2) therapeutic communication; 3) the preanesthetic visit. The physicians involved in the research recognized the value of receptiveness as a strategy to humanize the doctor-patient relationship. Receptiveness toward the patient in the course of anesthesia is very important because it allows the professional to listen attentively to the patient's history as part of the humanized care process, which improves the interaction between doctor and patient. Copyright © 2012 Elsevier Editora Ltda. All rights reserved.
Anatolii M. Malivskyi
Full Text Available Purpose. The article aims to characterize Husserl's reception of Descartes' anthropological rationalism. This requires solving the following tasks in sequence: (1) schematically express the modern vision of the basic intentions of philosophizing as anthropological rationalism; (2) highlight the main points of Husserl's reception of Descartes' rationalism as deanthropologizing, and analyze the radicalization of its basic design as reanthropologizing. Conclusions. In clarifying how Husserl received and completed Descartes' philosophical project, the author finds that the originality of his reception of Descartes' anthropological rationalism lies in a paradoxical union: denying the presence of anthropology in the base project while rediscovering its key role in the radicalization of Descartes. Thinking along this path, Husserl rediscovers some key ideas of the French philosopher rooted in his anthropological rationalism: the basic intention of ambivalence, the rejection of Descartes' panrationalism, the recognition that the philosophical method is irreducible to the mathematical, and the constitutive presence of the human in the new rationalism. The author sees prospects for further research in the meaningful relationship and continuity between the two great thinkers, namely the personal nature of philosophizing and the ethical focus of their searching. Originality. The appeal to Husserl's reception of the Cartesian project confirms the thesis of the essential importance of Descartes' basic anthropological project. The received version of Husserl's reception of Descartes' base project reproduces superficial period stereotypes, which link the philosopher's quest with the natural sciences and neglect its anthropological dimensions. The proposed version of the radicalization of Descartes' project
Full Text Available The theoretical focus of this paper is the context of reception experienced by migrants in their new homeland. In particular we examine relations between established residents and newcomers or immigrants from Cuba, Haiti, and other Caribbean and Latin American nations in South Florida. Based upon long term fieldwork among late adolescents and young adults, we develop a framework and give ethnographic examples of established resident-newcomer relations that influence the contexts of reception for immigrants in South Florida. These contexts range from positive to negative, vary between national and local settings, and change over time. URN: urn:nbn:de:0114-fqs0903156
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
Objectives: This study (a) examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of three contrasting types of background music, and (b) compared performance based upon listener groups: CI recipients using conventional long-electrode (LE) devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing (NH) adults. Methods: We tested 154 LE CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 NH adults on closed-set recognition of spondees presented in three contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Outcomes: Signal-to-noise thresholds for speech in music (SRTM) were examined in relation to measures of speech recognition in background noise and multi-talker babble, pitch perception, and music experience. Results: SRTM thresholds varied as a function of category of background music, group membership (LE, Hybrid, NH), and age. Thresholds for speech in background music were significantly correlated with measures of pitch perception and speech-in-background-noise thresholds; auditory status was an important predictor. Conclusions: Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music. PMID:23342550
Gfeller, Kate; Turner, Christopher; Oleson, Jacob; Kliethermes, Stephanie; Driscoll, Virginia
This study examined speech recognition abilities of cochlear implant (CI) recipients in the spectrally complex listening condition of 3 contrasting types of background music, and compared performance based upon listener groups: CI recipients using conventional long-electrode devices, Hybrid CI recipients (acoustic plus electric stimulation), and normal-hearing adults. We tested 154 long-electrode CI recipients using varied devices and strategies, 21 Hybrid CI recipients, and 49 normal-hearing adults on closed-set recognition of spondees presented in 3 contrasting forms of background music (piano solo, large symphony orchestra, vocal solo with small combo accompaniment) in an adaptive test. Signal-to-noise ratio thresholds for speech in music were examined in relation to measures of speech recognition in background noise and multitalker babble, pitch perception, and music experience. The signal-to-noise ratio thresholds for speech in music varied as a function of category of background music, group membership (long-electrode, Hybrid, normal-hearing), and age. The thresholds for speech in background music were significantly correlated with measures of pitch perception and thresholds for speech in background noise; auditory status was an important predictor. Evidence suggests that speech reception thresholds in background music change as a function of listener age (with more advanced age being detrimental), structural characteristics of different types of music, and hearing status (residual hearing). These findings have implications for everyday listening conditions such as communicating in social or commercial situations in which there is background music.
Full Text Available Auditory Scene Analysis provides a useful framework for understanding atypical auditory perception in autism. Specifically, a failure to segregate the incoming acoustic energy into distinct auditory objects might explain the aversive reaction autistic individuals have to certain auditory stimuli or environments. Previous research with non-autistic participants has demonstrated the presence of an object-related negativity (ORN) in the auditory event-related potential that indexes pre-attentive processes associated with auditory scene analysis. Also evident is a later P400 component that is attention dependent and thought to be related to decision-making about auditory objects. We sought to determine whether there are differences between individuals with and without autism in the levels of processing indexed by these components. Electroencephalography (EEG) was used to measure brain responses from a group of 16 autistic adults, and 16 age- and verbal-IQ-matched typically-developing adults. Auditory responses were elicited using lateralized dichotic pitch stimuli in which inter-aural timing differences create the illusory perception of a pitch that is spatially separated from a carrier noise stimulus. As in previous studies, control participants produced an ORN in response to the pitch stimuli. However, this component was significantly reduced in the participants with autism. In contrast, processing differences were not observed between the groups at the attention-dependent level (P400). These findings suggest that autistic individuals have difficulty segregating auditory stimuli into distinct auditory objects, and that this difficulty arises at an early pre-attentive level of processing.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
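The stimulus design described above, a probe tone, a white-noise masker notch-filtered ±1/6 octave around the tone frequency, and amplitude modulation to "frequency-tag" the masker, can be sketched as follows (the sampling rate, tone frequency, and filter order are illustrative assumptions, not the study's parameters):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 16000                      # Hz, assumed sampling rate
dur, f_tone = 1.0, 1000.0       # 1 s stimulus, 1 kHz probe tone (assumed)
t = np.arange(int(fs * dur)) / fs

# Pure tone at the probe frequency
tone = np.sin(2 * np.pi * f_tone * t)

# White noise with a notch of +/- 1/6 octave around the tone frequency
low, high = f_tone * 2 ** (-1 / 6), f_tone * 2 ** (1 / 6)
b, a = butter(4, [low, high], btype='bandstop', fs=fs)
noise = filtfilt(b, a, np.random.default_rng(0).standard_normal(t.size))

# 39 Hz amplitude modulation "tags" the masker for one ear (41 Hz for the other)
masker = (1 + np.sin(2 * np.pi * 39.0 * t)) / 2 * noise
```

The tagging frequency shows up as a steady-state response in the neural recording, so attention effects on the masker can be read out independently of responses to the tone.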
Carrat, R; Thillier, J L; Durivault, J
The liminal auditory threshold for white noise and for coloured noise was determined from a statistical survey of a group of 21 young people with normal hearing. The normal auditory threshold for white noise, with a spectrum covering the whole of the auditory field, is −0.57 ± 8.78 dB. The normal auditory threshold for bands of filtered white noise (coloured noise with a central frequency corresponding to the pure frequencies usually employed in tonal audiometry) describes a typical curve which, instead of being homothetic to the usual tonal curves, sinks at low frequencies and then rises. The peak of this curve is replaced by a broad plateau ranging from 750 to 6000 Hz and contained in the concavity of the liminal tonal curves. The ear is therefore less sensitive but, at limited acoustic pressure, white noise first impinges with the same discrimination upon the whole of the conversational zone of the auditory field. Determination of the audiometric threshold for white noise constitutes a synthetic method of measuring acuteness of hearing which considerably reduces the amount of manipulation required.
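The reported group statistic (a mean of −0.57 dB with a standard deviation of 8.78) is a descriptive summary over individual thresholds; a minimal sketch of such a summary on hypothetical data:

```python
import numpy as np

# Hypothetical white-noise thresholds (dB) for a small normal-hearing group
thresholds = np.array([-4.0, 2.5, -1.0, 0.5, -3.5, 1.0, -0.5, 2.0])

mean = thresholds.mean()
sd = thresholds.std(ddof=1)          # sample standard deviation

# A conventional "normal range" of mean +/- 2 SD
lo, hi = mean - 2 * sd, mean + 2 * sd
```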
Hamker, Fred H; Zirnsak, Marc
Visual attention is generally considered to facilitate the processing of the attended stimulus. Its mechanisms, however, are still under debate. We have developed a systems-level model of visual attention which predicts that attentive effects emerge by the interactions between different brain areas. Recent physiological studies have provided evidence that attention also alters the receptive field structure. For example, V4 receptive fields typically shrink and shift towards the saccade target around saccade onset. We show that receptive field dynamics are inherently predicted by the mechanism of feedback in our model. According to the model an oculomotor feedback signal from an area involved in the competition for the saccade target location, e.g. the frontal eye field, enhances the gain of V4 cells. V4 receptive field dynamics can be observed after pooling the gain modulated responses to obtain a certain degree of spatial invariance. The time course of the receptive field dynamics in the model resemble those obtained from macaque V4.
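The model's central claim, that receptive-field shrinkage and shift fall out of gain modulation of afferents followed by spatial pooling, can be illustrated with a toy 1-D computation (Gaussian afferent RFs, pooling weights, and gain field are illustrative assumptions, not the model's actual parameters):

```python
import numpy as np

x = np.linspace(-10, 10, 401)           # probe positions in visual space (deg)
centers = np.arange(-6.0, 6.5, 0.5)     # afferent RF centers tiling space

def rf(center, sigma=2.0):
    """Gaussian afferent receptive field over probe positions."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# Pooling cell: weighted sum of afferents, weights centred on x = 0
w = np.exp(-0.5 * (centers / 3.0) ** 2)

# Oculomotor feedback: gain field centred on the saccade target at x = 4
target = 4.0
gain = 1.0 + 3.0 * np.exp(-0.5 * ((centers - target) / 2.0) ** 2)

rf_plain = sum(wi * rf(c) for wi, c in zip(w, centers))
rf_atten = sum(wi * g * rf(c) for wi, g, c in zip(w, gain, centers))

def com(r):
    """Centre of mass of a pooled receptive-field profile."""
    return (x * r).sum() / r.sum()

shift = com(rf_atten) - com(rf_plain)   # positive: pooled RF moves toward target
```

The gain field never moves any afferent RF; the apparent shift of the pooled RF toward the saccade target emerges purely from reweighting and pooling, which is the mechanism the model proposes for the V4 observations.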
Soveri, Anna; Tallus, Jussi; Laine, Matti; Nyberg, Lars; Bäckman, Lars; Hugdahl, Kenneth; Tuomainen, Jyrki; Westerhausen, René; Hämäläinen, Heikki
We studied the effects of training on auditory attention in healthy adults with a speech perception task involving dichotically presented syllables. Training involved bottom-up manipulation (facilitating responses from the harder-to-report left ear through a decrease of right-ear stimulus intensity), top-down manipulation (focusing attention on the left-ear stimuli through instruction), or their combination. The results showed significant training-related effects for top-down training. These effects were evident as higher overall accuracy rates in the forced-left dichotic listening (DL) condition that sets demands on attentional control, as well as a response shift toward left-sided reports in the standard DL task. Moreover, a transfer effect was observed in an untrained auditory-spatial attention task involving bilateral stimulation where top-down training led to a relatively stronger focus on left-sided stimuli. Our results indicate that training of attentional control can modulate the allocation of attention in the auditory space in adults. Malleability of auditory attention in healthy adults raises the issue of potential training gains in individuals with attentional deficits.
Full Text Available Size perception can be influenced by several visual cues, such as spatial cues (e.g., depth or vergence) and temporal contextual cues (e.g., adaptation to steady visual stimulation). Nevertheless, perception is generally multisensory, and other sensory modalities, such as audition, can contribute to the functional estimation of the size of objects. In this study, we investigate whether auditory stimuli at different sound pitches can influence visual size perception after visual adaptation. To this aim, we used an adaptation paradigm (Pooresmaeili et al., 2013) in three experimental conditions: visual-only, visual-sound at 100 Hz, and visual-sound at 9,000 Hz. We asked participants to judge the size of a test stimulus in a size discrimination task. First, we obtained a baseline for all conditions. In the visual-sound conditions, the auditory stimulus was concurrent with the test stimulus. Second, we repeated the task while presenting an adapter (twice as big as the reference stimulus) before the test stimulus. We replicated the size aftereffect in the visual-only condition: the test stimulus was perceived as smaller than its physical size. The new finding is that the auditory stimuli affected the perceived size of the test stimulus after visual adaptation: the low-frequency sound decreased the effect of visual adaptation, making the stimulus appear bigger than in the visual-only condition, whereas the high-frequency sound had the opposite effect, making the test size appear even smaller.
The article summarizes information on assistive devices (hearing aids, cochlear implants, tactile aids, visual aids) and rehabilitation procedures (auditory training, speechreading, cued speech, and speech production) to aid the auditory learning of the hearing impaired.(DB)
Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.
Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals
Learning to relate with people in their own style is important in helping to understand why they react the way they do. The purpose of this study therefore was to determine the differences in the receptive learning styles of introverts, ambiverts and extroverts in Senior High Schools (SHS) in the Sekondi-Takoradi Metropolis, ...
Benjamin R. Harris
Full Text Available The publication of educational standards inspires a variety of responses---from wholesale acceptance and deployment to criticism and blame. The author of this paper contends that the revision of the ACRL’s Information Literacy Competency Standards for Higher Education must be accompanied by a critical, conscious, and conscientious reception by librarians and information literacy advocates.
The aim of this paper is to describe Dewey's reception in the Spanish-speaking countries that constitute the Hispanic world. Without any doubt, it can be said that in the past century Spain and the countries of South America have been a world apart, lagging far behind the mainstream Western world. It includes a number of names and facts about the…
Adachi-Mejia, Anna M.; Sutherland, Lisa A.; Longacre, Meghan R.; Beach, Michael L.; Titus-Ernstoff, Linda; Gibson, Jennifer J.; Dalton, Madeline A.
Objective: This study examined the relationship between adolescent weight status and food advertisement receptivity. Design: Survey-based evaluation with data collected at baseline (initial and at 2 months), and at follow-up (11 months). Setting: New Hampshire and Vermont. Participants: Students (n = 2,281) aged 10-13 in 2002-2005. Main Outcome…
The purpose of the present study was to understand the reciprocal, bidirectional longitudinal relation between joint book reading and English receptive vocabulary. To address the research goals, a nationally representative sample of Head Start children, the Head Start Family and Child Experiences Survey (2003 cohort), was used for analysis. The…
, they were willing/not willing to utilise the distance learning mode to access university education. The analysis and tests of hypotheses focused strictly on receptivity to distance learning in relation to age group, gender, marital status, number of ...
This article presents findings from an empirical survey conducted at the Nelson Mandela University Refugee Rights Centre based at the Nelson Mandela Metropolitan University in Port Elizabeth, to establish the perceptions and experiences of refugees/asylum seekers of the Refugee Reception Centre in Port Elizabeth in ...
Boyle, James; McCartney, Elspeth; O'Hare, Anne; Law, James
Studies indicate that language impairment that cannot be accounted for by factors such as below-average non-verbal ability, hearing impairment, behaviour or emotional problems, or neurological impairments affects some 6% of school-age children. Language impairment with a receptive language component is more resistant to intervention than specific…
Buitendag, M; Uys, I; Louw, B
This article focuses on the psychometric characteristics of "Die Afrikaanse Reseptiewe Woordeskattoets (ARW)". The psychometric analysis indicates the test forms (A and B) to be equivalent, reliable and valid and that the ARW can thus be used with confidence as a screening and re-evaluation device for the evaluation of receptive vocabulary as well as intelligence.
Batista, Roselene; Horst, Marlise
Researchers have developed several tests of receptive vocabulary knowledge suitable for use with learners of English, but options are few for learners of French. This situation motivated the authors to create a new vocabulary size measure for French, the "Test de la taille du vocabulaire" (TTV). The measure is closely modelled on…
and because it has been and still is considered peripheral and sectarian. This volume presents a critical edition of an anonymous Karaite commentary on the Book of Jeremiah presented in both the original Arabic and in English translation. The volume uses this text to examine the commonalities and differences...... between the Rabbinate and the Karaite reception and interpretation of the Hebrew Bible....
Heusinkveld, H.S.; Benders, J.G.J.M.
The business community is continuously confronted with allegedly new concepts. These are often temporarily intensely advocated, yet are at the same time likely to be portrayed as transitory or ‘faddish’ phenomena. To trace the reception of these concepts, this paper examines the Dutch discourse on
Nadal, Marcos; Vartanian, Oshin; Skov, Martin
We commend Menninghaus et al. for tackling the role of negative emotions in art reception. However, their model suffers from shortcomings that reduce its applicability to empirical studies of the arts: poor use of evidence, lack of integration with other models, and limited derivation of testable...
DESCRIPTION: The Karaites emerged as a school of thought within Middle Eastern Judaism in the 8th century. The Karaites were a “reading community” whose intellectual activity and daily lives were based around the divine scriptures. Over time Karaism became one of the two main competing schools of...... between the Rabbinate and the Karaite reception and interpretation of the Hebrew Bible....
Gasparini, Clelia; Andreatta, Gabriele; Pilastro, Andrea
Females of many internally fertilizing species are able to store sperm for a long time, reducing the risk of sperm limitation. However, it also means that males can attempt to mate outside females' receptive period, potentially increasing the level of sperm competition and exacerbating sexual conflict over mating. The guppy (Poecilia reticulata), an internally fertilizing fish, is a model system of such competition and conflict. Female guppies accept courtship and mate consensually only during receptive periods of the ovarian cycle but receive approximately one (mostly forced) mating attempt per minute both during and outside their sexually receptive phase. In addition, females can store viable sperm for months. We expected that guppy females would disfavour sperm received during their unreceptive period, possibly by modulating the quality and/or quantity of the components present in the ovarian fluid (OF) over the breeding cycle. Ovarian fluid has been shown to affect sperm velocity, a determinant of sperm competition success in this and other fishes. We found that in vitro sperm velocity is slower in OF collected from unreceptive females than in OF from receptive females. Visual stimulation with a potential partner prior to collection did not significantly affect in vitro sperm velocity. These results suggest that sperm received by unreceptive females may be disfavoured, as sperm velocity likely affects the migration process and the number of sperm that reach storage sites.
Grigorescu, Cosmin; Petkov, Nicolai; Westenberg, Michel A.
We propose a biologically motivated computational step, called nonclassical receptive field (non-CRF) inhibition, more generally surround inhibition or suppression, to improve contour detection in machine vision. Non-CRF inhibition is exhibited by 80% of the orientation-selective neurons in the
Grigorescu, C; Petkov, N; Westenberg, MA; Bulthoff, HH; Lee, SW; Poggio, TA; Wallraven, C
We propose a biologically motivated computational step, called non-classical receptive field (non-CRF) inhibition, to improve the performance of contour detectors. We introduce a Gabor energy operator augmented with non-CRF inhibition, which we call the bar cell operator. We use natural images with
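The bar cell operator, Gabor energy followed by subtraction of pooled surround energy, can be sketched roughly as follows (the kernel sizes, the suppression factor, and the ring-shaped surround weighting are illustrative assumptions, not the authors' exact operator):

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor(sigma=4.0, wavelength=8.0, theta=0.0, phase=0.0, size=21):
    """Oriented Gabor kernel (elongated Gaussian envelope times a cosine)."""
    half = size // 2
    y, xg = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xg * np.cos(theta) + y * np.sin(theta)
    yr = -xg * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + 0.25 * yr ** 2) / (2 * sigma ** 2))
    return env * np.cos(2 * np.pi * xr / wavelength + phase)

def gabor_energy(img, theta):
    """Quadrature-pair (even + odd) Gabor energy, as in a complex cell."""
    even = fftconvolve(img, gabor(theta=theta, phase=0.0), mode='same')
    odd = fftconvolve(img, gabor(theta=theta, phase=np.pi / 2), mode='same')
    return np.hypot(even, odd)

def surround_suppressed(img, theta, alpha=1.0):
    """Subtract pooled surround energy (non-CRF inhibition), clip at zero."""
    e = gabor_energy(img, theta)
    half = 15
    y, xg = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = xg ** 2 + y ** 2
    ring = np.maximum(np.exp(-r2 / (2 * 8.0 ** 2))
                      - np.exp(-r2 / (2 * 4.0 ** 2)), 0)
    ring /= ring.sum()                       # unit-sum annular weighting
    surround = fftconvolve(e, ring, mode='same')
    return np.maximum(e - alpha * surround, 0)

# Demo: an isolated vertical line keeps suppressed energy above zero
img = np.zeros((64, 64))
img[:, 32] = 1.0
energy = gabor_energy(img, 0.0)
suppressed = surround_suppressed(img, 0.0)
```

The point of the subtraction is that texture regions, where surround energy is as high as centre energy, are suppressed, while isolated contours, whose surround is comparatively empty, survive.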
Tumin, Anatoli; Edwards, Luke
Receptivity of high-speed boundary layers is considered within the framework of fluctuating hydrodynamics, where stochastic forcing is introduced through the fluctuating shear stress and heat flux stemming from kinetic fluctuations (thermal noise). The forcing generates unstable modes whose downstream amplification may lead to transition. An example of a high-enthalpy (16.53 MJ/kg) boundary layer at relatively low wall temperatures (Tw = 1000-3000 K), free-stream temperature Te = 834 K, and low pressure (0.0433 atm) is considered. Dissociation at the chosen flow parameters is still insignificant. The stability and receptivity analyses are carried out using a solver for a calorically perfect gas with an effective Prandtl number and specific-heat ratio. The receptivity phenomenon is unchanged by the inclusion of real-gas effects in the mean flow profiles; this is attributed to the fact that the mechanism for receptivity to kinetic fluctuations is localized near the upper edge of the boundary layer. Amplitudes of the generated wave packets are larger downstream in the case including real-gas effects. It was found that the spectra in both cases include supersonic second Mack unstable modes despite the temperature ratio Tw/Te > 1. Supported by AFOSR.
Dietz, Anthony; Sheehan, Daniel; Davis, Sanford (Technical Monitor)
The receptivity of a laminar boundary layer to an isolated three-dimensional convected disturbance is investigated in a low-speed wind tunnel experiment. The disturbance is created by the short-duration pulsed displacement of a small low-aspect-ratio wing located upstream of a flat plate. The height of the wing is set so that the convected disturbance grazes the edge of the flat-plate boundary layer. A receptivity site is provided by a two-dimensional roughness strip on the surface of the plate. The different propagation speeds of acoustic, convected and instability waves cause the various wave packets from the pulsed displacement to arrive at a downstream measurement station at different times, separating the phenomena and allowing them to be studied independently. Ensemble-averaged measurements are made with and without roughness on the plate. Preliminary analysis of the measurements suggests the presence of a two-dimensional T-S wave packet arising from an interaction between an acoustic wave and the roughness, and a three-dimensional T-S wave packet arising from an interaction between the localized convected disturbance and the roughness strip. The growth rates and spatial characteristics of the disturbances and the instability wave packets are measured as they propagate downstream.
Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan
Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.
Lunney, David; Morrison, Robert C.
Our research group has been working for several years on the development of auditory alternatives to visual graphs, primarily in order to give blind science students and scientists access to instrumental measurements. In the course of this work we have tried several modes for auditory presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music, and various non-musical sounds. Our most successful translation of data into sound has been presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two compounds are visibly different, their musical patterns will be audibly different. Other possibilities for auditory presentation of data are also described, among them listening to Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).
Chen, Sufen; Sussman, Elyse S.
The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference between the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that sound context can modulate the effect of deviance level on the processing of irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in the change detection processes that lead to auditory distraction.
Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post-cortical-stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.
Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A John; van Ee, Raymond
After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.
Garrido-Gómez, Tamara; Quiñonero, Alicia; Antúnez, Oreto; Díaz-Gimeno, Patricia; Bellver, Jose; Simón, Carlos; Domínguez, Francisco
Are there any proteomic differences between receptive (R) and non-receptive (NR) endometrial receptivity array (ERA)-diagnosed endometria obtained on the same day of a hormonal replacement therapy (HRT) treatment cycle? There is a different proteomic signature between R and NR ERA-diagnosed endometrium obtained on the same day of HRT cycles. The human endometrial transcriptome has been extensively investigated in the last decade resulting in the development of a new diagnostic test based on the transcriptomic signature of the window of implantation (WOI). Much less is known about the proteomics derived from the transcripts present during the WOI. This study was a basic proteomic analysis of human endometrial biopsies taken from twelve IVF patients. Human endometrial biopsies were collected during HRT cycles after 5 days of progesterone (P) administration, and diagnosed as receptive (R; n = 6) or non-receptive (NR; n = 6) by the ERA test. Endometrial proteins were extracted, labelled and separated using differential in-gel electrophoresis (DIGE). Proteins were identified using mass spectrometry, followed up by in silico analysis. Validation studies using western blots and immunolocalization were performed for the progesterone receptor membrane component 1 (PGRMC1) and annexin A6 (ANXA6) proteins. DIGE analysis followed by protein identification by MALDI-MS and database searches revealed 24 differentially expressed proteins in R versus NR samples. In silico analysis showed two pathways which were significantly different between R and NR samples. Expression of PGRMC1 and ANXA6 was validated and localized by western blots and immunohistochemistry. These results highlight these proteins as key targets likely to be important in the comprehension of human endometrial receptivity. This was mainly a descriptive study with no functional studies on the proteins found. We also used a low number of human endometrial samples for the DIGE analysis. This study identified the
Brad K. Blitz
The arrival of more than one million migrants, many of them refugees, has proved a major test for the European Union. Although international relief and monitoring agencies have been critical of makeshift camps in Calais and Eidomeni, where infectious disease and overcrowding present major health risks, few have examined the nature of the official reception system and its impact on health delivery. Drawing upon research findings from an Economic and Social Research Council (ESRC)-funded project, this article considers the physical and mental health of asylum-seekers in transit and analyses how the closure of borders has engendered health risks for populations in recognised reception centres in Sicily and in Greece. Data gathered by means of a survey administered in Greece (300) and in Sicily (400), and complemented by in-depth interviews with migrants (45) and key informants (50), including representatives of government offices, humanitarian and relief agencies, NGOs and activist organisations, are presented to offer an analysis of the reception systems in the two frontline states. We note that medical provision varies significantly from one centre to another and that centre managers play a critical role in the transmission of vital information. A key finding is that, given such disparity, the criteria used by the UNHCR to grade health services in reception do not address the substantive issues that prevent refugees from accessing health services, even when provided on site. Health provision is not as recorded in UNHCR reporting; rather, there are critical gaps between provision, awareness, and access for refugees in reception systems in Sicily and in Greece. This article concludes that there is a great need for more information campaigns to direct refugees to essential services.
Schreuder, Martijn; Rost, Thomas; Tangermann, Michael
Representing an intuitive spelling interface for brain-computer interfaces (BCI) in the auditory domain is not straightforward. In consequence, all existing approaches based on event-related potentials (ERP) rely at least partially on a visual representation of the interface. This online study introduces an auditory spelling interface that eliminates the necessity for such a visualization. In up to two sessions, a group of healthy subjects (N = 21) was asked to use a text entry application, utilizing the spatial cues of the AMUSE paradigm (Auditory Multi-class Spatial ERP). The speller relies on the auditory sense both for stimulation and the core feedback. Without prior BCI experience, 76% of the participants were able to write a full sentence during the first session. By exploiting the advantages of a newly introduced dynamic stopping method, a maximum writing speed of 1.41 char/min (7.55 bits/min) could be reached during the second session (average: 0.94 char/min, 5.26 bits/min). For the first time, the presented work shows that an auditory BCI can reach performances similar to state-of-the-art visual BCIs based on covert attention. These results represent an important step toward a purely auditory BCI.
Bratakos, M S; Reed, C M; Delhorne, L A; Denesvich, G
The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality. The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T. For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These
Approved for public release; distribution is unlimited. [Fragmentary record: the abstract concerns steady-state acoustic threats and research into the effects of various types of headgear on directional sound detection; the research infrastructure available at ARL-HRED includes a unique world-class multispace auditory spatial perception laboratory.]
Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde
Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.
Bollywood films are increasingly drawing scholarly attention for their global appeal and reception. Transnational studies have examined the reception of Bollywood in Australia, Britain, Scotland, South Africa, Russia, the United States of America, Bangladesh and Nepal. However, academic work on the Southeast Asian reception of these films is scarcer. This research seeks to fill this gap by looking at the reception of Bollywood in Malaysia from 1991-2012. The thesis adopts a...
Marcella de Castro Campos Velten
Spatial region concepts such as front, back, left and right reflect our typical interaction with space, and the corresponding surrounding regions have different statuses in memory. We examined the representation of spatial directions in auditory space, specifically how far natural response actions, such as orientation movements towards a sound source, affect the categorization of egocentric auditory space. While standing in the middle of a circle of 16 loudspeakers, participants were presented with acoustic stimuli coming from the loudspeakers in randomized order and verbally described their directions using the concept labels front, back, left, right, front-right, front-left, back-right and back-left. Response actions varied in three blocked conditions: (1) facing front, (2) turning the head and upper body to face the stimulus, and (3) turning the head and upper body plus pointing with the hand and outstretched arm towards the stimulus. In addition to a protocol of the verbal utterances, motion capture and video recording generated a detailed corpus for subsequent analysis of the participants' behavior. Chi-square tests revealed an effect of response condition for directions within the left and right sides. We conclude that movement-based response actions influence the representation of auditory space, especially within the side regions.
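The chi-square analysis mentioned in the abstract above can be illustrated with a hand-rolled test of independence. The 2x2 layout and the counts below are hypothetical, purely for illustration; the study's actual contingency tables are not given in the abstract.

```python
# Hypothetical counts: rows = two response conditions,
# columns = "canonical label used" vs. "other label used".
observed = [[40, 10],
            [20, 30]]

def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a 2x2 table."""
    row_sums = [sum(r) for r in table]
    col_sums = [sum(c) for c in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row_sums[i] * col_sums[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

stat = chi_square_2x2(observed)
# Compare against the critical value for df = 1 at alpha = 0.05 (3.841):
effect_of_condition = stat > 3.841
```

In practice a library routine (e.g. a contingency-table test from a statistics package) would be used instead of the hand computation, but the comparison of observed against expected counts is the same.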
Reviews various lines of evidence on the relationship between age and the reception of major innovations in science. Examines the possibility that the age patterning of reception may vary over time. Reports the potential importance of age in the reception of ideas while rejecting the presumption that advanced age leads to increased resistance. (YP)
Cornish, K. M.; Munir, F.
Receptive and expressive language skills were assessed in 13 British children (ages 4-14) with cri-du-chat syndrome. Results found a discrepancy between the children's chronological ages and their presumed language ages and a receptive-expressive discrepancy, with reduced expressive skills compared to receptive skills. Remediation that focuses on…
Beal-Alvarez, Jennifer S.
This article presents receptive and expressive American Sign Language skills of 85 students, 6 through 22 years of age at a residential school for the deaf using the American Sign Language Receptive Skills Test and the Ozcaliskan Motion Stimuli. Results are presented by ages and indicate that students' receptive skills increased with age and…
Blom, Jan Dirk; Sommer, Iris E. C.
Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an
Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.
Objective: To characterize the impulse noise exposure and auditory risk for air rifle users, both youth and adults. Design: Acoustic characteristics were examined and auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results: None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, and 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and the pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion: To minimize auditory risk, youth should use air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model the appropriate hearing health behaviors necessary for recreational firearm use.
Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor
Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level between patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)
Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend to and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting of a given stream was roughly 10 dB higher when it was attended than when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces.
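The template-matching classification scheme described in the abstract above can be sketched as follows. The matching score (Pearson correlation) and the tiny synthetic "templates" are illustrative assumptions, not the authors' actual pipeline.

```python
def correlate(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def classify(trial, templates):
    """Return the index of the template the single-trial response matches best."""
    scores = [correlate(trial, t) for t in templates]
    return max(range(len(templates)), key=lambda i: scores[i])

# Toy example: per-stream response templates and one noisy single trial.
templates = [[0.0, 1.0, 0.0, -1.0],   # "attend stream A" template
             [1.0, 0.0, -1.0, 0.0]]   # "attend stream B" template
predicted = classify([0.1, 0.9, 0.05, -0.8], templates)  # best match: index 0
```

In a real pipeline the templates would be per-subject averaged ERPs and the trials band-passed, epoched EEG, but the decision rule (score each template, take the best match) is the same.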
Kaongoen, Netiwit; Jo, Sungho
Brain-computer interface (BCI) is a technology that provides an alternative way of communication by translating brain activities into digital commands. Because vision-dependent BCIs cannot be used by patients with visual impairment, auditory stimuli have been used to substitute for the conventional visual stimuli. This paper introduces a hybrid auditory BCI that utilizes and combines the auditory steady-state response (ASSR) and a spatial-auditory P300 BCI to improve the performance of the auditory BCI system. The system works by simultaneously presenting auditory stimuli with different pitches and amplitude modulation (AM) frequencies to the user, with beep sounds occurring randomly between all sound sources. Attention to different auditory stimuli yields different ASSRs, and beep sounds trigger the P300 response when they occur in the target channel, so the system can utilize both features for classification. The proposed ASSR/P300-hybrid auditory BCI system achieves 85.33% accuracy with a 9.11 bits/min information transfer rate (ITR) in a binary classification problem. The proposed system outperformed the P300 BCI system (74.58% accuracy with 4.18 bits/min ITR) and the ASSR BCI system (66.68% accuracy with 2.01 bits/min ITR) in the binary-class problem. The system is completely vision-independent. This work demonstrates that combining ASSR and P300 BCI into a hybrid system can result in better performance and could help in the development of future auditory BCIs. Copyright © 2017 Elsevier B.V. All rights reserved.
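The bits/min figures quoted in the abstract above are information transfer rates. A common way to compute bits per selection from classifier accuracy is the Wolpaw formula; the sketch below is a minimal illustration under that assumption (the abstract does not state which ITR definition was used, and the function name is ours).

```python
import math

def wolpaw_itr_bits_per_selection(n_classes, accuracy):
    """Wolpaw ITR in bits per selection for an n-class classifier."""
    if accuracy <= 1.0 / n_classes:
        return 0.0  # at or below chance, no information transferred
    bits = math.log2(n_classes) + accuracy * math.log2(accuracy)
    if accuracy < 1.0:
        bits += (1 - accuracy) * math.log2((1 - accuracy) / (n_classes - 1))
    return bits

# Binary classification at the accuracy reported for the hybrid system:
bits = wolpaw_itr_bits_per_selection(2, 0.8533)  # ~0.40 bits per selection
# Bits/min would follow by multiplying by the selection rate
# (selections per minute), which the abstract does not state.
```

Note that ITR grows with both accuracy and the number of classes, which is why a modest accuracy gain can translate into a large bits/min improvement.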
Almudena Fernández Fontecha
Content and Language Integrated Learning (CLIL) is a widely researched approach to foreign language learning and teaching. One of the pillars of CLIL is the concept of motivation. Some studies have focused on exploring motivation within CLIL; however, there has not been much discussion about the connection between motivation, or other affective factors, and each component of foreign language learning. Hence, given two groups of learners with the same hours of EFL instruction, the main objective of this research is to determine whether there exists any kind of interaction between the number of words learners know receptively and their motivation towards English as a Foreign Language (EFL). Most students in both groups were highly motivated. No relationship was identified between receptive vocabulary knowledge and general motivation for the secondary graders, but a positive significant relationship was found for the primary CLIL graders. Several reasons will be adduced.
Glick, Thomas F.
The subfield of Darwin studies devoted to comparative reception coalesced around 1971 with the planning of a conference on the subject, held at the University of Texas at Austin in April 1972. The original focus was western Europe, Russia and the United States. Subsequently a spate of studies on the Italian reception added to the Eurocentric focus. The center of activity then switched to Latin America, where a group of scholars coalesced in the mid 1990s, seemingly related to the somewhat earlier maturation of the history of science as a discrete discipline there. When interest in Europe revived during the last decade, the center of gravity had moved both eastward, to the former Soviet bloc countries (a reflection of the institutionalization of the history of science there), and north to Scandinavia. Recently, interest in the topic has emerged in the Islamic World. The subtext of the expansion of this topic is modernization.
The paper discusses the reception of the work of Jovan Cvijić in Slovenian ethnology. Cvijić is considered to be one of the founding fathers of Serbian ethnology, due in large part to his anthropogeographical orientation, which strongly marked ethnological research in Serbia until the second half of the 20th century. In Slovenian ethnology, the so-called anthropogeographical school is virtually unknown; however, some of its tenets can be recognized or were actively applied in research of cultural areas carried out by geographers and ethnographers before and after the Second World War, when anthropogeography was considered to be a branch of geography and a discipline akin to ethnography/ethnology. The author aims to discuss when, for whom and in what way Jovan Cvijić was a direct or indirect reference within the horizon of Slovenian ethnology. His reception is marked by acknowledgment of the powerful influence of his political views and engagement on his scholarship.
Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia
In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. Only when two or more stimuli are presented simultaneously to the left and right, contralesional extinction has been observed after unilateral lesions of the auditory cortex. Because auditory extinction is also considered a sign of neglect, clinical separation of auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory cortex lesions and neglect. Similar results were obtained for words lateralized by inter-aural time differences. Consistent extinction after auditory cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory cortex lesions. Lateralization of auditory-evoked magnetic fields in auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.
Li, Shu-Yun; Song, Zhuo; Song, Min-Jie; Qin, Jia-Wen; Zhao, Meng-Long; Yang, Zeng-Ming
Polycystic ovary syndrome (PCOS), a complex endocrine disorder, is a leading cause of female infertility. An obvious reason for infertility in PCOS women is anovulation. However, success rate with high quality embryos selected by assisted reproduction techniques in PCOS patients still remain low with a high rate of early clinical pregnancy loss, suggesting a problem in uterine receptivity. Using a dehydroepiandrosterone-induced mouse model of PCOS, some potential causes of decreased fertility...
Kuga, Nobuhiro; Arai, H; Goto, N
This paper presents a notch-wire composite antenna for polarization diversity reception in an indoor base-station system. A three-notched disk antenna and a wire antenna are proposed as component antennas for the horizontal and the vertical polarization, respectively. These component antennas are unified as a single composite diversity antenna by mounting the wire antenna on the notched disk. Antenna characteristics are calculated using the method of moments (MoM) with wire-grid models and ex...
Carter, B Elijah; Conn, Caitlin C; Wiles, Jason R
Due to a phenomenon known as the 'backfire effect', intuition-based opinions can be inadvertently strengthened by evidence-based counterarguments. Students' views on genetically modified organisms (GMOs) may be subject to this effect. We explored the impact of an empathetically accessible topic, world hunger, on receptivity to GMO technology as an alternative to direct evidence-based approaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
Interruptions have a profound impact on our attentional orientation in everyday life. Recent advances in mobile information technology increase the number of potentially disruptive notifications on mobile devices through an ever-growing availability of services. Understanding the contextual intricacies that make us receptive to these interruptions is paramount to devising technology that supports interruption management. This thesis makes a number of contributions to the methodology of studying ...
K-pop’s popularity and its participatory fan culture have expanded beyond Asia and become significant in Europe in the past few years. After South Korean pop singer Psy’s “Gangnam Style” music video topped the Austrian chart in October 2012, the number and size of K-pop events in Austria sharply increased, with fans organizing various participatory events, including K-pop auditions, dance festivals, club meetings, quiz competitions, dance workshops, and smaller fan-culture gatherings. In the private sector, longtime fans have transitioned from participants to providers, and in the public sector, from observers to sponsors. Through in-depth interviews with event organizers, sponsors, and fans, this article offers an ethnographic study of the reception of K-pop in Europe that takes into consideration local interactions between fans and Korean sponsors, perspectives on the genre, patterns of social integration, and histories. As a case study, this research stresses the local situatedness of K-pop fan culture by arguing that local private and public sponsors and fans make the reception of K-pop different in each locality. By exploring local scenes of K-pop reception and fan culture, the article demonstrates the rapidly growing consumption of K-pop among Europeans and stresses multidirectional understandings of globalization.
OBJECTIVE: To investigate the effects of receptive music therapy in clinical practice. METHODS: Receptive music therapy was individually applied via musical auditions, comprising five stages: musical stimulation, sensation, situation, reflection, and behavioral alteration. Following anamnesis and obtainment of consent, patients answered a first questionnaire on health risk evaluation (Q1) and, after participating in 16 weekly music therapy sessions, answered a second one (Q2). RESULTS: Two men and eight women, aged over 18 years, referred to us due to symptoms of stress, emotional suffering, and the need to change lifestyles (health risk behavior), were studied between August 1998 and December 1999. Comparison between answers to Q1 and Q2 showed a trend (P=0.059) toward reduced ingestion of cholesterol-rich foods and improved prospects in life, as well as increased intake of fiber-rich food (55.6%), increased levels of personal satisfaction (44.5%), and decreased levels of stress (66.7%). CONCLUSION: The study demonstrated decreased stress levels and increased personal satisfaction, higher consumption of fiber-rich food, lower cholesterol intake, and a better perspective on life, suggesting that receptive music therapy may be applied in clinical practice as an auxiliary therapeutic intervention for the treatment of behavioral health risks.
Rosslau, Ken; Steinwede, Daniel; Schröder, C; Herholz, Sibylle C; Lappe, Claudia; Dobel, Christian; Altenmüller, Eckart
There is a long tradition of investigating various disorders of musical abilities after stroke. These impairments, associated with acquired amusia, can be highly selective, affecting only music perception (i.e., receptive abilities/functions) or expression (music production abilities), and some patients report that these may dramatically influence their emotional state. The aim of this study was to systematically test both the melodic and rhythmic domains of music perception and expression in left- and right-sided stroke patients compared to healthy subjects. Music perception was assessed using rhythmic and melodic discrimination tasks, while tests of expressive function involved the vocal or instrumental reproduction of rhythms and melodies. Our approach revealed deficits in receptive and expressive functions in stroke patients, mediated by musical expertise. Those patients who had experienced a short period of musical training in childhood and adolescence performed better in the receptive and expressive subtests compared to those without any previous musical training. While discrimination of specific musical patterns was unimpaired after a left-sided stroke, patients with a right-sided stroke had worse results for fine melodic and rhythmic analysis. In terms of expressive testing, the most consistent results were obtained from a test that required patients to reproduce sung melodies. This implies that the means of investigating production abilities can impact the identification of deficits.
Reading is an aesthetically receptive, appreciative process that emphasizes critical-creative reading activities. Metacognitively, students understand, respond to, and explore the author's ideas in the text; they respond to, criticize, and evaluate those ideas. At this stage, students can reconstruct the text they have read into other forms (a new text). This strategy equips students to understand the meaning of a story, explore its ideas, respond critically, and creatively recast the story's ideas. Aesthetically receptive, critical-creative reading strategies engage the cognitive, affective, and psychomotor domains, fostering literacy in critical reading and creative writing. Appreciative reading is part of reading comprehension; it involves the sensitivity and ability to process reading both aesthetically-receptively and critically-creatively, as readers let their imagination roam with the author to obtain meaningful understanding and experience of reading. Several expert models of reading comprehension cover steps before, during, and after reading, with the after-reading stage activating students' thinking abilities. Activities at this stage include, for example, examining the backstory, retelling, making drawings, diagrams, or concept maps of the reading, and making a road map that describes the events. Another activity is to transform students' narrative texts through reinforcement into illustrated stories, for example in comic-book form (transliteration).
Epler, Amee J.; Sher, Kenneth J.; Loomis, Tiffany B.; O'Malley, Stephanie S.
Objective: Heavy episodic drinking remains a significant problem on college campuses. Although most interventions for college students are behavioral, pharmacological treatments, such as naltrexone, could provide additional options. Participants: The authors evaluated receptivity to various alcohol treatment options in a general population of college student drinkers (N = 2,084), assessed in 2005. Methods: The authors asked participants to indicate which of 8 treatment options (i.e., self-help book, self-help computer program, self-help group, group therapy, individual therapy, monthly injection, targeted oral medication, or daily oral medication) they would be willing to consider if they were going to cut down on or stop drinking. Results: Over 50% of drinkers expressed receptiveness to self-help or psychotherapy options, and over 25% expressed receptiveness to medication options. Conclusions: Offering pharmacological interventions such as naltrexone to students interested in reducing or stopping drinking could address an important unmet need among college students. PMID:19592350
Nalom, Ana Flávia de Oliveira; Soares, Aparecido José Couto; Cárnio, Maria Silvia
To characterize the performance of students from the 5th year of primary school, with and without indicatives of reading and writing disorders, in receptive vocabulary and reading comprehension of sentences and texts, and to verify possible correlations between them. This study was approved by the Research Ethics Committee of the institution (no. 098/13). Fifty-two 5th-year students from two public schools, with and without indicatives of reading and writing disorders, participated in this study. After signing the informed consent and undergoing a speech therapy assessment for the application of inclusion criteria, the students were submitted to a standardized test of receptive vocabulary and reading comprehension. The data were analyzed using the Kruskal-Wallis test, analysis of variance techniques, and Spearman's rank correlation coefficient, with the level of significance set at 0.05. A receiver operating characteristic (ROC) curve was constructed in which reading comprehension was considered the gold standard. The students without indicatives of reading and writing disorders presented better performance in all tests. No significant correlation was found between the tests that evaluated reading comprehension in either group. A correlation was found between reading comprehension of texts and receptive vocabulary in the group without indicatives. In the absence of indicatives of reading and writing disorders, a good range of vocabulary contributes strongly to proficient reading comprehension of texts.
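An ROC analysis of the kind described above sweeps a threshold over a continuous score against a binary gold standard. A minimal hand-rolled sketch; the labels and scores below are illustrative stand-ins, not the study's data, with proficient comprehension (1/0) as the gold standard and a vocabulary-style score as the predictor:

```python
import numpy as np

def roc_curve(labels, scores):
    """ROC points (FPR, TPR) swept over descending score thresholds,
    with a leading (0, 0) point."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    tpr = np.concatenate([[0.0], np.cumsum(y) / y.sum()])
    fpr = np.concatenate([[0.0], np.cumsum(1 - y) / (len(y) - y.sum())])
    return fpr, tpr

def auc(fpr, tpr):
    """Area under the ROC curve by the trapezoid rule."""
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Illustrative data only: 1 = proficient comprehension, score = vocabulary test
labels = [1, 1, 1, 0, 1, 0, 0, 0]
scores = [0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20]
fpr, tpr = roc_curve(labels, scores)
print(auc(fpr, tpr))   # 0.9375
```

An AUC near 1 would indicate that the vocabulary score separates proficient from non-proficient comprehenders well; 0.5 would indicate chance-level separation.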
Prime, Heather; Pauker, Sharon; Plamondon, André; Perlman, Michal; Jenkins, Jennifer
The aim of the current study was to examine the relationship between sibship size and children's vocabulary as a function of quality of sibling interactions. It was hypothesized that coming from a larger sibship (i.e., 3+ children) would be related to lower receptive vocabulary in children. However, we expected this association to be moderated by the level of cognitive sensitivity shown by children's next-in-age older siblings. Data on 385 children (mean age = 3.15 years) and their next-in-age older siblings (mean age = 5.57 years) were collected and included demographic questionnaires, direct testing of children's receptive vocabulary, and videos of mother-child and sibling interactions. Sibling dyads were taped engaging in a cooperative building task, and tapes were coded for the amount of cognitive sensitivity the older sibling exhibited toward the younger sibling. Hierarchical regression analyses showed an interaction between sibship size and sibling cognitive sensitivity in the prediction of children's receptive vocabulary; children exposed to large sibships whose next-in-age older sibling exhibited higher levels of cognitive sensitivity were less likely to show low vocabulary skills compared with those children exposed to large sibships whose siblings showed lower levels of cognitive sensitivity. Children who show sensitivity to the cognitive needs of their younger siblings provide a rich environment for language development. The negative impact of large sibships on language development is moderated by the presence of an older sibling who shows high cognitive sensitivity.
King, A J
The experiments described in this review have demonstrated that the SC contains a two-dimensional map of auditory space, which is synthesized within the brain using a combination of monaural and binaural localization cues. There is also an adaptive fusion of auditory and visual space in this midbrain nucleus, providing common access to the motor pathways that control orientation behaviour. This necessitates a highly plastic relationship between the visual and auditory systems, both during postnatal development and in adult life. Because of the independent mobility of the different sense organs, gating mechanisms are incorporated into the auditory representation to provide up-to-date information about the spatial orientation of the eyes and ears. The SC therefore provides a valuable model system for studying a number of important issues in brain function, including the neural coding of sound location, the co-ordination of spatial information between different sensory systems, and the integration of sensory signals with motor outputs.
Parkinson, Carolyn; Kohler, Peter Jes; Sievers, Beau; Wheatley, Thalia
Associations between auditory pitch and visual elevation are widespread in many languages, and behavioral associations have been extensively documented between height and pitch among speakers of those languages. However, it remains unclear whether perceptual correspondences between auditory pitch and visual elevation inform these linguistic associations, or merely reflect them. We probed this cross-modal mapping in members of a remote Kreung hill tribe in northeastern Cambodia who do not use spatial language to describe pitch. Participants viewed shapes rising or falling in space while hearing sounds either rising or falling in pitch, and reported on the auditory change. Associations between pitch and vertical position in the Kreung were similar to those demonstrated in populations where pitch is described in terms of spatial height. These results suggest that associations between visual elevation and auditory pitch can arise independently of language. Thus, widespread linguistic associations between pitch and elevation may reflect universally predisposed perceptual correspondences.
Zmigrod, Sharon; Hommel, Bernhard
The features of perceived objects are processed in distinct neural pathways, which calls for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken together, the findings are consistent with the idea of episodic event files integrating perception and action plans.
Skoe, E; Krizman, J; Spitzer, E; Kraus, N
To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
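In statistical-learning designs of this kind, the patterned and pseudo-randomized streams contain the same sounds but differ in their sound-to-sound transition probabilities: transitions inside a repeating triplet are (near-)deterministic, while a shuffled stream flattens them toward chance. A toy sketch of estimating those probabilities from bigram counts; the stimulus labels and triplet inventory are illustrative assumptions, not the study's stimuli:

```python
import random
from collections import Counter

def transition_probs(seq):
    """Estimate P(next | current) from bigram counts in a sequence."""
    pair_counts = Counter(zip(seq, seq[1:]))
    first_counts = Counter(seq[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Patterned stream: a concatenation of fixed triplets, so within-triplet
# transition probability is 1.0
triplets = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]
random.seed(0)
patterned = [s for _ in range(200) for s in random.choice(triplets)]
shuffled = random.sample(patterned, len(patterned))  # same sounds, flat statistics

p = transition_probs(patterned)
p_shuf = transition_probs(shuffled)
# Within-triplet transition: 1.0 when patterned, near chance when shuffled
print(p[("A", "B")], round(p_shuf.get(("A", "B"), 0.0), 2))
```

Learners (and, per the abstract, brainstem responses) are probed on exactly this contrast between high- and low-probability transitions.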
Frey, Aline; Aramaki, Mitsuko; Besson, Mireille
Two experiments were conducted using both behavioral and event-related brain potential (ERP) methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word, and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated, or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task), and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in all four stimulus combinations, reaction times were longer and/or error rates higher, and the N400 component was larger, for ambiguous targets than for conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in the scalp topography and duration of the priming effects, possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building of virtual environments that need to convey meaning without words. Copyright © 2013 Elsevier Inc. All rights reserved.
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels of the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
Auditory dysfunction is a common clinical symptom that can profoundly affect the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.
Klein, Barrie P.; Harvey, Ben M.; Dumoulin, Serge O.
Voluntary spatial attention concentrates neural resources at the attended location. Here, we examined the effects of spatial attention on spatial position selectivity in humans. We measured population receptive fields (pRFs) using high-field functional MRI (fMRI) (7T) while subjects performed an
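Although this record is truncated, pRF mapping conventionally models a voxel's spatial selectivity as a two-dimensional Gaussian over visual space whose overlap with the stimulus aperture predicts the response. A minimal sketch under that standard assumption; the grid, bar stimulus, and pRF parameters are illustrative, not the study's:

```python
import numpy as np

def prf_response(x0, y0, sigma, stimulus, xs, ys):
    """Predicted response: normalized overlap of a 2-D Gaussian pRF,
    centered at (x0, y0) with width sigma, with a binary stimulus
    aperture sampled on the grid (xs, ys)."""
    gx, gy = np.meshgrid(xs, ys)
    prf = np.exp(-((gx - x0) ** 2 + (gy - y0) ** 2) / (2 * sigma ** 2))
    return float((prf * stimulus).sum() / prf.sum())

xs = ys = np.linspace(-10, 10, 101)                       # degrees of visual angle
bar = (np.abs(np.meshgrid(xs, ys)[0]) < 1).astype(float)  # vertical bar near x = 0
resp_on = prf_response(0.0, 0.0, 2.0, bar, xs, ys)   # pRF under the bar
resp_off = prf_response(6.0, 0.0, 2.0, bar, xs, ys)  # pRF away from the bar
print(resp_on > resp_off)   # True
```

Fitting a pRF amounts to searching over (x0, y0, sigma) for the parameters whose predicted time course, after convolution with a hemodynamic response function, best matches the measured fMRI signal.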
Ted W Cranford
Hearing mechanisms in baleen whales (Mysticeti) are essentially unknown, but their vocalization frequencies overlap with anthropogenic sound sources. Synthetic audiograms were generated for a fin whale by applying finite element modeling tools to X-ray computed tomography (CT) scans. We CT scanned the head of a small fin whale (Balaenoptera physalus) in a scanner designed for solid-fuel rocket motors. Our computer (finite element) modeling toolkit allowed us to visualize what occurs when sounds interact with the anatomic geometry of the whale's head. Simulations reveal two mechanisms that excite both bony ear complexes: (1) a skull-vibration enabled bone conduction mechanism and (2) a pressure mechanism transmitted through soft tissues. Bone conduction is the predominant mechanism. The mass density of the bony ear complexes and their firmly embedded attachments to the skull are universal across the Mysticeti, suggesting that sound reception mechanisms are similar in all baleen whales. Interactions between incident sound waves and the skull cause deformations that induce motion in each bony ear complex, resulting in best hearing sensitivity for low-frequency sounds. This predominant low-frequency sensitivity has significant implications for assessing mysticete exposure levels to anthropogenic sounds. The din of man-made ocean noise has increased steadily over the past half century. Our results provide valuable data for U.S. regulatory agencies and concerned large-scale industrial users of the ocean environment. This study transforms our understanding of baleen whale hearing and provides a means to predict auditory sensitivity across a broad spectrum of sound frequencies.
Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta
Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the subjects experienced a hallucination to be depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.
and Piercy, M. (1973). Defects of non-verbal auditory perception in children with developmental aphasia. Nature (London), 241, 468-469. Watson, C.S... Hearing and Communication Laboratory, Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405. Final Technical Report, Air Force Office of Scientific Research, AFOSR-84-0337, September 1, 1984 to August 31, 1987.
In this article, an account is given of the author's experience with auditory-based neuropsychology in a clinical, neurosurgical setting. The patients included in the studies are patients with traumatic or vascular brain lesions, patients undergoing brain surgery to alleviate symptoms of Parkinson's disease, or patients harbouring an intracranial arachnoid cyst affecting the temporal or the frontal lobe. The aims of these investigations were to collect information about the location of cognitive processes in the human brain, or to disclose dyscognition in patients with an arachnoid cyst. All the patients were tested with the dichotic listening (DL) technique. In addition, the cyst patients were subjected to a number of non-auditory, standard neuropsychological tests, such as the Benton Visual Retention Test, Street Gestalt Test, Stroop Test, and Trails Test A and B. The neuropsychological tests revealed that arachnoid cysts in general cause dyscognition that also includes auditory processes and, more importantly, that these cognitive deficits normalise after surgical removal of the cyst. These observations constitute strong evidence in favour of surgical decompression.
Schwartz, Marc S; Wilkinson, Eric P
Auditory brainstem implants (ABIs), which have previously been used to restore auditory perception to deaf patients with neurofibromatosis type 2 (NF2), are now being utilized in other situations, including treatment of congenitally deaf children with cochlear malformations or cochlear nerve deficiencies. Concurrent with this expansion of indications, the number of centers placing and expressing interest in placing ABIs has proliferated. Because ABI placement involves posterior fossa craniotomy in order to access the site of implantation on the cochlear nucleus complex of the brainstem and is not without significant risk, we aim to highlight issues important to developing and maintaining successful ABI programs that act in the best interests of patients. Especially with pediatric patients, the ultimate benefits of implantation will be known only after years of growth and development. These benefits have yet to be fully elucidated and continue to be an area of controversy. The limited number of publications in this area was reviewed. Disease processes, risk/benefit analyses, degrees of evidence, and U.S. Food and Drug Administration approvals differ among the various categories of patients in whom auditory brainstem implantation could be considered. We suggest sets of criteria necessary for the development of successful and sustainable ABI programs, including programs for NF2 patients, postlingually deafened adult non-NF2 patients, and congenitally deaf pediatric patients. Laryngoscope, 127:1909-1915, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako
The aim of this study was to investigate differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys with autistic spectrum disorder (nine with and nine without auditory hypersensitivity) and 12 age-matched controls. The group with hypersensitivity showed significantly more delayed M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in this region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.
Carrasco, Andres; Lomber, Stephen G
Sensory information is encoded by cortical neurons in the form of discharge timing and firing rate. These neuronal codes generate response patterns across cell assemblies that are crucial to various cognitive functions. Despite pivotal information about structural and cognitive factors involved in the generation of synchronous neuronal responses, such as stimulus context, attention, age, cortical depth, sensory experience, and receptive field properties, the influence of cortico-cortical connectivity on the emergence of neuronal response patterns is poorly understood. The present investigation assesses the role of cortico-cortical connectivity in the modulation of neuronal discharge synchrony across auditory cortex cell assemblies. Acute single-unit recording techniques in combination with reversible cooling deactivation procedures were used in the domestic cat (Felis catus). Recording electrodes were positioned across primary and non-primary auditory fields, and neuronal activity was measured before, during, and after synaptic deactivation of adjacent cortical regions in the presence of acoustic stimulation. Cross-correlation functions of simultaneously recorded units were generated, and changes in response synchrony levels across cooling conditions were measured. Data analyses revealed significant decreases in response time coincidences between cortical neurons during periods of cortical deactivation. Collectively, the results of the present investigation demonstrate that cortical neurons participate in the modulation of response synchrony levels across neuronal assemblies of primary and non-primary auditory fields. Copyright © 2013 Elsevier B.V. All rights reserved.
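The cross-correlation analysis of simultaneously recorded units described above can be sketched as follows; this is a minimal illustration with synthetic spike times, not the study's analysis pipeline, and the function name and parameters are invented for the example:

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_ms=1.0, window_ms=50.0):
    """Histogram of spike-time lags (unit B relative to unit A) within +/- window_ms.

    spikes_a, spikes_b: 1-D arrays of spike times in milliseconds.
    Returns (lags, counts): bin centers and coincidence counts per lag bin.
    """
    edges = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for t in spikes_a:
        diffs = spikes_b - t
        diffs = diffs[(diffs >= -window_ms) & (diffs < window_ms)]
        counts += np.histogram(diffs, bins=edges)[0]
    lags = edges[:-1] + bin_ms / 2
    return lags, counts

# Synthetic synchronous pair: unit B fires ~2 ms after each spike of unit A,
# so the correlogram peaks near a +2 ms lag.
a = np.arange(0, 1000, 20.0)
b = a + 2.0
lags, counts = cross_correlogram(a, b)
print(lags[np.argmax(counts)])  # peak near +2 ms
```

A cooling-induced loss of synchrony would appear as a flattening of this central peak relative to the pre-cooling condition.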
Engineer, Crystal T; Shetake, Jai A; Engineer, Navzer D; Vrana, Will A; Wolf, Jordan T; Kilgard, Michael P
Many individuals with language learning impairments exhibit temporal processing deficits and degraded neural responses to speech sounds. Auditory training can improve both the neural and behavioral deficits, though significant deficits remain. Recent evidence suggests that vagus nerve stimulation (VNS) paired with rehabilitative therapies enhances both cortical plasticity and recovery of normal function. We predicted that pairing VNS with rapid tone trains would enhance the primary auditory cortex (A1) response to unpaired novel speech sounds. VNS was paired with tone trains 300 times per day for 20 days in adult rats. Responses to isolated speech sounds, compressed speech sounds, word sequences, and compressed word sequences were recorded in A1 following the completion of VNS-tone train pairing. Pairing VNS with rapid tone trains resulted in stronger, faster, and more discriminable A1 responses to speech sounds presented at conversational rates. This study extends previous findings by documenting that VNS paired with rapid tone trains altered the neural response to novel unpaired speech sounds. Future studies are necessary to determine whether pairing VNS with appropriate auditory stimuli could potentially be used to improve both neural responses to speech sounds and speech perception in individuals with receptive language disorders. Copyright © 2017 Elsevier Inc. All rights reserved.
Reches, Amit; Gutfreund, Yoram
A common visual pathway in all amniotes is the tectofugal pathway connecting the optic tectum with the forebrain. The tectofugal pathway has been suggested to be involved in tasks such as orienting and attention, tasks that may benefit from integrating information across senses. Nevertheless, previous research has characterized the tectofugal pathway as strictly visual. Here we recorded from two stations along the tectofugal pathway of the barn owl: the thalamic nucleus rotundus (nRt) and the forebrain entopallium (E). We report that neurons in E and nRt respond to auditory stimuli as well as to visual stimuli. Visual tuning to the horizontal position of the stimulus and auditory tuning to the corresponding spatial cue (interaural time difference) were generally broad, covering a large portion of the contralateral space. Responses to spatiotemporally coinciding multisensory stimuli were mostly enhanced above the responses to the single modality stimuli, whereas spatially misaligned stimuli were not. Results from inactivation experiments suggest that the auditory responses in E are of tectal origin. These findings support the notion that the tectofugal pathway is involved in multisensory processing. In addition, the findings suggest that the ascending auditory information to the forebrain is not as bottlenecked through the auditory thalamus as previously thought.
Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli can alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry: while the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for a target identification task, and in the left visual field for a target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, detection or discrimination performance for the second target is generally lower than for the first; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved second-target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was thus observed in perceptual processing, depending on hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.
Vinish Agarwal; Saurabh Varshney; Sampan Singh Bist; Sanjiv Bhagat; Sarita Mishra; Vivek Jha
Auditory neuropathy (AN)/auditory dyssynchrony (AD) is an often missed and hence underdiagnosed condition in clinical practice. Auditory neuropathy is a condition in which patients, on audiologic evaluation, are found to have normal outer hair cell function and abnormal neural function at the level of the eighth nerve. These patients, on clinical testing, are found to have normal otoacoustic emissions, whereas auditory brainstem response audiometry reveals the absence of neural ...
Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter
A novel, spatially selective method to stimulate cranial nerves has been proposed: contact free stimulation with optical radiation. The radiation source is an infrared pulsed laser. The Case Report is the first report ever that shows that optical stimulation of the auditory nerve is possible in the human. The ethical approach to conduct any measurements or tests in humans requires efficacy and safety studies in animals, which have been conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum WHO I by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is possible in the human inner ear. The finding is an important step for translating results from animal experiments to human and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.
Relying on the mechanism of the bat's echolocation system, a bioinspired electronic device has been developed to investigate the cortical activity of mammals in response to auditory sensory stimuli. By means of implanted electrodes, acoustical information about the external environment, generated by a biomimetic system and converted into electrical signals, was delivered to anatomically selected structures of the auditory pathway. Electrocorticographic recordings showed that the cerebral response is highly dependent on the information carried by the ultrasounds and is frequency-locked to the signal repetition rate. Frequency analysis reveals that delta and beta rhythm content increases, suggesting that sensory information is successfully transferred and integrated. In addition, principal component analysis highlights how all the stimuli generate patterns of neural activity that can be clearly classified. The results show that the brain response is modulated by echo signal features, suggesting that spatial information sent by the biomimetic sonar is efficiently interpreted and encoded by the auditory system. Consequently, these results open new perspectives in artificial environmental perception, which could be used for developing new techniques useful in treating pathological conditions or influencing our perception of the surroundings.
Reser, D H; Fishman, Y I; Arezzo, J C; Steinschneider, M
The functional organization of primary auditory cortex in non-primates is generally modeled as a tonotopic gradient with an orthogonal representation of independently mapped binaural interaction columns along the isofrequency contours. Little information is available regarding the validity of this model in the primate brain, despite the importance of binaural cues for sound localization and auditory scene analysis. Binaural and monaural responses of A1 to pure tone stimulation were studied using auditory evoked potentials, current source density and multiunit activity. Key findings include: (i) differential distribution of binaural responses with respect to best frequency, such that 74% of the sites exhibiting binaural summation had best frequencies below 2000 Hz; (ii) the pattern of binaural responses was variable with respect to cortical depth, with binaural summation often observed in the supragranular laminae of sites showing binaural suppression in thalamorecipient laminae; and (iii) dissociation of binaural responses between the initial and sustained action potential firing of neuronal ensembles in A1. These data support earlier findings regarding the temporal and spatial complexity of responses in A1 in the awake state, and are inconsistent with a simple orthogonal arrangement of binaural interaction columns and best frequency in A1 of the awake primate.
Finneran, James J; Mulsow, Jason; Jones, Ryan; Houser, Dorian S; Accomando, Alyssa W; Ridgway, Sam H
The auditory brainstem response to a dolphin's own emitted biosonar click can be measured by averaging epochs of the instantaneous electroencephalogram (EEG) that are time-locked to the emitted click. In this study, averaged EEGs were measured using surface electrodes placed on the head in six different configurations while dolphins performed an echolocation task. Simultaneously, biosonar click emissions were measured using contact hydrophones on the melon and a hydrophone in the far field. The averaged EEGs revealed an electrophysiological potential (the pre-auditory wave, PAW) that preceded the production of each biosonar click. The largest PAW amplitudes occurred with the non-inverting electrode just right of the midline (the apparent side of biosonar click generation) and posterior of the blowhole. Although the source of the PAW is unknown, its temporal and spatial properties rule out an auditory source. The PAW may be a neural or myogenic potential associated with click production; however, it is not known whether muscles within the dolphin nasal system can be actuated at the high rates reported for dolphin click production, or whether sufficiently coordinated and fast motor endplates of nasal muscles exist to produce a PAW detectable with surface electrodes.
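The time-locked epoch averaging used to extract small evoked potentials such as the PAW from the ongoing EEG can be sketched roughly as below; the signal, sampling rate, and event times are synthetic, and the function is a simplified stand-in for the study's actual processing:

```python
import numpy as np

def timelocked_average(eeg, fs, event_times, pre_s=0.005, post_s=0.010):
    """Average EEG epochs around each event time.

    eeg: 1-D signal; fs: sampling rate (Hz); event_times: event onsets (s).
    Returns (t, avg): epoch time axis in ms and the averaged waveform.
    """
    pre, post = int(pre_s * fs), int(post_s * fs)
    epochs = []
    for et in event_times:
        i = int(round(et * fs))
        if i - pre >= 0 and i + post <= len(eeg):
            epochs.append(eeg[i - pre:i + post])
    avg = np.mean(epochs, axis=0)
    t = (np.arange(-pre, post) / fs) * 1000.0  # ms relative to event
    return t, avg

# Toy demo: a fixed deflection buried in noise at each event time.
# Averaging across events suppresses the noise and recovers the deflection.
fs = 10000
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1.0, fs)  # 1 s of noise
events = np.arange(0.05, 0.95, 0.01)
for et in events:
    eeg[int(round(et * fs))] += 5.0  # evoked deflection at each event
t, avg = timelocked_average(eeg, fs, events)
print(t[np.argmax(avg)])  # deflection recovered at 0 ms
```

In the study, the event times were the emitted biosonar clicks; a potential that consistently precedes the event then shows up at negative latencies in the averaged trace.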
Background: Receptive fields of retinal neural signals of different origin can be determined from extracellular microelectrode recordings at the inner retinal surface. However, locations and types of neural processes generating the different signal components are difficult to separate and identify. We here report epiretinal receptive fields (RFs) from simultaneously recorded spikes and local electroretinograms (LERGs) using a semi-chronic multi-electrode in vivo recording technique in cats. Broadband recordings were filtered to yield LERG and multi-unit as well as single-unit spike signals. RFs were calculated from responses to multifocal pseudo-random spatiotemporal visual stimuli registered at the retinal surface by a 7-electrode array. Results: LERGs exhibit spatially unimodal RFs always centered at the location of the electrode tip. Spike-RFs are either congruent with LERG-RFs (N = 26/61) or shifted distally (N = 35/61), but never proximally, with respect to the optic disk. LERG-RFs appear at shorter latencies (11.9 ms ± 0.5 ms, N = 18) than those of spikes (18.6 ms ± 0.4 ms, N = 53). Furthermore, OFF-center spike-RFs precede and have shorter response rise times than ON-center spike-RFs. Our results indicate that displaced spike-RFs result from action potentials of ganglion cell axons passing the recording electrode en route to the optic disk, while LERG-RFs are related to superimposed postsynaptic potentials of cells near the electrode tip. Conclusion: Besides contributing to the understanding of retinal function, we demonstrate the caveats that come with recordings from the retinal surface, i.e., the likelihood of recording from mixed sets of retinal neurons. Implications for the design of an epiretinal visual implant are discussed.
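Calculating receptive fields from responses to multifocal pseudo-random stimuli is typically done by reverse correlation (spike-triggered averaging). A minimal sketch with a simulated cell, not the authors' method, is:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pseudo-random binary stimulus on an 8x8 grid, 5000 frames
n_frames, grid = 5000, 8
stim = rng.choice([-1.0, 1.0], size=(n_frames, grid, grid))

# Simulated cell: fires when a 2x2 patch at rows 3-4, cols 4-5 is all
# bright, with a one-frame response latency
drive = stim[:, 3:5, 4:6].sum(axis=(1, 2))
spikes = np.zeros(n_frames)
spikes[1:] = (drive[:-1] > 2).astype(float)

# Spike-triggered average at the cell's latency = linear RF estimate:
# average the stimulus frame that preceded each spike
sta = (spikes[1:, None, None] * stim[:-1]).sum(axis=0) / spikes.sum()
peak = tuple(int(i) for i in np.unravel_index(np.argmax(sta), sta.shape))
print(peak)  # recovers a pixel of the simulated RF
```

The same averaging applied to a continuous signal like the LERG (instead of a binary spike train) yields the response-weighted average that produced the LERG-RFs above.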
Kottler, Sylvia B.
Procedures and sample activities are provided for both identifying and training children with auditory perception problems related to sound localization, sound discrimination, and sound sequencing. (KW)
.... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...
Ian T. Zajac
This study examined whether the broad ability general speediness (Gs) could be measured via the auditory modality. Existing and purpose-developed auditory tasks that maintained the cognitive requirements of established visually presented Gs markers were completed by 96 university undergraduates. Exploratory and confirmatory factor analyses showed that the auditory tasks combined with established visual measures to define latent Gs and reaction time factors. These findings provide preliminary evidence that if auditory tasks are developed that maintain the same cognitive requirements as existing visual measures, they are likely to index similar cognitive processes.
Pillai, Roshni; Yathiraj, Asha
The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
Julia A Mossbridge
Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.
Jackson, Carla Wood; Schatschneider, Christopher; Leacox, Lindsey
The authors of this study described developmental trajectories and predicted kindergarten performance of Spanish and English receptive vocabulary acquisition of young Latino/a English language learners (ELLs) from socioeconomically disadvantaged migrant families. In addition, the authors examined the extent to which gender and individual initial performance in Spanish predict receptive vocabulary performance and growth rate. The authors used hierarchical linear modeling of 64 children's receptive vocabulary performance to generate growth trajectories, predict performance at school entry, and examine potential predictors of rate of growth. The timing of testing varied across children. The ELLs (prekindergarten to 2nd grade) participated in 2-5 testing sessions, each 6-12 months apart. The ELLs' average predicted standard score on an English receptive vocabulary measure at kindergarten was nearly 2 SDs below the mean for monolingual peers. Significant growth in the ELLs' receptive vocabulary was observed between preschool and 2nd grade, indicating that the ELLs were slowly closing the receptive vocabulary gap, although their average score remained below the standard score mean for age-matched monolingual peers. The ELLs demonstrated a significant decrease in Spanish receptive vocabulary standard scores over time. Initial Spanish receptive vocabulary was a significant predictor of growth in English receptive vocabulary. High initial Spanish receptive vocabulary was associated with greater growth in English receptive vocabulary and decelerated growth in Spanish receptive vocabulary. Gender was not a significant predictor of growth in either English or Spanish receptive vocabulary. ELLs from low socioeconomic backgrounds may be expected to perform lower in English compared with their monolingual English peers in kindergarten. Performance in Spanish at school entry may be useful in identifying children who require more intensive instructional support for English vocabulary.
Ellen de Wit
Presentation at the CPLOL congress, Florence. In this systematic review, six electronic databases were searched for peer-reviewed studies using the key words auditory processing, auditory diseases, central [Mesh], and auditory perceptual. Two reviewers independently assessed relevant studies by inclusion
.... This fundamental process of auditory perception is called auditory scene analysis. of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation...
Schoof, Tim; Rosen, Stuart
Normal-hearing older adults often experience increased difficulties understanding speech in noise. In addition, they benefit less from amplitude fluctuations in the masker. These difficulties may be attributed to an age-related auditory temporal processing deficit. However, a decline in cognitive processing likely also plays an important role. This study examined the relative contribution of declines in both auditory and cognitive processing to the speech in noise performance in older adults. Participants included older (60-72 years) and younger (19-29 years) adults with normal hearing. Speech reception thresholds (SRTs) were measured for sentences in steady-state speech-shaped noise (SS), 10-Hz sinusoidally amplitude-modulated speech-shaped noise (AM), and two-talker babble. In addition, auditory temporal processing abilities were assessed by measuring thresholds for gap, amplitude-modulation, and frequency-modulation detection. Measures of processing speed, attention, working memory, Text Reception Threshold (a visual analog of the SRT), and reading ability were also obtained. Of primary interest was the extent to which the various measures correlate with listeners' abilities to perceive speech in noise. SRTs were significantly worse for older adults in the presence of two-talker babble but not SS and AM noise. In addition, older adults showed some cognitive processing declines (working memory and processing speed) although no declines in auditory temporal processing. However, working memory and processing speed did not correlate significantly with SRTs in babble. Despite declines in cognitive processing, normal-hearing older adults do not necessarily have problems understanding speech in noise as SRTs in SS and AM noise did not differ significantly between the two groups. Moreover, while older adults had higher SRTs in two-talker babble, this could not be explained by age-related cognitive declines in working memory or processing speed.
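Speech reception thresholds of the kind measured in this study are commonly obtained with an adaptive one-up/one-down staircase that converges on the 50%-correct signal-to-noise ratio. The sketch below simulates such a track with an idealized listener; all parameters and the listener model are illustrative assumptions, not taken from the study:

```python
import random

def srt_staircase(true_srt_db, start_db=0.0, step_db=2.0, n_trials=30, seed=3):
    """Adaptive 1-up/1-down track converging on the 50%-correct SNR.

    Simulated listener: responds correctly when the presented SNR exceeds
    true_srt_db plus a small amount of trial-to-trial noise (a steep
    psychometric function; a real listener is more gradual).
    """
    rng = random.Random(seed)
    snr = start_db
    reversals, last_dir = [], 0
    for _ in range(n_trials):
        correct = snr > true_srt_db + rng.gauss(0, 0.5)
        direction = -1 if correct else 1  # harder after correct, easier after error
        if last_dir and direction != last_dir:
            reversals.append(snr)  # track direction changed: record a reversal
        last_dir = direction
        snr += direction * step_db
    # SRT estimate: mean SNR at the last few reversals
    return sum(reversals[-6:]) / len(reversals[-6:])

print(srt_staircase(true_srt_db=-5.0))  # estimate near -5 dB SNR
```

With sentence materials, "correct" typically means repeating a criterion number of key words; the staircase logic is unchanged.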
Skopal-Chase, Jessica L; Pixley, John S; Torabi, Alireza; Cenariu, Mihai C; Bhat, Anupama; Thain, David S; Frederick, Nicole M; Groza, Daria M; Zanjani, Esmail D
The biologic explanation for fetal receptivity to donor engraftment and subsequent long-term tolerance following transplantation early in gestation is not known. We investigated the role fetal immune ontogeny might play in fetal transplantation tolerance in sheep. Engraftment of allogeneic and xenogeneic HSC was determined 60 days following transplantation at different time points in sheep fetal gestation. Parallel analysis of surface differentiation antigen expression on cells from lymphoid organs of timed gestational age fetal sheep was determined by flow cytometry using available reagents. An engraftment window was identified after day 52 gestation lasting until day 71 (term gestation: 145 days). This period was associated with the expression of the leukocyte common antigen CD45 on all cells in the thymus. Double-positive and single-positive CD4 and CD8 cells began appearing in the thymus just prior (day 45 gestation) to the beginning of the engraftment window, while single-positive CD4 or CD8 cells do not begin appearing in peripheral organs until late in the engraftment period, suggesting deletional mechanisms may be operative. In concert, surface IgM-positive cells express CD45 in the thymus at day 45, with a comparable delay in the appearance of IgM/CD45 cells in the periphery until late in the engraftment window. These findings support a central role for the thymus in multilineage immune cell maturation during the period of fetal transplantation receptivity. Further, they suggest that fetal engraftment receptivity is due to gestational age-dependent deletional tolerance. (c) 2009 S. Karger AG, Basel.
Paul E Micevych
Estradiol has profound actions on the structure and function of the nervous system. In addition to nuclear actions that directly modulate gene expression, the idea that estradiol can rapidly activate cell signaling by binding to membrane estrogen receptors (mERs) has emerged. Even the regulation of sexual receptivity, an action previously thought to be completely regulated by nuclear ERs, has been shown to have a membrane-initiated estradiol signaling (MIES) component. This highlighted the question of the nature of mERs. Several candidates have been proposed: ERα, ERβ, ER-X, GPR30 (G protein-coupled estrogen receptor; GPER), and a receptor activated by a diphenylacrylamide compound, STX. Although each of these receptors has been shown to be active in specific assays, we present evidence for and against their participation in sexual receptivity by acting in the lordosis-regulating circuit. The initial MIES that activates the circuit is in the arcuate nucleus of the hypothalamus (ARH). Using both activation of μ-opioid receptors (MOR) in the medial preoptic nucleus and lordosis behavior, we document that both ERα and the STX receptor participate in the required MIES. ERα and STX receptor activation of cell signaling is dependent on the transactivation of type 1 metabotropic glutamate receptors (mGluR1a), which augment progesterone synthesis in astrocytes and protein kinase C (PKC) in ARH neurons. While estradiol-induced sexual receptivity does not depend on neuroprogesterone, proceptive behaviors do. Moreover, ERα and STX receptor activation of medial preoptic MORs and augmentation of lordosis were sensitive to mGluR1a blockade. These observations suggest a common mechanism through which mERs are coupled to intracellular signaling cascades, not just in regulating reproduction, but in actions throughout the neuraxis, including the cortex, hippocampus, striatum, and DRGs.
Daliri, Ayoub; Max, Ludo
Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
Jefferson Cleiton de Souza
The aim of this paper is to discuss how Hans Robert Jauss, the creator of the aesthetics of reception, introduced the category of the reader into literary studies, especially when it comes to the importance of the reader to the understanding of the text and to the history of a society and its literary system or, in other words, to the way the formal elements of a literary work are organized and how they are related to aesthetic, ethical and moral evaluations. To do so, it is necessary to...
In this essay, a mediated video game reception of the game Life Is Strange is carried out with the purpose of examining the players' meaning-making processes from a gender perspective. The material of this essay consists of videos from six different YouTube channels where each player films themselves whilst playing through Life Is Strange as a way to review and share the gaming experience. The results show how the meaning-making processes are littered with gender discourses and affects. The affects o...
This podcast is an overview of resources from the Clinician Outreach and Communication Activity (COCA) Call: Practical Tools for Radiation Emergency Preparedness. A specialist working with CDC's Radiation Studies Branch describes a web-based training tool known as a Virtual Community Reception Center (vCRC). Created: 2/28/2011 by National Center for Environmental Health (NCEH) Radiation Studies Branch and Emergency Risk Communication Branch (ERCB)/Joint Information Center (JIC); Office of Public Health Preparedness and Response (OPHPR). Date Released: 2/28/2011.
Bech, Miklós; Homberg, Uwe; Pfeiffer, Keram
Many animals, including insects, are able to use celestial cues as a reference for spatial orientation and long-distance navigation. In addition to direct sunlight, the chromatic gradient of the sky and its polarization pattern are suited to serve as orientation cues [2-5]. Atmospheric scattering of sunlight causes a regular pattern of E vectors in the sky, which are arranged along concentric circles around the sun [5, 6]. Although certain insects rely predominantly on sky polarization for spatial orientation, it has been argued that detection of celestial E vector orientation may not suffice to differentiate between solar and antisolar directions [8, 9]. We show here that polarization-sensitive (POL) neurons in the brain of the desert locust Schistocerca gregaria can overcome this ambiguity. Extracellular recordings from POL units in the central complex and lateral accessory lobes revealed E vector tunings arranged in concentric circles within large receptive fields, matching the sky polarization pattern at certain solar positions. Modeling of neuronal responses under an idealized sky polarization pattern (Rayleigh sky) suggests that these "matched filter" properties allow locusts to unambiguously determine the solar azimuth by relying solely on the sky polarization pattern for compass navigation. Copyright © 2014 Elsevier Ltd. All rights reserved.
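The matched-filter idea in this record can be sketched numerically: under an idealized Rayleigh sky, the E vector at each sky point is perpendicular to both the viewing direction and the sun direction, so a population of E-vector-tuned units can vote for the candidate solar azimuth whose predicted pattern best matches the observed one. The sketch below is illustrative only (the grid of views, the candidate set, and all function names are assumptions, not the authors' model):

```python
import numpy as np

def sky_dir(az, el):
    """Unit vector for a sky point at azimuth az, elevation el (radians)."""
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def e_vector(view, sun):
    """Rayleigh-sky E vector at 'view': tangent to the concentric circle
    around the sun, i.e. perpendicular to both view and sun directions."""
    e = np.cross(view, sun)
    return e / np.linalg.norm(e)

def estimate_solar_azimuth(views, observed_e, candidates, sun_el):
    """Matched filter: pick the candidate azimuth whose predicted E-vector
    pattern best matches the observations (E vectors are axial, so the
    comparison uses |dot| and ignores sign)."""
    def score(az):
        sun = sky_dir(az, sun_el)
        return sum(abs(np.dot(e_vector(v, sun), e_obs))
                   for v, e_obs in zip(views, observed_e))
    return max(candidates, key=score)

# Simulate a sky with the sun at azimuth 40 deg, elevation 30 deg.
true_az, sun_el = np.deg2rad(40.0), np.deg2rad(30.0)
sun = sky_dir(true_az, sun_el)
views = [sky_dir(a, e) for a in np.linspace(0, 2 * np.pi, 12, endpoint=False)
         for e in np.deg2rad([20.0, 50.0, 70.0])]
observed = [e_vector(v, sun) for v in views]

candidates = np.deg2rad(np.arange(0.0, 360.0, 5.0))
best = estimate_solar_azimuth(views, observed, candidates, sun_el)
print(round(float(np.rad2deg(best))))  # -> 40, the true solar azimuth
```

Because the comparison spans a whole receptive field of differently oriented E vectors, only the true solar azimuth matches everywhere at once, which is how the pattern-level matched filter escapes the single-E-vector solar/antisolar ambiguity.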
Föcker, J.; Hötting, K.; Gondan, Matthias
…spatial attention is based on modality-specific or supramodal representations of space. Auditory and visual stimuli were presented from five speaker locations positioned in the right hemifield. Participants had to attend to the innermost or outermost right position in order to detect either visual...
London, Sam; Bishop, Christopher W.; Miller, Lee M.
Communication and navigation in real environments rely heavily on the ability to distinguish objects in acoustic space. However, auditory spatial information is often corrupted by conflicting cues and noise such as acoustic reflections. Fortunately the brain can apply mechanisms at multiple levels to emphasize target information and mitigate such…
The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D direction digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating it by exploiting the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
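The digital steering described in this record can be illustrated with a minimal narrowband phase-shift (delay-and-sum) beamformer: weights that cancel the plane-wave phase at each microphone sum the target direction coherently. The random array geometry, frequency, and function names below are illustrative assumptions; the actual work uses spherical-harmonic beampattern design, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative array: 32 mics scattered on a 5 cm sphere (random geometry).
n_mics = 32
pos = rng.normal(size=(n_mics, 3))
pos = 0.05 * pos / np.linalg.norm(pos, axis=1, keepdims=True)

def unit(az, el):
    return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

def steer(pos, direction, freq, c=343.0):
    """Phase-shift weights that align a plane wave arriving from 'direction'."""
    k = 2 * np.pi * freq / c
    return np.exp(-1j * k * (pos @ direction)) / len(pos)

def plane_wave(pos, direction, freq, c=343.0):
    """Single-frequency-bin mic signals for an incoming plane wave."""
    k = 2 * np.pi * freq / c
    return np.exp(1j * k * (pos @ direction))

freq = 2000.0
src = unit(np.deg2rad(60.0), np.deg2rad(10.0))
x = plane_wave(pos, src, freq)

on_axis = abs(steer(pos, src, freq) @ x)                          # coherent sum
off_axis = abs(steer(pos, unit(np.deg2rad(-120.0), 0.0), freq) @ x)
print(round(float(on_axis), 3))   # -> 1.0 when steered at the source
print(bool(on_axis > off_axis))   # gain drops when steered elsewhere
```

Because steering is purely a change of weights, the same captured signals can be re-steered to any direction after the fact, which is the property the record's "steered into any 3D direction digitally" refers to.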
Chen, Yu-Han; Edgar, J Christopher; Huang, Mingxiong; Hunter, Michael A; Epstein, Emerson; Howell, Breannan; Lu, Brett Y; Bustillo, Juan; Miller, Gregory A; Cañive, José M
Although magnetoencephalography (MEG) studies show superior temporal gyrus (STG) auditory processing abnormalities in schizophrenia at 50 and 100 ms, EEG and corticography studies suggest involvement of additional brain areas (e.g., frontal areas) during this interval. Study goals were to identify 30 to 130 ms auditory encoding processes in schizophrenia (SZ) and healthy controls (HC) and group differences throughout the cortex. The standard paired-click task was administered to 19 SZ and 21 HC subjects during MEG recording. Vector-based Spatial-temporal Analysis using L1-minimum-norm (VESTAL) provided 4D maps of activity from 30 to 130 ms. Within-group t-tests compared post-stimulus 50 ms and 100 ms activity to baseline. Between-group t-tests examined 50 and 100 ms group differences. Bilateral 50 and 100 ms STG activity was observed in both groups. HC had stronger bilateral 50 and 100 ms STG activity than SZ. In addition to the STG group difference, non-STG activity was also observed in both groups. For example, whereas HC had stronger left and right inferior frontal gyrus activity than SZ, SZ had stronger right superior frontal gyrus and left supramarginal gyrus activity than HC. Less STG activity was observed in SZ than HC, indicating encoding problems in SZ. Yet auditory encoding abnormalities are not specific to STG, as group differences were observed in frontal and SMG areas. Thus, present findings indicate that individuals with SZ show abnormalities in multiple nodes of a concurrently activated auditory network.
Anderson, David J
Microelectrode arrays offer the auditory systems physiologists many opportunities through a number of electrode technologies. In particular, silicon substrate electrode arrays offer a large design space including choice of layout plan, range of surface areas for active sites, a choice of site materials and high spatial resolution. Further, most designs can double as recording and stimulation electrodes in the same preparation. Scala tympani auditory prosthesis research has been aided by mapping electrodes in the cortex and the inferior colliculus to assess the CNS responses to peripheral stimulation. More recently silicon stimulation electrodes placed in the auditory nerve, cochlear nucleus and the inferior colliculus have advanced the exploration of alternative stimulation sites for auditory prostheses. Multiplication of results from experimental effort by simultaneously stimulating several locations, or by acquiring several streams of data synchronized to the same stimulation event, is a commonly sought after advantage. Examples of inherently multichannel functions which are not possible with single electrode sites include (1) current steering resulting in more focused stimulation, (2) improved signal-to-noise ratio (SNR) for recording when noise and/or neural signals appear on more than one site and (3) current source density (CSD) measurements. Still more powerful are methods that exploit closely-spaced recording and stimulation sites to improve detailed interrogation of the surrounding neural domain. Here, we discuss thin-film recording/stimulation arrays on silicon substrates. These electrode arrays have been shown to be valuable because of their precision coupled with reproducibility in an ever expanding design space. The shape of the electrode substrate can be customized to accommodate use in cortical, deep and peripheral neural structures while flexible cables, fluid delivery and novel coatings have been added to broaden their application. The use of
Choisel, Sylvain; Wickelmaier, Florian Maria
Sound reproduced by multichannel systems is affected by many factors giving rise to various sensations, or auditory attributes. Relating specific attributes to overall preference and to physical measures of the sound field provides valuable information for a better understanding of the parameters ... within and between musical program materials, allowing for a careful generalization regarding the perception of spatial audio reproduction. Finally, a set of objective measures is derived from analysis of the sound field at the listening position in an attempt to predict the auditory attributes...
Gaucher, Quentin; Edeline, Jean-Marc
Many studies have described the action of Noradrenaline (NA) on the properties of cortical receptive fields, but none has assessed how NA affects the discrimination abilities of cortical cells between natural stimuli. In the present study, we compared the consequences of NA topical application on spectro-temporal receptive fields (STRFs) and responses to communication sounds in the primary auditory cortex. NA application reduced the STRFs (an effect replicated by the alpha1 agonist Phenylephrine) but did not change, on average, the responses to communication sounds. For cells exhibiting increased evoked responses during NA application, the discrimination abilities were enhanced as quantified by Mutual Information. The changes induced by NA on parameters extracted from the STRFs and from responses to communication sounds were not related. The alterations exerted by neuromodulators on neuronal selectivity have been the topic of a vast literature in the visual, somatosensory, auditory and olfactory cortices. However, very few studies have investigated to what extent the effects observed when testing these functional properties with artificial stimuli can be transferred to responses evoked by natural stimuli. Here, we tested the effect of noradrenaline (NA) application on the responses to pure tones and communication sounds in the guinea-pig primary auditory cortex. When pure tones were used to assess the spectro-temporal receptive field (STRF) of cortical cells, NA triggered a transient reduction of the STRFs in both the spectral and the temporal domain, an effect replicated by the α1 agonist phenylephrine whereas α2 and β agonists induced STRF expansion. When tested with communication sounds, NA application did not produce significant effects on the firing rate and spike timing reliability, despite the fact that α1, α2 and β agonists by themselves had significant effects on these measures. However, the cells whose evoked responses were increased by NA
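The record above quantifies discrimination ability with Mutual Information between stimulus and response. A generic plug-in MI estimator over a stimulus-by-response count table illustrates the idea (this is a textbook estimator with made-up counts, not the study's actual analysis pipeline):

```python
import numpy as np

def mutual_information(joint_counts):
    """Plug-in mutual information estimate (bits) from a
    stimulus x response count table."""
    p = np.asarray(joint_counts, dtype=float)
    p /= p.sum()
    px = p.sum(axis=1, keepdims=True)   # stimulus marginal
    py = p.sum(axis=0, keepdims=True)   # response marginal
    nz = p > 0                          # skip zero cells (0 * log 0 = 0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# A cell that discriminates two communication sounds well vs. not at all:
sharp = [[45, 5], [5, 45]]    # responses mostly follow the stimulus
flat  = [[25, 25], [25, 25]]  # responses carry no stimulus information
print(round(mutual_information(sharp), 3))  # high MI, good discrimination
print(mutual_information(flat))             # 0.0, chance-level
```

On this view, the NA-induced "enhanced discrimination abilities" correspond to the joint table becoming more diagonal, raising MI even if overall firing rate is unchanged.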
Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether...
Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.
Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.
Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R.; Amitay, Sygal
Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris played in silence. Tetris play alone did not produce any auditory or cognitive benefits. However, when alternated with FD training it enhanced learning of FD and auditory working memory. The learning-enhancing effects of Tetris play cannot be explained simply by the visual-spatial training involved, as the effects were gone when Tetris play was replaced with another visual-spatial task using Tetris-like stimuli but not incorporated into a game environment. The results indicate that game play enhances learning and transfer of the contiguous auditory experiences, pointing to a promising approach for increasing the efficiency and applicability of rehabilitative training. PMID:28701989
Sloan, P R
The assimilation of Mendel's paper into Britain took place in an Edwardian social context. This paper concentrates on the interplay of empirical and philosophical issues in this reception. A feature of the British reception of Mendelism, not duplicated elsewhere, was the role of phenomenalist philosophies of science as developed by the physicist-mathematician and scientific methodologist Karl Pearson from the philosophical positions of Austrian physicist Ernst Mach and British mathematician William Clifford. Pearson's philosophy of science forms the background to his subsequent collaboration with the zoologist W.F.R. Weldon. In this collaborative work, Pearson developed powerful statistical techniques for analyzing Weldon's empirical data on organic variation. Pearson's statistical analysis of causation and his rejection of hidden entities and causes in the explanation of evolutionary change formed the philosophical component of this program. The arguments of Pearson and Weldon were first brought to bear against the pre-Mendel 'discontinuist' analyses of variation of William Bateson. The introduction of Mendel's paper into these empirical and methodological debates consequently resulted in mathematically sophisticated attacks on Mendel's claims by Pearson and Weldon. This paper summarizes this history and argues for the creative importance of this biometrical resistance to Mendelism.
Deneve, Sophie; Gutkin, Boris
In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics. PMID:28622330
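The "divide input signals by predictions" computation in this record is closely related to standard divisive normalization, whose gain-control property is easy to see in a few lines. The function name, pooling weights, and constants below are illustrative, not the authors' model:

```python
import numpy as np

def divisive_normalization(drive, sigma=1.0, weights=None):
    """Each unit's response is its driving input divided by a weighted sum
    of the population's inputs plus a semi-saturation constant sigma."""
    drive = np.asarray(drive, dtype=float)
    if weights is None:
        weights = np.ones_like(drive)   # uniform normalization pool
    pool = sigma + weights @ drive
    return drive / pool

# Gain control: scaling all inputs ("contrast") barely changes the relative
# pattern of responses once the pool term dominates sigma.
low  = divisive_normalization([1.0, 2.0, 3.0])    # pool = 1 + 6 = 7
high = divisive_normalization([10.0, 20.0, 30.0]) # pool = 1 + 60 = 61
print(np.round(low, 3))
print(np.round(high, 3))
```

Making the denominator a context-dependent prediction rather than a fixed pool, as the record proposes, is what turns this static normalization into the dynamic receptive-field reshaping the authors describe.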
Edwards, Luke; Tumin, Anatoli
The receptivity of high-speed compressible boundary layers to kinetic fluctuations (KF) is considered within the framework of fluctuating hydrodynamics. The formulation is based on the idea that KF-induced dissipative fluxes may lead to the generation of unstable modes in the boundary layer. Fedorov and Tumin solved the receptivity problem using an asymptotic matching approach which utilized a resonant inner solution in the vicinity of the generation point of the second Mack mode. Here we take a slightly more general approach based on a multiple scales WKB ansatz which requires fewer assumptions about the behavior of the stability spectrum. The approach is modeled after the one taken by Luchini to study low speed incompressible boundary layers over a swept wing. The new framework is used to study examples of high-enthalpy, flat plate boundary layers whose spectra exhibit nuanced behavior near the generation point, such as first mode instabilities and near-neutral evolution over moderate length scales. The configurations considered exhibit supersonic unstable second Mack modes despite the temperature ratio T_w/T_e > 1, contrary to prior expectations. Supported by AFOSR and ONR.
Baum, M J; Stockman, E R; Lundell, L A
The latencies of groups of gonadectomized male and female ferrets to approach and interact with a sexually active stimulus male were measured after administration of estradiol benzoate (EB; 0, 5, 10, or 15 micrograms/kg) in adulthood. Receptive responsiveness to stud males was also assessed in these same ferrets during additional tests. Control female ferrets gonadectomized on Postnatal Day 35 displayed a dose-dependent reduction in approach latencies to the stud male which did not occur in control males castrated on Day 35. The approach latencies of males castrated on Postnatal Day 20 or Day 5 were intermediate between these two extremes. Equivalent dose-dependent reductions in approach latencies were observed in groups of ferrets ovariectomized on Day 5 and implanted sc with Silastic capsules containing either no hormone or different doses of testosterone over Postnatal Days 5-20 or 20-35. Equivalent dose-dependent increments in acceptance quotients were obtained in all groups of male and female ferrets following EB treatment. These results suggest that the capacity to display the proceptive, or appetitive, components of feminine sexual behavior is normally reduced in male ferrets as a consequence of the perinatal action of testicular hormones whereas receptive behavioral capacity is retained in males of this species.
Robert, D.; Göpfert, M. C.
Evidence is presented that hearing in some insects is an active process. Audition in mosquitoes is used for mate-detection and is supported by antennal receivers, whose sound-induced vibrations are transduced by Johnston's organs. Each of these sensory organs contains ca. 15,000 sensory neurons. As shown by mechanical analysis, a physiologically vulnerable mechanism is at work that nonlinearly enhances the sensitivity and frequency selectivity of antennal hearing. This process of amplification correlates with the electrical activity of the auditory mechanoreceptor units in Johnston's organ.
Losada Rodríguez, Juan Manuel; Herrero Romero, María
Background and Aims: Stigmatic receptivity plays a clear role in pollination dynamics; however, little is known about the factors that confer to a stigma the competence to be receptive for the germination of pollen grains. In this work, a developmental approach is used to evaluate the acquisition of stigmatic receptivity and its relationship with a possible change in arabinogalactan-proteins (AGPs). Methods: Flowers of the domestic apple, Malus × domestica, were assessed for their capacity ...
This project report focuses on planning a traditional wedding reception for a couple interested in having a Western reception with an African traditional touch, that is, a Western-African wedding reception. They are both Christians and have invited their church members as well as their family members to be part of their great celebration day. Since the couple and I are members of the same church, they heard of my established business through word of mouth and assigned my wedding p...
Kover, Sara T.; McDuffie, Andrea S.; Hagerman, Randi J.; Abbeduto, Leonard
In light of evidence that receptive language may be a relative weakness for individuals with autism spectrum disorder (ASD), this study characterized receptive vocabulary profiles in boys with ASD using cross-sectional developmental trajectories relative to age, nonverbal cognition, and expressive vocabulary. Participants were 49 boys with ASD (4–11 years) and 80 typically developing boys (2–11 years). Receptive vocabulary, assessed with the Peabody Picture Vocabulary Test, was a weakness for...
Wright, C. Wayne
A system that includes a Global Positioning System (GPS) antenna and associated apparatus for keeping the antenna aimed upward has been developed for use aboard a remote-sensing-survey airplane. The purpose served by the system is to enable minimum-cycle-slip reception of GPS signals used in precise computation of the trajectory of the airplane, without having to restrict the airplane to maneuvers that increase the flight time needed to perform a survey. Cycle slip signifies loss of continuous track of the phase of a signal. Minimum-cycle-slip reception is desirable because maintaining constant track of the phase of the carrier signal from each available GPS satellite is necessary for surveying to centimeter or subcentimeter precision. Even a loss of signal for as short a time as a nanosecond can cause cycle slip. Cycle slips degrade the quality and precision of survey data acquired during a flight. The two principal causes of cycle slip are weakness of signals and multipath propagation. Heretofore, it has been standard practice to mount a GPS antenna rigidly on top of an airplane, and the radiation pattern of the antenna is typically hemispherical, so that all GPS satellites above the horizon are viewed by the antenna during level flight. When the airplane must be banked for a turn or other maneuver, the reception hemisphere becomes correspondingly tilted; hence, the antenna no longer views satellites that may still be above the Earth's horizon but are now below the equatorial plane of the tilted reception hemisphere. Moreover, part of the reception hemisphere (typically, on the inside of a turn) becomes pointed toward ground, with a consequent increase in received noise and, therefore, degradation of GPS measurements. To minimize the likelihood of loss of signal and cycle slip, bank angles of remote-sensing survey airplanes have generally been limited to 10° or less, resulting in skidding or slipping uncoordinated turns. An airplane must be banked in order to make
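The geometry in this record (a satellite above the horizon but below the tilted antenna's equatorial plane) reduces to a dot-product test between the satellite's line of sight and the antenna boresight. The axes, bank convention, and numbers below are illustrative assumptions for a sketch, not the system's actual implementation:

```python
import numpy as np

def unit(az_deg, el_deg):
    """East-North-Up unit vector for a line of sight at the given
    azimuth (clockwise from north) and elevation, in degrees."""
    az, el = np.deg2rad([az_deg, el_deg])
    return np.array([np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)])

def antenna_normal(bank_deg):
    """Boresight of a rigidly top-mounted antenna on an airplane heading
    north and rolled bank_deg to the right about its longitudinal axis."""
    b = np.deg2rad(bank_deg)
    return np.array([np.sin(b), 0.0, np.cos(b)])  # 'up' tips toward east

def visible(sat, bank_deg):
    """True iff the satellite lies inside the (possibly tilted)
    hemispherical reception pattern."""
    return float(sat @ antenna_normal(bank_deg)) > 0.0

sat = unit(az_deg=270.0, el_deg=15.0)  # low satellite due west
print(visible(sat, 0.0))    # True: above the horizon in level flight
print(visible(sat, 30.0))   # False: a right bank tilts the hemisphere away
```

The sketch shows why even a modest bank drops low-elevation satellites on the outside of the turn, motivating the upward-pointing antenna mount the record describes.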
Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.
Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…
This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…
Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim
Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M_age = 53.19 years)…
Ryan, Ashling; Gibbon, Fiona E; O'shea, Aoife
Evidence suggests that children present with receptive language skills that are equivalent to or more advanced than expressive language skills. This profile holds true for typical and delayed language development. This study aimed to determine if such a profile existed for preschool children from an area of social deprivation and to investigate if particular language skills influence any differences found between expressive and receptive skills. Data were drawn from 187 CELF P2 UK assessments conducted on preschool children from two socially disadvantaged areas in a city in southern Ireland. A significant difference was found between Receptive Language Index (RLI) and Expressive Language Index (ELI) scores, with Receptive scores found to be lower than Expressive scores. The majority (78.6%) of participants had a lower Receptive than Expressive Language score (RLI < ELI), with very few (3.2%) having the same Receptive and Expressive scores (RLI = ELI). Scores for the Concepts and Following Directions (receptive) sub-test were significantly lower than for the other receptive sub-tests, while scores for the Expressive Vocabulary sub-test were significantly higher than for the other expressive sub-tests. The finding of more advanced expressive than receptive language skills in socially deprived preschool children is previously unreported and clinically relevant for speech-language pathologists in identifying the needs of this population.
Thawin, Cheamchit; Kanchanalarp, Chanida; Lertsukprasert, Krisna; Cheewaruangroj, Wichit; Khantapasuantara, Kanjalak; Ruencharoen, Suwimol
To assess the categories of auditory performance in prelingual deaf children after implantation. Prospective study. The present study consisted of one boy and four girls aged between 2 and 5 years old at the time of implantation. All subjects had bilateral profound sensorineural hearing loss and received no substantial benefit from amplification. Three subjects were implanted with Med-El Combi 40+ with CIS strategy and two subjects received multichannel monopolar Nucleus 24 cochlear implants with ACE strategy. After implantation, all subjects undertook a program of habilitation at the Speech and Hearing Clinic, Ramathibodi Hospital. The Categories of Auditory Performance (CAP) score was determined at regular intervals prior to implantation, immediately at the initial mapping (0) and 3, 6, 12 and 18 months after the implantation. The results showed that before implantation, only three children showed awareness of environmental sounds, CAP score level 1, and that immediately after mapping, all of the children demonstrated awareness of environmental sounds. Moreover, two of these children showed awareness of speech sounds, CAP score level 2. The CAP scores gradually increased over a 12-month period. At the 12-month assessment interval, four children could discriminate two speech sounds, CAP score level 4, and one child understood phrases without lip reading, CAP score level 5. Eighteen months after implantation, the CAP score for four children increased to level 5. One child understood conversation without lip reading with a familiar talker, CAP score level 6. Furthermore, children with congenital hearing loss who underwent implantation at a younger age received more benefit from the implantation. The CAP score was found to be a useful and sensitive tool to evaluate the outcome of auditory receptive abilities in young congenitally deaf children who underwent cochlear implantation. The accessible outcome measurement will provide information for parents and professionals to
Although neural responses to sound stimuli have been thoroughly investigated in various areas of the auditory cortex, the results of electrophysiological recordings cannot establish a causal link between neural activation and brain function. Electrical microstimulation, which can selectively perturb neural activity in specific parts of the nervous system, is an important tool for exploring the organization and function of brain circuitry. To date, the studies describing the behavioral effects of electrical stimulation have largely been conducted in the primary auditory cortex. In this study, to investigate the potential differences in the effects of electrical stimulation on different cortical areas, we measured the behavioral performance of cats in detecting intra-cortical microstimulation (ICMS) delivered in the primary and secondary auditory fields (A1 and A2, respectively). After being trained to perform a Go/No-Go task cued by sounds, we found that cats could also learn to perform the task cued by ICMS; furthermore, the detection of the ICMS was similarly sensitive in A1 and A2. Presenting wideband noise together with ICMS substantially decreased the performance of cats in detecting ICMS in A1 and A2, consistent with a noise masking effect on the sensation elicited by the ICMS. In contrast, presenting ICMS with pure tones in the spectral receptive field of the electrode-implanted cortical site reduced ICMS detection performance in A1 but not A2. Therefore, activation of A1 and A2 neurons may produce different qualities of sensation. Overall, our study revealed that ICMS-induced neural activity could be easily integrated into an animal's behavioral decision process and has implications for the development of cortical auditory prosthetics.
Reches, Amit; Netser, Shai; Gutfreund, Yoram
Neural adaptation and visual-auditory integration are two well-studied and common phenomena in the brain, yet little is known about the interaction between them. In the present study, we investigated a visual forebrain area in barn owls, the entopallium (E), which has recently been shown to encompass auditory responses as well. Responses of neurons to sequences of visual, auditory, and bimodal (visual and auditory together) events were analyzed. Sequences comprised two stimuli, one with a low probability of occurrence and the other with a high probability. Neurons in the E tended to respond more strongly to low-probability visual stimuli than to high-probability stimuli. Such a phenomenon is known as stimulus-specific adaptation (SSA) and is considered to be a neural correlate of change detection. Responses to the corresponding auditory sequences did not reveal an equivalent tendency. Interestingly, however, SSA to bimodal events was stronger than to visual events alone. This enhancement was apparent when the visual and auditory stimuli were presented from matching locations in space (congruent) but not when the bimodal stimuli were spatially incongruent. These findings suggest that the ongoing task of detecting unexpected events can benefit from the integration of visual and auditory information.
Keilmann, A; Läßig, A K; Nospes, S
The definition of an auditory processing disorder (APD) is based on impairments of auditory functions. APDs are disturbances in processes central to hearing that cannot be explained by comorbidities such as attention deficit or language comprehension disorders. Symptoms include difficulties in differentiation and identification of changes in time, structure, frequency and intensity of sounds; problems with sound localization and lateralization, as well as poor speech comprehension in adverse listening environments and dichotic situations. According to the German definition of APD (as opposed to central auditory processing disorder, CAPD), peripheral hearing loss or cognitive impairment also exclude APD. The diagnostic methodology comprises auditory function tests and the required diagnosis of exclusion. APD is diagnosed if a patient's performance is two standard deviations below the normal mean in at least two areas of auditory processing. The treatment approach for an APD depends on the patient's particular deficits. Training, compensatory strategies and improvement of the listening conditions can all be effective.
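The diagnostic rule described above (performance at least two standard deviations below the normal mean in at least two areas of auditory processing) can be sketched as a small helper. The test names and z-scores below are illustrative inputs, not values from the article:

```python
def meets_apd_criterion(z_scores, cutoff=-2.0, min_areas=2):
    """APD criterion sketch: performance at least two standard
    deviations below the normative mean (z <= -2) in at least
    two areas of auditory processing."""
    return sum(z <= cutoff for z in z_scores.values()) >= min_areas

# illustrative z-scores for three auditory processing areas
scores = {"dichotic listening": -2.4,
          "temporal patterning": -2.1,
          "speech in noise": -0.8}
meets_apd_criterion(scores)  # two areas at or below -2 SD
```

Under this rule, a single severely abnormal score is not sufficient for a diagnosis, which matches the exclusionary spirit of the definition.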
Maier, Joost X; Ghazanfar, Asif A
Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was attributable neither to the absolute intensity of the sounds nor to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.
provides empirical evidence that there are advantages to using binaural displays that spatially separate messages between the ears. These types of...maintain the two-source limitation. The angle of separation in the older binaural displays may not be optimal for maximizing word intelligibility, but...membrane, which causes numerous fibers protruding from auditory hair cells to bend. This process activates electrical action potentials in auditory neurons
Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas
The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single-shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain, with the best performer still not able to achieve open-set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open-set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception, along with several feasible solutions for improving AMI implementation, are presented. PMID:19762428
Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D
The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
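The pattern-classification step can be illustrated with a minimal stand-in: leave-one-out nearest-centroid classification of trial-by-voxel activity patterns. The data dimensions, classifier, and effect size below are assumptions for illustration, not the study's actual analysis:

```python
import numpy as np

def nearest_centroid_loocv(patterns, labels):
    """Leave-one-out nearest-centroid classification accuracy over
    trial-by-voxel activity patterns."""
    patterns, labels = np.asarray(patterns), np.asarray(labels)
    correct = 0
    for i in range(len(labels)):
        # hold out trial i, compute class centroids from the rest
        train = np.ones(len(labels), dtype=bool)
        train[i] = False
        centroids = {c: patterns[train & (labels == c)].mean(axis=0)
                     for c in np.unique(labels)}
        # assign the held-out trial to the nearest centroid
        pred = min(centroids,
                   key=lambda c: np.linalg.norm(patterns[i] - centroids[c]))
        correct += int(pred == labels[i])
    return correct / len(labels)

# synthetic "trials": 20 patterns per maintained tone over 50 voxels,
# with tone-specific mean activity
rng = np.random.default_rng(0)
tone_a = rng.normal(0.5, 1.0, (20, 50))
tone_b = rng.normal(-0.5, 1.0, (20, 50))
X = np.vstack([tone_a, tone_b])
y = np.array([0] * 20 + [1] * 20)
acc = nearest_centroid_loocv(X, y)  # well above the 0.5 chance level
```

Above-chance accuracy of this kind is what licenses the claim that activity patterns are specific to the maintained tone.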
Hoover, Eric C; Souza, Pamela E; Gallun, Frederick J
Auditory complaints following mild traumatic brain injury (MTBI) are common, but few studies have addressed the role of auditory temporal processing in speech recognition complaints. In this study, deficits in understanding speech in a background of speech noise following MTBI were evaluated with the goal of comparing the relative contributions of auditory and nonauditory factors. A matched-groups design was used in which a group of listeners with a history of MTBI was compared to a group matched in age and pure-tone thresholds, as well as a control group of young listeners with normal hearing (YNH). Of the 33 listeners who participated in the study, 13 were included in the MTBI group (mean age = 46.7 yr), 11 in the Matched group (mean age = 49 yr), and 9 in the YNH group (mean age = 20.8 yr). Speech-in-noise deficits were evaluated using subjective measures as well as monaural word (Words-in-Noise test) and sentence (Quick Speech-in-Noise test) tasks, and a binaural spatial release task. Performance on these measures was compared to psychophysical tasks evaluating monaural and binaural temporal fine-structure processing and spectral resolution. Cognitive measures of attention, processing speed, and working memory were evaluated as possible causes of differences between the MTBI and Matched groups that might contribute to speech-in-noise perception deficits. A high proportion of listeners in the MTBI group reported difficulty understanding speech in noise (84%) compared to the Matched group (9.1%), and listeners who reported difficulty were more likely to have abnormal results on objective measures of speech in noise. No significant group differences were found between the MTBI and Matched listeners on any of the measures reported, but the number of abnormal tests differed across groups. Regression analysis revealed that a combination of auditory and auditory processing factors contributed to monaural speech-in-noise scores, but the benefit of spatial separation was
Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa
A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually developed hearing loss, first in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem responses and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training in lip reading, she regained the ability to communicate fluently. The clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.
Hasegawa, Naoya; Takeda, Kenta; Sakuma, Moe; Mani, Hiroki; Maejima, Hiroshi; Asaka, Tadayoshi
Augmented sensory biofeedback (BF) for postural control is widely used to improve postural stability. However, the effective sensory information in BF systems of motor learning for postural control is still unknown. The purpose of this study was to investigate the learning effects of visual versus auditory BF training in dynamic postural control. Eighteen healthy young adults were randomly divided into two groups (visual BF and auditory BF). In test sessions, participants were asked to bring the real-time center of pressure (COP) in line with a hidden target by body sway in the sagittal plane. The target moved in seven cycles of a sine curve at 0.23 Hz in the vertical direction on a monitor. In training sessions, the visual and auditory BF groups were required to change the magnitude of a visual circle and a sound, respectively, according to the distance between the COP and the target in order to reach the target. The perceptual magnitudes of visual and auditory BF were equalized according to Stevens' power law. At the retention test, the auditory but not the visual BF group demonstrated decreased postural performance errors in both the spatial and temporal parameters under the no-feedback condition. These findings suggest that visual BF increases the dependence on visual information to control postural performance, while auditory BF may enhance the integration of the proprioceptive sensory system, which contributes to motor learning without BF. These results suggest that auditory BF training improves motor learning of dynamic postural control. Copyright © 2017 Elsevier B.V. All rights reserved.
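Equalizing perceived magnitude across modalities via Stevens' power law (perceived magnitude ψ = k·φ^a) can be sketched as follows. The exponents and constants here are illustrative placeholders, not the values used in the study:

```python
def stevens_magnitude(phi, a, k=1.0):
    """Stevens' power law: perceived magnitude psi = k * phi**a."""
    return k * phi ** a

def matching_intensity(phi_ref, a_ref, a_target, k_ref=1.0, k_target=1.0):
    """Physical intensity in the target modality whose perceived
    magnitude matches that of phi_ref in the reference modality:
    solve k_t * phi**a_t = k_r * phi_ref**a_r for phi."""
    psi = stevens_magnitude(phi_ref, a_ref, k_ref)
    return (psi / k_target) ** (1.0 / a_target)

# e.g. find the visual stimulus change matching a doubling of a
# reference stimulus governed by an assumed loudness-like exponent
phi_visual = matching_intensity(phi_ref=2.0, a_ref=0.67, a_target=1.0)
```

The point of the equalization is that a given COP error should feel equally large whether it is rendered as a circle-size change or a sound-magnitude change.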
Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory
The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.
David L Woods
BACKGROUND: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. Unlike the gradual adaptation seen for sensory responses
Koohi, Nehzat; Vickers, Deborah; Chandrashekar, Hoskote; Tsang, Benjamin; Werring, David; Bamiou, Doris-Eva
Auditory disability due to impaired auditory processing (AP) despite normal pure-tone thresholds is common after stroke, and it leads to isolation, reduced quality of life and physical decline. There are currently no proven remedial interventions for AP deficits in stroke patients. This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. Fifty stroke patients had baseline audiological assessments and AP tests, and completed the (modified) Amsterdam Inventory for Auditory Disability and the Hearing Handicap Inventory for the Elderly questionnaires. Nine of these 50 patients were diagnosed with disordered AP based on severe deficits in understanding speech in background noise but with normal pure-tone thresholds. These nine patients underwent spatial speech-in-noise testing in a sound-attenuating chamber (the "crescent of sound") with and without FM systems. The signal-to-noise ratio (SNR) for 50% correct speech recognition performance was measured with speech presented from 0° azimuth and competing babble from ±90° azimuth. Spatial release from masking (SRM) was defined as the difference between SNRs measured with co-located speech and babble and SNRs measured with spatially separated speech and babble. The SRM was significantly larger when the patients wore the FM systems than without them. Personal FM systems may substantially improve speech-in-noise deficits in stroke patients who are not eligible for conventional hearing aids. FMs are feasible in stroke patients and show promise to address impaired AP after stroke. Implications for Rehabilitation This is the first study to investigate the benefits of personal frequency-modulated (FM) systems in stroke patients with disordered AP. All cases significantly improved speech perception in noise with the FM systems, when noise was spatially separated from the
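The SRM computation defined above (the difference between the 50%-correct SNR thresholds in the co-located and spatially separated conditions) can be sketched as follows. The psychometric data are fabricated for illustration, not taken from the study:

```python
import numpy as np

def srt_50(snr_db, pct_correct):
    """Interpolate the speech reception threshold: the SNR giving
    50% correct recognition (pct_correct must increase with SNR)."""
    return float(np.interp(50.0, pct_correct, snr_db))

def spatial_release_from_masking(snr_db, pct_colocated, pct_separated):
    """SRM = SRT(co-located babble) - SRT(babble at +/-90 deg);
    positive values mean spatial separation helped."""
    return srt_50(snr_db, pct_colocated) - srt_50(snr_db, pct_separated)

# illustrative percent-correct scores at each tested SNR
snrs = [-12, -9, -6, -3, 0, 3]
colocated = [5, 15, 35, 55, 80, 95]
separated = [20, 45, 65, 85, 95, 99]
srm = spatial_release_from_masking(snrs, colocated, separated)
```

A larger SRM with the FM systems in place is the key outcome reported: the devices let listeners benefit more from the spatial separation of speech and babble.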
Zhong, Xuan; Yost, William A
Maintaining balance is known to be a multisensory process that uses information from different sensory organs. Although it has been known for a long time that spatial hearing cues provide humans with moderately accurate abilities to localize sound sources, how the auditory system interacts with balance mediated by the vestibular system remains largely a mystery. The primary goal of the current study was to determine whether auditory spatial cues obtained from a fixed sound source can help human participants balance themselves as compared to conditions in which participants use vision. The experiment uses modified versions of conventional clinical tests: the Tandem Romberg test and the Fukuda Stepping test. In the Tandem Romberg test, participants stand with their feet in a heel-to-toe position and try to maintain balance for 40 sec. In the Fukuda Stepping test, a participant is asked to close his or her eyes and to march in place for 100 steps. The sway and angular deviation of each participant were measured with and without vision and spatial auditory cues. An auditory spatial reference was provided by presenting a broadband noise source from a loudspeaker directly in front of the participant, located 1-2 m away. A total of 19 participants (11 women and 8 men; mean age = 27 yr; age range = 18-52 yr) voluntarily participated in the experiment. All participants had normal vision, hearing, and vestibular function. The primary intervention was the use of a broadband noise source to provide an auditory spatial referent for balance measurements in the Tandem Romberg test and Fukuda Stepping test. Conditions were also tested in which the participants had their eyes opened or closed. A head tracker recorded the position of the participant's head for the Tandem Romberg test. The angular deviation of the feet after 100 steps was measured in the Fukuda Stepping test. An average distance or angle moved by the head or feet was calculated relative to the head or feet resting
Etchemendy, Pablo E; Spiousas, Ignacio; Calcagno, Esteban R; Abregú, Ezequiel; Eguia, Manuel C; Vergara, Ramiro O
In this study we evaluated whether a method of direct location is an appropriate response method for measuring auditory distance perception of far-field sound sources. We designed an experimental set-up that allows participants to indicate the distance at which they perceive the sound source by moving a visual marker. We termed this method Cross-Modal Direct Location (CMDL), since the response procedure involves the visual modality while the stimulus is presented through the auditory modality. Three experiments were conducted with sound sources located from 1 to 6 m. The first one compared the perceived distances obtained using either the CMDL device or verbal report (VR), which is the response method most frequently used for reporting auditory distance in the far field, and found differences in response compression and bias. In Experiment 2, participants reported visual distance estimates to the visual marker that were found to be highly accurate. We then asked the same group of participants to report VR estimates of auditory distance and found that the spatial visual information obtained from the previous task did not influence their reports. Finally, Experiment 3 compared the same response methods as Experiment 1 but with the methods interleaved, showing a weak but complex mutual influence. However, the estimates obtained with each method remained statistically different. Our results show that the auditory distance psychophysical functions obtained with the CMDL method are less susceptible to the previously reported underestimation for distances over 2 m.
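Response compression of the kind compared across methods here is commonly summarized by fitting a power function, perceived = k·d^a, in log-log coordinates, where an exponent a < 1 indicates compression. A minimal sketch with fabricated, illustrative responses:

```python
import numpy as np

def fit_power_function(actual_m, perceived_m):
    """Fit perceived = k * actual**a by linear regression in
    log-log coordinates; returns (k, a). a < 1 means the
    psychophysical function is compressive."""
    a, log_k = np.polyfit(np.log(actual_m), np.log(perceived_m), 1)
    return float(np.exp(log_k)), float(a)

# synthetic compressive distance estimates for sources at 1-6 m
actual = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
perceived = 1.2 * actual ** 0.55   # assumed k = 1.2, a = 0.55
k, a = fit_power_function(actual, perceived)
```

Comparing the fitted (k, a) pairs across response methods is one standard way to quantify the compression and bias differences the experiments report.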
This article describes three management approaches that can be used with children with auditory processing difficulties and learning disabilities. These approaches were selected because they can be applied in a variety of settings by a variety of professionals, as well as interested parents. The vocabulary building procedure is one that potentially can increase the ability to learn new words but also can provide training on contextual derivation of information, which is key to auditory closure processes. This procedure also helps increase language base, which can also enhance closure abilities. Auditory memory enhancement is a simple technique that involves many complex brain processes. This procedure reduces detailed information to a more gestalt representation and also integrates the motor and spatial processes of the brain. This, in turn, more fully uses working memory and helps in formulization and recall of important concepts of the sensory input. Finally, several informal auditory training techniques are discussed that can be readily employed in the school or home setting. These auditory training techniques are those that are most relevant to the kinds of deficits most often observed in our clinic.
Hopkins, William D; Keebaugh, Alaine C; Reamer, Lisa A
Despite their genetic similarity to humans, our understanding of the role of genes on cognitive traits in chimpanzees remains virtually unexplored. Here, we examined the relationship between genetic variation in the arginine vasopressin V1a receptor gene (AVPR1A) and social cognition in chimpanzees. The collective findings show that AVPR1A polymorphisms are associated with individual differences in performance on a receptive joint attention task in chimpanzees.
Blunck, Henrik; Kjærgaard, Mikkel Baun; Toftegaard, Thomas Skjødeberg
Positioning using GPS receivers is a primary sensing modality in many areas of pervasive computing. However, previous work has not considered how people's body impacts the availability and accuracy of GPS positioning, nor means to sense such impacts. We present results showing that GPS performance degradation on modern smart phones for different hand grip styles and body placements can cause signal strength drops as high as 10-16 dB and double the positioning error. Furthermore, existing phone applications designed to help users identify sources of GPS performance impairment are restricted to showing raw signal statistics. To help both users and application systems in understanding and mitigating body- and environment-induced effects, we propose a method for sensing the current sources of GPS reception impairment in terms of body, urban and indoor conditions.
Calvino's works arrived in China surprisingly early. The first translated texts date back to the Fifties, when the writer, then in his early thirties, was at the beginning of his career. The Chinese versions were not direct translations of Calvino's original texts but indirect translations. This surprisingly early interest was followed by a long period of silence from the second half of the Sixties until the end of the Seventies. The article analyzes the first translations of Calvino's works published in China and outlines the reception of Calvino's works in modern China over the last several decades. The contextual analysis aims to show the relationship between the translations and the historical and ideological context in which they were conceived.
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, as a kind of memory, can be influenced by the strengthening processes that follow long-term chess playing, like other behavioral skills, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was administered to 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean scores on the dichotic auditory-verbal memory test for the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ear scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was also significant. Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to the strengthening of cognitive performance due to playing chess for a long time.
A driving simulator was used to compare the effectiveness of increasing intensity (looming) auditory warning signals with other types of auditory warnings. Auditory warnings have been shown to speed driver reaction time in rear-end collision situations; however, it is not clear which type of signal is the most effective. Although verbal and symbolic (e.g., a car horn) warnings have faster response times than abstract warnings, they often lead to more response errors. Participants (N=20) experienced four nonlooming auditory warnings (constant intensity, pulsed, ramped, and car horn), three looming auditory warnings ("veridical," "early," and "late"), and a no-warning condition. In 80% of the trials, warnings were activated when a critical response was required, and in 20% of the trials, the warnings were false alarms. For the early (late) looming warnings, the rate of change of intensity signaled a time to collision (TTC) that was shorter (longer) than the actual TTC. Veridical looming and car horn warnings had significantly faster brake reaction times (BRT) compared with the other nonlooming warnings (by 80 to 160 ms). However, the number of braking responses in false alarm conditions was significantly greater for the car horn. BRT increased significantly and systematically as the TTC signaled by the looming warning was changed from early to veridical to late. Looming auditory warnings produce the best combination of response speed and accuracy. The results indicate that looming auditory warnings can be used to effectively warn a driver about an impending collision.
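Why looming profiles carry timing information: for a point source approaching at constant speed, intensity grows as 1/r², so the time remaining to collision can be recovered from the envelope as TTC = 2·I/(dI/dt). The sketch below, with illustrative values rather than the study's stimuli, generates a veridical looming envelope and recovers the TTC; an "early" warning would ramp intensity faster than this profile, signaling a shorter TTC than the true one:

```python
import numpy as np

def estimated_ttc(intensity, dt):
    """Time-to-collision implied by a rising intensity envelope.

    Assumes a point source approaching at constant speed, so that
    I(t) ~ 1/r(t)**2 and hence TTC = 2 * I / (dI/dt)."""
    return 2.0 * intensity / np.gradient(intensity, dt)

# synthetic looming sound: source closing at 10 m/s from 40 m
# (true time-to-collision at t = 0 is 4 s)
v, T, dt = 10.0, 4.0, 0.01
t = np.arange(0.0, 3.0, dt)          # stop before the collision at T = 4 s
intensity = 1.0 / (v * (T - t)) ** 2
ttc = estimated_ttc(intensity, dt)   # ttc[i] estimates the time remaining at t[i]
```

The same relation explains how the experimenters could manipulate the signaled TTC independently of the actual TTC simply by scaling the rate of intensity change.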
Hou, Yanlian; Xiao, Xiaoyan; Ren, Jianmin; Wang, Yajuan; Zhao, Faming
More attention has recently been focused on auditory impairment in young type 1 diabetics. This study aimed to evaluate the auditory function of young type 1 diabetics and the correlation between clinical indexes and hearing impairment. We evaluated the auditory function of 50 type 1 diabetics and 50 healthy subjects. Clinical indexes were measured and their relation to auditory function analyzed. Type 1 diabetic patients demonstrated a deficit, with elevated thresholds in the right and left ears compared to healthy controls. Latencies in the right ear (wave V and interwave I-V) and left ear (waves III, V and interwaves I-III, I-V) in the diabetic group were significantly increased compared to those in control subjects (p < 0.01). Type 1 diabetics exhibited higher auditory thresholds, slower auditory conduction times and cochlear impairment. HDL-cholesterol, diabetes duration, systemic blood pressure, microalbuminuria, GHbA1C, triglyceride, and age may affect the auditory function of type 1 diabetics. Copyright © 2015 IMSS. Published by Elsevier Inc. All rights reserved.
Gutschalk, Alexander; Dykstra, Andrew R
Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.
Faraj, Avan Kamal Aziz
Vocabulary acquisition has been a main concern of EFL teachers and learners. A great deal of research has examined students' levels of receptive and productive vocabulary, but no research has been conducted on how to turn receptive vocabulary into productive vocabulary. This study reports the impact of the teaching…
Full Text Available The new Regulation regarding the reception of construction works and corresponding installations, approved by Government's Decision no. 347/2017 ("Regulation 2017"), has general applicability for all construction works for which there is an obligation to obtain a building permit. Regulation 2017 brings significant changes and clarifications expected by the real estate sector regarding: (i) the composition of the commissions involved in the reception procedure, (ii) the role of the site supervisor, who thus gains significant participation in the reception procedure, and (iii) the participation of the public authorities' representatives at the reception, having a veto right on the decision of the reception commission upon the completion of the construction works. Another element of novelty brought by Regulation 2017 is the possibility of carrying out the reception upon the completion of the construction works, and the final reception, for parts/objectives/sectors of or from the building, if they are distinct and independent from a physical and functional point of view. Thus, the new regulation facilitates the procedure of authorizing investment objectives and the costs of the process. The partial reception is another innovation brought by Regulation 2017 in support of the investor, who can thus take over a part of the construction at a certain stage and obtain its registration in the Land Book.
Sanzol, Javier; Rallo, Pilar; Herrero, María
While stigma anatomy is well documented for a good number of species, little information is available on the acquisition and cessation of stigmatic receptivity. The aim of this work is to characterize the development of stigma receptivity, from anthesis to stigma degeneration, in the pentacarpellar pear (Pyrus communis) flower. Stigma development and stigmatic receptivity were monitored over two consecutive years, as the capacity of the stigmas to offer support for pollen germination and pollen tube growth. In an experiment where hand pollinations were delayed for specified times after anthesis, three different stigmatic developmental stages could be observed: (1) immature stigmas, which allow pollen adhesion but not hydration; (2) receptive stigmas, which allow proper pollen hydration and germination; and (3) degenerated stigmas, in which pollen hydrates and germinates properly, but pollen tube growth is impaired soon after germination. This developmental characterization showed that stigmas in different developmental stages coexist within a flower and that the acquisition and cessation of stigmatic receptivity by each carpel occur in a sequential manner. In this way, while the duration of stigmatic receptivity for each carpel is rather short, the flower has an expanded receptive period. This asynchronous period of receptivity for the different stigmas of a single flower is discussed as a strategy that could serve to maximize pollination resources under unreliable pollination conditions.
Shtarkov, Y. M.
Some possible criteria for estimating the effectiveness of error-correcting codes are presented, and the energy effectiveness of such codes is studied for symbol-by-symbol reception. Expressions for the energy effectiveness of binary correcting codes under reception of the codeword as a whole are derived. Asymptotic energy effectiveness and the case of finite signal-to-noise ratio are considered.
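The "energy effectiveness" of a correcting code is commonly summarized as its asymptotic coding gain. The sketch below uses the standard textbook expressions (gain R·d_min for whole-codeword, soft-decision reception; R·(t+1) for symbol-by-symbol hard decisions); these are general formulas, not the specific expressions derived in the paper.

```python
import math

def coding_gain_db(n, k, d_min, soft=True):
    """Asymptotic coding gain of an (n, k) binary block code with
    minimum distance d_min, in dB. `soft=True` models reception of the
    codeword as a whole (soft decisions); `soft=False` models
    symbol-by-symbol hard-decision reception."""
    rate = k / n
    if soft:
        gain = rate * d_min                  # soft-decision bound
    else:
        t = (d_min - 1) // 2                 # correctable symbol errors
        gain = rate * (t + 1)                # hard-decision bound
    return 10.0 * math.log10(gain)

# Hamming (7, 4) code, d_min = 3:
soft_gain = coding_gain_db(7, 4, 3, soft=True)
hard_gain = coding_gain_db(7, 4, 3, soft=False)
```

For the Hamming (7, 4) example this gives roughly 2.3 dB versus 0.6 dB, showing why whole-codeword reception is energetically more effective than symbol-by-symbol reception.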
Wunderlich, Kara L.; Vollmer, Timothy R.
The current study compared the use of serial and concurrent methods to train multiple exemplars when teaching receptive language skills, providing a systematic replication of Wunderlich, Vollmer, Donaldson, and Phillips (2014). Five preschoolers diagnosed with developmental delays or autism spectrum disorders were taught to receptively identify…
Reijneveld, S.A.; Boer, J.B.de; Bean, T.; Korfker, D.G.
We assessed the effects of a stringent reception policy on the mental health of unaccompanied adolescent asylum seekers by comparing the mental health of adolescents in a restricted campus reception setting and in a setting offering more autonomy (numbers [response rates]: 69 [93%] and 53 [69%],
A brief examination of a problem in the earlier reception of Aristotle's Analytica Posteriora in connection with the first Latin translations.
Leahy, Matthew M.; Jouriles, Ernest N.; Walters, Scott T.
This project examined the reliability and validity of a newly developed measure of college students' receptiveness to alcohol related information and advice. Participants were 116 college students who reported having consumed alcohol at some point in their lifetime. Participants completed a measure of receptiveness to alcohol-related…
Kover, Sara T.; McDuffie, Andrea S.; Hagerman, Randi J.; Abbeduto, Leonard
In light of evidence that receptive language may be a relative weakness for individuals with autism spectrum disorder (ASD), this study characterized receptive vocabulary profiles in boys with ASD using cross-sectional developmental trajectories relative to age, nonverbal cognition, and expressive vocabulary. Participants were 49 boys with ASD…
Mohamed, Shafizan; Wok, Saodah; Lahabou, Mahaman
In 2011, a study was conducted to look at students' reception of IIUM.FM, a newly launched online campus radio. Using the Technological Acceptance Model (TAM), the study found that factors such as perceived ease of use, perceived usefulness, and attitude highly influenced audience reception of the online radio. In 2016, a corresponding study,…
Harlaar, Nicole; Meaburn, Emma L.; Hayiou-Thomas, Marianna E.; Davis, Oliver S. P.; Docherty, Sophia; Hanscombe, Ken B.; Haworth, Claire M. A.; Price, Thomas S.; Trzaskowski, Maciej; Dale, Philip S.; Plomin, Robert
Purpose: Researchers have previously shown that individual differences in measures of receptive language ability at age 12 are highly heritable. In the current study, the authors attempted to identify some of the genes responsible for the heritability of receptive language ability using a "genome-wide association" approach. Method: The…
Lora, Jorge; Herrero, Maria; Hormaza, Jose I
A variety of mechanisms to prevent inbreeding have arisen in different angiosperm taxa during plant evolution. In early-divergent angiosperms, a widespread system is dichogamy, in which female and male structures do not mature simultaneously, thus encouraging cross pollination. While this system is common in early-divergent angiosperms, it is less widespread in more recently evolved clades. An evaluation of the consequences of this system on outbreeding may provide clues on this change, but this subject has been little explored. In this work, we characterized the cycle and anatomy of the flower and studied the influence of temperature and humidity on stigmatic receptivity in Annona cherimola, a member of an early-divergent angiosperm clade with protogynous dichogamy. Paternity analysis reveals a high proportion of seeds resulting from self-fertilization, indicating that self-pollination can occur in spite of the dichogamous system. Stigmatic receptivity is environmentally modulated--shortened by high temperatures and prolonged by high humidity. Although spatial and temporal sexual separation in this system seems to effectively decrease selfing, the system is modulated by environmental conditions and may allow high levels of selfing that can guarantee reproductive assurance.
Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA-mediated recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
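The core competition mechanism in such models (mutual inhibition plus slow adaptation) can be sketched as a minimal two-unit rate model. All parameters below are illustrative; the noise, NMDA recurrent excitation, and feature-dependent inputs of the actual model are omitted.

```python
# Minimal two-unit competition model: mutual inhibition + slow adaptation.
# u1/u2 are firing rates of units coding two rival percepts; a1/a2 are
# slow adaptation variables that gradually fatigue the dominant unit.
def simulate(steps=20000, dt=0.01, tau_u=1.0, tau_a=50.0,
             drive=1.0, beta=2.0, phi=2.0):
    relu = lambda x: max(x, 0.0)
    u1, u2, a1, a2 = 0.5, 0.0, 0.0, 0.0   # unit 1 starts dominant
    trace = []
    for _ in range(steps):
        du1 = (-u1 + relu(drive - beta * u2 - phi * a1)) / tau_u
        du2 = (-u2 + relu(drive - beta * u1 - phi * a2)) / tau_u
        da1 = (u1 - a1) / tau_a
        da2 = (u2 - a2) / tau_a
        u1 += dt * du1; u2 += dt * du2
        a1 += dt * da1; a2 += dt * da2
        trace.append((u1, u2))
    return trace

trace = simulate()
# Adaptation builds up in the dominant unit until its effective drive
# falls below the competitor's, producing alternating dominance.
```

With slow adaptation (tau_a much larger than tau_u), the initially dominant unit fatigues and the suppressed unit eventually takes over, mimicking the perceptual alternations the paper studies.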
Bao, Shimin; Sweatt, Kristin T; Lechago, Sarah A; Antal, Sarah
Many Early Intensive Behavioral Intervention (EIBI) curricula recommend teaching receptive responding before targeting expressive responding (Leaf & McEachin, 1999; Lovaas, 2003). However, a small literature base suggests that teaching expressive responses first may be more efficient when teaching children with ASD and other developmental disabilities (Petursdottir & Carr, 2011). The present study employed an alternating treatments design to compare the effects of three instructional sequences to teach feature, function, and class to three children diagnosed with ASD: (a) receptive-expressive, (b) expressive-receptive, and (c) mixed. The results suggested that expressive-receptive was the most efficient training sequence for all three participants. Additionally, greater emergent responding was observed with the expressive-receptive training sequence. © 2017 Society for the Experimental Analysis of Behavior.
Choudhari, Meelan; Streett, Craig L.
The process by which the boundary layer internalizes the environmental disturbances in the form of instability waves is known as the boundary-layer receptivity. The paper discusses the importance of receptivity in transition research. The receptivity scenario for three-dimensional and high-speed boundary layers is examined. It is found that, while receptivity mechanisms present in the low-speed case are also operative in these complex flows, certain uniquely 'compressible' receptivity mechanisms may come into play as well. Both numerical, and where convenient, asymptotic procedures are utilized to develop quantitative predictions of the localized generation of a variety of instability types (Tollmien-Schlichting, inflectional, higher modes, crossflow vortices) in boundary layer flows relevant to the National Aero-Space Plane (NASP).
Loxley, P N
The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
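A two-dimensional Gabor receptive-field profile of the kind discussed can be generated directly from its standard parameterization (size σ, spatial frequency 1/λ, orientation θ, aspect ratio γ, phase ψ). This is a minimal sketch of the standard function only, not of the paper's probabilistic generative model or learning rules; the default parameter values are arbitrary.

```python
import math

def gabor(x, y, sigma=2.0, wavelength=4.0, theta=0.0,
          aspect=1.0, phase=0.0):
    """Two-dimensional Gabor function: a Gaussian envelope (size sigma,
    aspect ratio `aspect`) multiplying a sinusoidal carrier with
    spatial frequency 1/wavelength, orientation theta and phase."""
    xr = x * math.cos(theta) + y * math.sin(theta)    # rotate coordinates
    yr = -x * math.sin(theta) + y * math.cos(theta)
    envelope = math.exp(-(xr**2 + (aspect * yr)**2) / (2 * sigma**2))
    carrier = math.cos(2 * math.pi * xr / wavelength + phase)
    return envelope * carrier

# Sample a 17x17 receptive-field patch centred on the origin.
patch = [[gabor(x, y) for x in range(-8, 9)] for y in range(-8, 9)]
```

Varying `sigma` and `wavelength` together, as the learned size-dependent spatial frequencies suggest, produces the multiscale bases described; `aspect` trades orientation resolution against spatial-frequency resolution.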
Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, reflected the effects of aging on auditory-verbal memory performance.
Higgins, Nathan C; McLaughlin, Susan A; Da Costa, Sandra; Stecker, G Christopher
Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, where better classification was observed in posterior compared to primary AC. That is, restricting the ILD to sound onset-which alters the physical but not the perceptual nature of the spatial cue-did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
Ulbrich, Susanne E; Groebner, Anna E; Bauersachs, Stefan
The development of a fertilized oocyte into a differentiated multi-cellular organism is a major challenge with regard to the orchestration of the expression of the mammalian genome. Highly complex networks of genes are temporally and spatially regulated during cellular differentiation to generate specific cell types. Embryonic development is critically influenced by external impacts in the female reproductive tract. A most critical phase of pregnancy in mammals is the pre- and peri-implantation period, during which the uterine environment plays a crucial role in supporting the development of the conceptus. The analytical description of the transcriptome, proteome and metabolome of the embryo-maternal interface is a prerequisite for the understanding of the complex regulatory processes taking place during this time. This review lines out potentials and limitations of different approaches to unravel the determinants of endometrial receptivity in cattle, the pig and the horse. Suitable in vivo and in vitro models, which have been used to elucidate factors participating in the embryo-maternal dialog are discussed. Taken together, transcriptome analyses and specified selective candidate gene driven approaches contribute to the understanding of endometrial function. The endometrium as sensor and driver of fertility may indicate the qualitative and quantitative nature of signaling molecules sent by the early embryo and in turn, accordingly impact on embryonic development. Copyright © 2012 Elsevier Inc. All rights reserved.
Moore, David R.; Halliday, Lorna F.; Amitay, Sygal
This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers have…
Ghuntla Tejas P.; Mehta Hemant B.; Gokhale Pradnya A.; Shah Chinmay J.
Reaction is a purposeful voluntary response to a stimulus, such as a visual or auditory stimulus. Auditory reaction time is the time required to respond to an auditory stimulus. Quickness of response is very important in games like basketball. This study was conducted to compare the auditory reaction times of basketball players and healthy controls. Auditory reaction time was measured with a reaction time instrument in healthy controls and basketball players. Simple reaction time and choice reaction time…
Andersen, Tobias; Tiippana, K.; Laarni, J.
Auditory and visual information is integrated when perceiving speech, as evidenced by the McGurk effect, in which viewing an incongruent talking face categorically alters auditory speech perception. Audiovisual integration in speech perception has long been considered automatic and pre-attentive, but recent reports have challenged this view. Here we study the effect of visual spatial attention on the McGurk effect. By presenting a movie of two faces symmetrically displaced to each side of a central fixation point and dubbed with a single auditory speech track, we were able to discern the influences… integration did not change. Visual spatial attention was also able to select between the faces when lip reading. This suggests that visual spatial attention acts at the level of visual speech perception prior to audiovisual integration and that the effect propagates through audiovisual integration…
Nishimura, Akio; Yokosawa, Kazuhiko
In the present article, we investigated the effects of pitch height and the presented ear (laterality) of an auditory stimulus, irrelevant to the ongoing visual task, on horizontal response selection. Performance was better when the response and the stimulated ear spatially corresponded (Simon effect), and when the spatial-musical association of response codes (SMARC) correspondence was maintained, that is, a right (left) response with a high-pitched (low-pitched) tone. These findings reveal an automatic activation of spatially and musically associated responses by task-irrelevant auditory accessory stimuli. Pitch height is strong enough to influence horizontal responses despite modality differences with the task target.
Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam
To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. Material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2 and P300 waves and a psychoacoustic test of central auditory functions, the frequency pattern test (FPT). Next, children took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; again psychoacoustic tests were performed and P300 cortical potentials were recorded. After that, statistical analyses were performed. Analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.
In his article "The Reception of Mao's 'Talks at the Yan'an Forum on Literature and Art' in English-language Scholarship" Qilin Fu examines the three waves of the reception of Mao Zedong's 1942 text...
Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.
Brian N Pasley
Full Text Available How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
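The linear reconstruction approach described (decoding the stimulus spectrogram from population neural activity) can be sketched with ordinary least squares on synthetic data. Everything here, including the dimensions, the hidden encoding, and the noise level, is an illustrative assumption, not the paper's method or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 'stimulus' (e.g. one spectrogram channel over time) and a
# population response generated by a hidden linear encoding plus noise.
T, n_neurons = 500, 30
stimulus = rng.standard_normal(T)
encoding = rng.standard_normal(n_neurons)
responses = np.outer(stimulus, encoding) \
            + 0.1 * rng.standard_normal((T, n_neurons))

# Fit a linear decoder on the first half, reconstruct the second half.
train, test = slice(0, 250), slice(250, 500)
decoder, *_ = np.linalg.lstsq(responses[train], stimulus[train], rcond=None)
reconstructed = responses[test] @ decoder

# Reconstruction accuracy as the correlation with the true stimulus.
accuracy = np.corrcoef(reconstructed, stimulus[test])[0, 1]
```

When the encoding is truly linear and noise is low, the held-out correlation is near 1; the paper's point is that fast temporal fluctuations defeat this linear model and require a nonlinear modulation-energy representation.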
... to the inner row of hair cells or synapses between the inner hair cells and the auditory ... any other nerve-related problems. Ongoing speech and language testing. A child with ANSD needs regular visits ...
The present research proposes that the presence of auditory feedback increases satisfaction with the shopping experience, confidence in the retailer, and the likelihood to return to the retailer...
Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...
Daalman, K.; Diederen, K. M. J.; Derks, E. M.; van Lutterveld, R.; Kahn, R. S.; Sommer, Iris E. C.
Background. Hallucinations have consistently been associated with traumatic experiences during childhood. This association appears strongest between physical and sexual abuse and auditory verbal hallucinations (AVH). It remains unclear whether traumatic experiences mainly colour the content of AVH
Full Text Available Age-related hearing loss, or presbycusis, is a complex phenomenon consisting of elevation of hearing levels as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potential recorded within the first 10 ms after stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of ABRs in comparison with those of young adults. The waves of the ABR are described in terms of amplitude, latency and the interpeak latencies between different waves. There is a tendency for amplitude to decrease and absolute latencies to increase with advancing age, but these trends are not always clear due to the increase in threshold with advancing age, which acts as a major confounding factor in the interpretation of ABRs.
Vitor E. Valenti
Full Text Available Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation between noise intensity and vagal-sympathetic balance. Additionally, it was reported that music therapy improved heart rate variability in anthracycline-treated breast cancer patients. It was hypothesized that dopamine release in the striatal system induced by pleasurable songs is involved in cardiac autonomic regulation. Musical auditory stimulation influences heart rate variability through a neural mechanism that is not well understood. Further studies are necessary to develop new therapies to treat cardiovascular disorders.
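Heart rate variability in studies like those reviewed is typically quantified with time-domain statistics over successive RR intervals. The sketch below implements two standard measures, SDNN and RMSSD; it is a generic illustration, not tied to any study in the review.

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (population SD), in ms.
    Reflects overall heart rate variability."""
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms.
    Emphasizes beat-to-beat (vagally mediated) variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805]          # RR intervals in milliseconds
overall, beat_to_beat = sdnn(rr), rmssd(rr)
```

RMSSD is the measure most often linked to vagal tone, which is why changes in vagal-sympathetic balance under auditory stimulation show up in it.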
Full Text Available Background and Aim: Omega-3 fatty acid have structural and biological roles in the body 's various systems . Numerous studies have tried to research about it. Auditory system is affected a s well. The aim of this article was to review the researches about the effect of omega-3 on auditory system.Methods: We searched Medline , Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks about this subject between 19 70 and 20 13.Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acid can cause harmful effects on fetal and infant growth and development of brain and central nervous system esspesially auditory system. It is important to determine the adequate dosage of omega-3.
Maturana, Matias I; Apollo, Nicholas V; Garrett, David J; Kameneva, Tatiana; Cloherty, Shaun L; Grayden, David B; Burkitt, Anthony N; Ibbotson, Michael R; Meffin, Hamish
Implantable retinal stimulators activate surviving neurons to restore a sense of vision in people who have lost their photoreceptors through degenerative diseases. Complex spatial and temporal interactions occur in the retina during multi-electrode stimulation. Due to these complexities, most existing implants activate only a few electrodes at a time, limiting the repertoire of available stimulation patterns. Measuring the spatiotemporal interactions between electrodes and retinal cells, and incorporating them into a model may lead to improved stimulation algorithms that exploit the interactions. Here, we present a computational model that accurately predicts both the spatial and temporal nonlinear interactions of multi-electrode stimulation of rat retinal ganglion cells (RGCs). The model was verified using in vitro recordings of ON, OFF, and ON-OFF RGCs in response to subretinal multi-electrode stimulation with biphasic pulses at three stimulation frequencies (10, 20, 30 Hz). The model gives an estimate of each cell's spatiotemporal electrical receptive fields (ERFs); i.e., the pattern of stimulation leading to excitation or suppression in the neuron. All cells had excitatory ERFs and many also had suppressive sub-regions of their ERFs. We show that the nonlinearities in observed responses arise largely from activation of presynaptic interneurons. When synaptic transmission was blocked, the number of sub-regions of the ERF was reduced, usually to a single excitatory ERF. This suggests that direct cell activation can be modeled accurately by a one-dimensional model with linear interactions between electrodes, whereas indirect stimulation due to summated presynaptic responses is nonlinear.
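The finding that direct activation is well described by linear interactions between electrodes, passed through a cell's electrical receptive field (ERF), can be sketched as a standard linear-nonlinear model. The weights, threshold, and electrode layout below are invented for illustration and are not the paper's fitted values.

```python
def predict_response(stim_pattern, erf_weights, threshold=0.5):
    """Linear-nonlinear sketch: project electrode stimulation amplitudes
    onto the cell's electrical receptive field (ERF) weights, then
    rectify about a threshold. Positive weights model excitatory
    ERF sub-regions, negative weights suppressive ones."""
    drive = sum(s * w for s, w in zip(stim_pattern, erf_weights))
    return max(drive - threshold, 0.0)

# Hypothetical 4-electrode ERF: strong excitatory centre with one
# suppressive neighbouring sub-region.
erf = [0.2, 1.0, -0.4, 0.1]

alone = predict_response([0, 1, 0, 0], erf)        # centre electrode only
with_suppressor = predict_response([0, 1, 1, 0], erf)
```

Co-activating the suppressive neighbour reduces the predicted response, illustrating how a measured ERF with suppressive sub-regions constrains multi-electrode stimulation patterns.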
Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona
Background: Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Like other behavioral skills, auditory memory can be influenced by the strengthening processes that follow long-term chess playing, because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert...
Zatorre, Robert J; Halpern, Andrea R
Most people intuitively understand what it means to "hear a tune in your head." Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.
Ozdemir, Süleyman; Kıroğlu, Mete; Tuncer, Ulkü; Sahin, Rasim; Tarkan, Ozgür; Sürmelioğlu, Ozgür
The aim of this study was to analyze the auditory performance development of cochlear implanted patients. The effects of age at implantation, gender, implanted ear and model of the cochlear implant on the patients' auditory performance were investigated. Twenty-eight patients (12 boys, 16 girls) with congenital prelingual hearing loss who underwent cochlear implant surgery at our clinic and had a follow-up of at least 18 months were selected for the study. Listening Progress Profile (LiP), Monosyllable-Trochee-Polysyllable (MTP) and Meaningful Auditory Integration Scale (MAIS) tests were performed to analyze the auditory performances of the patients. To determine the effect of the age at implantation on auditory performance, patients were assigned to two groups: group 1 (implantation age ≤60 months, mean 44.8 months) and group 2 (implantation age >60 months, mean 100.6 months). Group 2 had higher preoperative test scores than group 1, but after cochlear implant use, the auditory performance levels of the patients in group 1 improved faster and equalized to those of the patients in group 2 after 12-18 months. Our data showed that variables such as sex, implanted ear or model of the cochlear implant did not have any statistically significant effect on the auditory performance of the patients after cochlear implantation. We found a negative correlation between implantation age and auditory performance improvement in our study. We observed that children implanted at a young age had quicker language development and more success in reading, writing and other educational skills later on.
Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya
The auditory sensory pathway has been studied in meditators using midlatency and short-latency auditory evoked potentials. The present study evaluated long latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts: (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation, suggesting that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller in random thinking and non-meditative focused thinking at the level of the secondary auditory cortex, auditory association cortex and anterior cingulate cortex. © EEG and Clinical Neuroscience Society (ECNS) 2014.
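The component definitions above (a fixed latency window per component, amplitude measured against the prestimulus baseline) amount to a simple peak-picking measurement. The sketch below is a generic illustration of that measurement, not the authors' analysis pipeline; only the window bounds are taken from the abstract.

```python
import numpy as np

# Latency windows (ms) for each LLAEP component, as defined above.
WINDOWS_MS = {"P1": (40, 60), "N1": (75, 115), "P2": (120, 180), "N2": (180, 280)}

def component_peak(erp, times_ms, component):
    """Return (peak_latency_ms, peak_amplitude) for one LLAEP component.
    Amplitude is taken relative to the mean prestimulus baseline; P
    components are positive peaks, N components negative troughs."""
    baseline = erp[times_ms < 0].mean()          # prestimulus baseline
    lo, hi = WINDOWS_MS[component]
    mask = (times_ms >= lo) & (times_ms <= hi)
    seg = erp[mask] - baseline
    idx = seg.argmax() if component.startswith("P") else seg.argmin()
    return times_ms[mask][idx], seg[idx]

# Toy trace: a single positive deflection peaking at 150 ms (1 kHz sampling).
times = np.arange(-100, 300)
erp = np.exp(-((times - 150) ** 2) / (2 * 15.0 ** 2))
latency, amplitude = component_peak(erp, times, "P2")
```

On this synthetic trace the routine recovers the 150 ms deflection as the P2 peak because it falls inside the 120-180 ms window.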
Ali Akbar Tahaei
Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech-evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech-evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives: To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods: Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results: There were significant group differences for the onset and offset transient peaks: subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions: Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.
Emine Merve Kaya
Bottom-up attention is a sensory-driven selection mechanism that directs perception towards a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models, whereby local or global contrast is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention: providing the first examination of the space of auditory saliency spanning pitch, intensity and timbre, and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes.
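The core hypothesis, tracking sound features against background statistics and flagging deviations as salient, can be illustrated with a minimal running-statistics detector. This is a hedged sketch of the general idea, not the authors' model; the exponential smoothing constant and the z-score read-out are assumptions made for the example.

```python
import numpy as np

def saliency_trace(features, alpha=0.05, eps=1e-6):
    """Flag deviations from running background statistics as salient.

    features : (n_time, n_features) array, e.g. per-frame estimates of
               pitch, intensity and timbre. The background is tracked with
               an exponentially weighted mean and variance per feature; a
               frame's saliency is the norm of its z-scores.
    """
    mean = features[0].astype(float).copy()
    var = np.ones_like(mean)
    sal = np.zeros(len(features))
    for t, x in enumerate(features):
        z = (x - mean) / np.sqrt(var + eps)   # deviation from background
        sal[t] = np.linalg.norm(z)
        # Fold the new observation into the background statistics.
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
    return sal

# Toy scene: a steady background with an abrupt feature change at frame 50.
feats = np.zeros((100, 3))
feats[50:] = 5.0
sal = saliency_trace(feats)
```

On this toy input the saliency trace spikes at the change point and then decays as the new statistics are absorbed into the background, which is the qualitative behavior the model attributes to bottom-up attention.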
Fogtmann, Maiken Hillerup; Krogh, Peter; Markussen, Thomas
spatial interfaces and forms the ground for articulating a critique of spatial interfaces in general, as it is the claim of the paper that spatiality, as understood in architecture, has not been served and taken advantage of in its totality by spatial interaction design so far....
Bezgin, Gleb; Rybacki, Konrad; van Opstal, A John; Bakker, Rembrandt; Shen, Kelly; Vakorin, Vasily A; McIntosh, Anthony R; Kötter, Rolf
Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced by studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way at the cortical level. Recent studies offer rich qualitative evidence for the dual-stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more ubiquitously explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception. Copyright © 2014 Elsevier Inc. All rights reserved.
Eitan, Zohar; Timmers, Renee
Though auditory pitch is customarily mapped in Western cultures onto spatial verticality (high-low), both anthropological reports and cognitive studies suggest that pitch may be mapped onto a wide variety of other domains. We collected a total of 35 pitch mappings and investigated in four experiments how these mappings are used and…
Jerger, Susan; Damian, Markus F; McAlpine, Rachel P; Abdi, Hervé
To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled to non-intact (excised onsets) auditory speech (signified by /-b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/-b/æz) and identified non-intact nonwords (/-b/æz). We predicted that visual speech would cause children to perceive the non-intact onsets as intact, resulting in more same responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than auditory mode. Visual speech for the easy-to-speechread /b/ but not for the difficult-to-speechread /g/ boosted discrimination and identification (about 35-45%) in children from four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and receptive vocabulary skills.
In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, coherence analysis revealed a beta-band (12-20 Hz) DMS-specific enhancement of functional interaction between sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
This study explored receptive vocabulary size and early literacy skills (namely letter naming, knowledge of phoneme-grapheme correspondences and early writing) in emergent bilingual Northern Sotho-English children. Two groups of Grade 1 learners were tested in both English and Northern Sotho. Group 1 (N = 49) received their formal schooling in English, whilst Group 2 (N = 50) received their formal schooling in Northern Sotho. Receptive vocabulary was tested using the Peabody Picture Vocabulary Test. Letter knowledge was assessed by asking learners to name letter cards, whilst knowledge of phoneme-grapheme correspondences was tested by asking children to match letter cards with spoken sounds. Early writing was assessed by asking children to write their names. Statistical analyses indicated that both English and Northern Sotho receptive vocabulary knowledge had a significant effect on early literacy skills, whilst no main effect was found for the language of instruction. Group 1 performed significantly better than Group 2 in English receptive vocabulary, in knowledge of phoneme-grapheme correspondences and in early writing, but no group differences were found for Northern Sotho receptive vocabulary or for letter knowledge. English receptive vocabulary significantly predicted the outcome of all of the early literacy skills, whilst Northern Sotho receptive vocabulary significantly predicted phoneme-grapheme correspondences and early writing.
McCloy, Daniel R; Lau, Bonnie K; Larson, Eric; Pratt, Katherine A I; Lee, Adrian K C
Successful speech communication often requires selective attention to a target stream amidst competing sounds, as well as the ability to switch attention among multiple interlocutors. However, auditory attention switching negatively affects both target detection accuracy and reaction time, suggesting that attention switches carry a cognitive cost. Pupillometry is one method of assessing mental effort or cognitive load. Two experiments were conducted to determine whether the effort associated with attention switches is detectable in the pupillary response. In both experiments, pupil dilation, target detection sensitivity, and reaction time were measured; the task required listeners to either maintain or switch attention between two concurrent speech streams. Secondary manipulations explored whether switch-related effort would increase when auditory streaming was harder. In experiment 1, spatially distinct stimuli were degraded by simulating reverberation (compromising across-time streaming cues), and target-masker talker gender match was also varied. In experiment 2, diotic streams separable by talker voice quality and pitch were degraded by noise vocoding, and the time allotted for mid-trial attention switching was varied. All trial manipulations had some effect on target detection sensitivity and/or reaction time; however, only the attention-switching manipulation affected the pupillary response: greater dilation was observed in trials requiring switching attention between talkers.
Zoe F Mann
A study of genes expressed in the developing inner ear identified the bHLH transcription factor Scleraxis (Scx) in the developing cochlea. Previous work has demonstrated an essential role for Scx in the differentiation and development of tendons, ligaments and cells of chondrogenic lineage. Expression in the cochlea has been shown previously; however, the functional role for Scx in the cochlea is unknown. Using a Scx-GFP reporter mouse line, we examined the spatial and temporal patterns of Scx expression in the developing cochlea between embryonic day 13.5 and postnatal day 25. Embryonically, Scx is expressed broadly throughout the cochlear duct and surrounding mesenchyme and at postnatal ages becomes restricted to the inner hair cells and the interdental cells of the spiral limbus. Deletion of Scx results in hearing impairment, indicated by elevated auditory brainstem response (ABR) thresholds and diminished distortion product otoacoustic emission (DPOAE) amplitudes across a range of frequencies. No changes in either gross cochlear morphology or expression of the Scx target genes Col2A, Bmp4 or Sox9 were observed in Scx(-/-) mutants, suggesting that the auditory defects observed in these animals may result from unidentified Scx-dependent processes within the cochlea.
Zhang, Yu-Xuan; Tang, Ding-Lan; Moore, David R.; Amitay, Sygal
Medical rehabilitation involving behavioral training can produce highly successful outcomes, but those successes are obtained at the cost of long periods of often tedious training, reducing compliance. By contrast, arcade-style video games can be entertaining and highly motivating. We examine here the impact of video game play on contiguous perceptual training. We alternated several periods of auditory pure-tone frequency discrimination (FD) with the popular spatial visual-motor game Tetris p...
Yalçinkaya, Fulya; Yilmaz, Suna Tokgöz; Muluk, Nuray Bayar
Transient evoked otoacoustic emissions (TEOAEs) are reflections of cochlear energy produced during the processing of sound. The suppression effect, defined as the decrease in otoacoustic emission amplitude in the presence of an additional tone stimulus, is used for assessing efferent auditory system function. The aim of this study was to investigate the contralateral suppression effect (CSE) of TEOAEs in children with auditory listening problems (ALPs) compared to normal-hearing children. The study group (Group 1) consisted of 12 ALP children (8 males and 4 females), aged 5-10 years, with associated receptive and expressive language delay. The control group (Group 2) consisted of 12 children with normal hearing levels, matched according to gender and age. TEOAEs and CSE of TEOAEs were investigated at 1.0-4.0 kHz in both groups. For the right ear, at 1.0 and 3.0 kHz, TEOAE amplitudes of the ALP group were significantly lower than those of the control group. At 2.0, 4.0 and 5.0 kHz of the right ear and at 1.0-5.0 kHz of the left ear, TEOAE amplitudes did not differ between the ALP and control groups. Suppression values of the ALP group were significantly lower than those of the control group at 1.0-2.0 kHz of the right ear and at 2.0 kHz of the left ear. At the other frequencies, there was no significant difference between the suppression values of the ALP and control groups. Lower suppression values in the ALP group at all frequencies (significant at 1.0-2.0 and 2.0 kHz in the right and left ears, respectively) suggest that cochlear and cranial maturation in the ALP group may be lower than in the control group. Since the age profiles of the two groups were similar, age is unlikely to account for these results. Our results showed that children with ALP have auditory processing difficulties in noisy environments. For understanding the efferent auditory system, patients with auditory processing disorders may be evaluated by the help of
Li, Shu-Yun; Song, Zhuo; Song, Min-Jie; Qin, Jia-Wen; Zhao, Meng-Long; Yang, Zeng-Ming
Polycystic ovary syndrome (PCOS), a complex endocrine disorder, is a leading cause of female infertility. An obvious reason for infertility in PCOS women is anovulation. However, the success rate with high-quality embryos selected by assisted reproduction techniques in PCOS patients still remains low, with a high rate of early clinical pregnancy loss, suggesting a problem in uterine receptivity. Using a dehydroepiandrosterone-induced mouse model of PCOS, we explored some potential causes of decreased fertility in PCOS patients. In our study, anovulation also caused sterility in PCOS mice. When blastocysts from normal mice were transferred into the uterine lumen of pseudopregnant PCOS mice, the rate of embryo implantation was reduced. In PCOS mouse uteri, implantation-related genes were also dysregulated. Additionally, artificial decidualization was severely impaired in PCOS mice. The serum estrogen level was significantly higher in PCOS mice than in vehicle controls. The high level of estrogen and a potentially impaired LIF-STAT3 pathway may lead to embryo implantation failure in PCOS mice. Although many studies have examined the effects of PCOS on the endometrium, our study applied both embryo transfer and artificial decidualization to exclude effects arising from ovulation and the embryos themselves.
Hopkins, William D; Keebaugh, Alaine C; Reamer, Lisa A; Schaeffer, Jennifer; Schapiro, Steven J; Young, Larry J
Despite their genetic similarity to humans, our understanding of the role of genes in cognitive traits in chimpanzees remains virtually unexplored. Here, we examined the relationship between genetic variation in the arginine vasopressin V1a receptor gene (AVPR1A) and social cognition in chimpanzees. Studies have shown that chimpanzees are polymorphic for a deletion, DupB, in the 5' flanking region of AVPR1A that contains the variable RS3 repetitive element, which has been associated with variation in social behavior in humans. Results revealed that performance on the social cognition task was significantly heritable. Furthermore, males with one DupB(+) allele performed significantly better and were more responsive to socio-communicative cues than males homozygous for the DupB(-) deletion. Performance on a non-social cognition task was not associated with AVPR1A genotype. The collective findings show that AVPR1A polymorphisms are associated with individual differences in performance on a receptive joint attention task in chimpanzees.
Sakurai, Akira; Koganezawa, Masayuki; Yasunaga, Kei-ichiro; Emoto, Kazuo; Yamamoto, Daisuke
Female Drosophila with the spinster mutation repel courting males and rarely mate. Here we show that the non-copulating phenotype can be recapitulated by the elimination of spinster functions from either spin-A or spin-D neuronal clusters, in the otherwise wild-type (spinster heterozygous) female brain. Spin-D corresponds to the olfactory projection neurons with dendrites in the antennal lobe VA1v glomerulus that is fruitless-positive, sexually dimorphic and responsive to fly odour. Spin-A is a novel local neuron cluster in the suboesophageal ganglion, which is known to process contact chemical pheromone information and copulation-related signals. A slight reduction in spinster expression to a level with a minimal effect is sufficient to shut off female sexual receptivity if the dominant-negative mechanistic target of rapamycin is simultaneously expressed, although the latter manipulation alone has only a marginal effect. We propose that spin-mediated mechanistic target of rapamycin signal transduction in these neurons is essential for females to accept the courting male.
Background: Following stroke, patients frequently demonstrate loss of motor control and function and altered kinematic parameters of reaching movements. Feedback is an essential component of rehabilitation, and auditory feedback of kinematic parameters may be a useful tool for rehabilitation of reaching movements at the impairment level. The aim of this study was to investigate the effect of 2 types of auditory feedback on the kinematics of reaching movements in hemiparetic stroke patients and to compare differences between patients with right (RHD) and left hemisphere damage (LHD). Methods: 10 healthy controls, 8 stroke patients with LHD and 8 with RHD were included. Patient groups had similar levels of upper limb function. Two types of auditory feedback (spatial and simple) were developed and provided online during reaching movements to 9 targets in the workspace. Kinematics of the upper limb were recorded with an electromagnetic system. Kinematics were compared between groups (Mann-Whitney test) and the effect of auditory feedback on kinematics was tested within each patient group (Friedman test). Results: In the patient groups, peak hand velocity was lower, the number of velocity peaks was higher and movements were more curved than in the healthy group. Despite having a similar clinical level, kinematics differed between the LHD and RHD groups. Peak velocity was similar, but LHD patients had fewer velocity peaks and less curved movements than RHD patients. The addition of auditory feedback improved the curvature index in patients with RHD and worsened peak velocity, the number of velocity peaks and the curvature index in LHD patients. No difference between types of feedback was found in either patient group. Conclusion: In stroke patients, side of lesion should be considered when examining arm reaching kinematics. Further studies are necessary to evaluate differences in responses to auditory feedback between patients with lesions in opposite
McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia
Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their
Stekelenburg, J.J.; Vroomen, J.
The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials
Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha
Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hötting, Kirsten; Röder, Brigitte
... his/her dialect. The idea that sensory deprivation leads to an improvement in the remaining senses has been discussed in philosophy and psychology for a long time (e.g. James, 1890, pp. 203ff). On the other hand, it has been argued that vision is necessary to calibrate, in particular, spatial perception of the other senses (Knudsen and Brainard, 19...
Paillard, Aurore C; Quarck, Gaëlle; Denise, Pierre
Spatial disorientation is defined as an erroneous body orientation perceived by pilots during flights. Limits of the vestibular system provoke frequent spatial disorientation mishaps. Although vestibular spatial disorientation is experienced frequently in aviation, there is no intuitive countermeasure against spatial disorientation mishaps to date. The aim of this review is to describe the current sensorial countermeasures and to examine future leads in sensorial ergonomics for vestibular spatial disorientation. This work reviews: 1) the visual ergonomics, 2) the vestibular countermeasures, 3) the auditory displays, 4) the somatosensory countermeasures, and, finally, 5) the multisensory displays. This review emphasizes the positive aspects of auditory and somatosensory countermeasures as well as multisensory devices. Even if some aspects such as sensory conflict and motion sickness need to be assessed, these countermeasures should be taken into consideration for ergonomics work in the future. However, a recent development in aviation might offer new and better perspectives: unmanned aerial vehicles. Unmanned aerial vehicles aim to go beyond the physiological boundaries of human sensorial systems and would allow for coping with spatial disorientation and motion sickness. Even if research is necessary to improve the interaction between machines and humans, this recent development might be incredibly useful for decreasing or even stopping vestibular spatial disorientation.
In this paper I discuss the relationship between two different approaches to critical theory, the reflective and the receptive approaches. I show how it can be fruitful to discuss the relationship between Habermas and Foucault through this distinction. My point is that whereas Habermas focusses on critique as a reflexive activity, Foucault mainly focusses on the receptive conditions for critique to be possible. I argue further that, because Foucault focusses on the receptive aspects of critique, the quest for universality is not as pressing as it is in Habermas' approach, because problematizing critique can...