WorldWideScience

Sample records for human auditory filters

  1. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    Science.gov (United States)

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  2. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.

    2008-01-01

    consistent with the perceptual data obtained with the same stimuli and with results from simulations of neural activity at the output of cochlear preprocessing. These findings demonstrate that phase effects in peripheral auditory processing are accurately reflected up to the level of the auditory cortex....

  3. Auditory filters at low frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    -ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...... was found to steadily decrease with decreasing center frequency. Although the observed decrease in filter bandwidth with decreasing center frequency was only approximately monotonic, the preliminary data indicates the filter bandwidth does not stabilize around 100 Hz, e.g. it still decreases below...

  4. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable, until the notch width exceeds the filter bandwidth. Behavioral procedures have been used to derive notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks constrain the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
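
    The power-spectrum model behind the notched noise method can be sketched in a few lines. The snippet below is not from the paper: the roex(p) filter shape and the slope value p are illustrative assumptions. It shows how the noise power admitted by the auditory filter, and hence the predicted masked threshold, falls as the spectral notch widens:

```python
import math

def roex_weight(g, p):
    """Rounded-exponential (roex) filter weight at normalized
    frequency offset g = |f - f0| / f0, with slope parameter p."""
    return (1.0 + p * g) * math.exp(-p * g)

def filtered_noise_power(notch_g, p, upper_g=0.8, step=1e-4):
    """Noise power passing the filter for a symmetric spectral notch
    of normalized half-width notch_g, with unit noise density."""
    power, g = 0.0, notch_g
    while g < upper_g:
        power += 2.0 * roex_weight(g, p) * step  # both sides of f0
        g += step
    return power

# Under the power-spectrum model, the masked threshold tracks the
# noise power admitted by the filter (dB, arbitrary reference).
p = 25.0  # illustrative slope for a normal-hearing listener
for notch in (0.0, 0.1, 0.2, 0.3):
    thr = 10.0 * math.log10(filtered_noise_power(notch, p))
    print(f"notch half-width g = {notch:.1f} -> relative threshold {thr:5.1f} dB")
```

    Fitting p (and hence the filter bandwidth) amounts to matching these predicted thresholds to the measured ones.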

  5. Low power adder based auditory filter architecture.

    Science.gov (United States)

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

    Cochlear devices are battery powered and should have a long working life, so that devices do not need replacement every few years; low power consumption is therefore required. These devices contain numerous filters, each responsible for a different frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology was followed to carry out the power analysis. The proposed FIR filter architecture reduces leakage power by 15% and improves performance by 2.76%.
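
    The multiplierless LUT idea can be illustrated with distributed arithmetic, a standard technique for LUT-based FIR filtering. The sketch below is generic, not the paper's architecture; the coefficients and bit width are hypothetical, and inputs are assumed unsigned to keep the sign handling out of the way:

```python
# Distributed arithmetic: precompute a LUT of every partial sum of the
# coefficients; each input bit-plane then selects one LUT entry, so the
# multiply-accumulate needs only shifts and adds (no multipliers).
COEFFS = [3, -1, 4, 2]          # hypothetical integer FIR taps
N = len(COEFFS)
LUT = [sum(c for i, c in enumerate(COEFFS) if (addr >> i) & 1)
       for addr in range(1 << N)]

def fir_da(samples, nbits=8):
    """FIR output via the LUT for unsigned nbits-wide samples:
    one LUT access per bit-plane of the N most recent samples."""
    out, hist = [], [0] * N          # hist = x[n], x[n-1], ...
    for x in samples:
        hist = [x] + hist[:-1]
        acc = 0
        for b in range(nbits):
            addr = sum(((hist[i] >> b) & 1) << i for i in range(N))
            acc += LUT[addr] << b    # shift-and-add only
        out.append(acc)
    return out

def fir_direct(samples):
    """Reference direct-form FIR for comparison."""
    out, hist = [], [0] * N
    for x in samples:
        hist = [x] + hist[:-1]
        out.append(sum(c * h for c, h in zip(COEFFS, hist)))
    return out

sig = [7, 0, 255, 128, 3, 9]
assert fir_da(sig) == fir_direct(sig)    # bit-exact agreement
```

    In hardware, the per-bit LUT outputs arriving each clock cycle are exactly what the paper's power-aware adders would accumulate.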

  6. Low Power Adder Based Auditory Filter Architecture

    Directory of Open Access Journals (Sweden)

    P. F. Khaleelur Rahiman

    2014-01-01

    Cochlear devices are battery powered and should have a long working life, so that devices do not need replacement every few years; low power consumption is therefore required. These devices contain numerous filters, each responsible for a different frequency band, which helps in identifying speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology was followed to carry out the power analysis. The proposed FIR filter architecture reduces leakage power by 15% and improves performance by 2.76%.

  7. Estimating auditory filter bandwidth using distortion product otoacoustic emissions

    DEFF Research Database (Denmark)

    Hauen, Sigurd van; Rukjær, Andreas Harbo; Ordoñez Pizarro, Rodrigo Eduardo

    2017-01-01

    The basic frequency selectivity in the listener’s hearing is often characterized by auditory filters. These filters are determined through listening tests, which determine the masking threshold as a function of frequency of the tone and the bandwidth of the masking sound. The auditory filters have...

  8. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  9. Rapid estimation of high-parameter auditory-filter shapes

    Science.gov (United States)

    Shen, Yi; Sivakumar, Rajeswari; Richards, Virginia M.

    2014-01-01

    A Bayesian adaptive procedure, the quick-auditory-filter (qAF) procedure, was used to estimate auditory-filter shapes that were asymmetric about their peaks. In three experiments, listeners who were naive to psychoacoustic experiments detected a fixed-level, pure-tone target presented with a spectrally notched noise masker. The qAF procedure adaptively manipulated the masker spectrum level and the position of the masker notch, which was optimized for the efficient estimation of the five parameters of an auditory-filter model. Experiment I demonstrated that the qAF procedure provided a convergent estimate of the auditory-filter shape at 2 kHz within 150 to 200 trials (approximately 15 min to complete) and, for a majority of listeners, excellent test-retest reliability. In experiment II, asymmetric auditory filters were estimated for target frequencies of 1 and 4 kHz and target levels of 30 and 50 dB sound pressure level. The estimated filter shapes were generally consistent with published norms, especially at the low target level. It is known that the auditory-filter estimates are narrower for forward masking than simultaneous masking due to peripheral suppression, a result replicated in experiment III using fewer than 200 qAF trials. PMID:25324086
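
    The qAF procedure optimizes a five-parameter filter model and places stimuli by expected information gain. As a hedged miniature of the same idea, the sketch below runs a grid-based Bayesian adaptive track for a single threshold parameter, assuming a logistic psychometric function and a simple posterior-mean stimulus rule (both are simplifications, not the qAF method itself):

```python
import math
import random

random.seed(0)

def p_correct(level, threshold, slope=1.0):
    """Assumed logistic psychometric function (detection probability)."""
    return 0.5 + 0.5 / (1.0 + math.exp(-slope * (level - threshold)))

TRUE_THRESHOLD = 42.0                            # simulated listener
grid = [t * 0.5 for t in range(40, 121)]         # candidates: 20..60 dB
posterior = [1.0 / len(grid)] * len(grid)        # flat prior

for trial in range(200):
    # Present the next trial at the posterior mean (the real qAF
    # instead maximizes the expected information gain).
    level = sum(t * w for t, w in zip(grid, posterior))
    correct = random.random() < p_correct(level, TRUE_THRESHOLD)
    # Bayes update of the posterior over candidate thresholds.
    like = [p_correct(level, t) if correct else 1.0 - p_correct(level, t)
            for t in grid]
    posterior = [w * l for w, l in zip(posterior, like)]
    z = sum(posterior)
    posterior = [w / z for w in posterior]

estimate = sum(t * w for t, w in zip(grid, posterior))
print(f"threshold estimate after 200 trials: {estimate:.1f} dB")
```

    The same machinery, extended to five parameters and a smarter placement rule, is what lets qAF converge on a full filter shape in 150 to 200 trials.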

  10. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    R. F. Lyon

    2007-12-01

    This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader in context, the paper starts with a short review of the gammatone filter and then exposes two of its variants, namely, the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlear transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, a linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests their implementation by means of cascades of N identical two-pole systems, which renders them excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and that can also serve as “design curves” for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a “missing link” between physiological, electrical, and mechanical models for auditory filtering.
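
    The gammatone impulse response that these variants build on has a simple closed form. The sketch below samples the standard gammatone (not the DAPGF or OZGF variants), assuming the common Glasberg-and-Moore ERB bandwidth formula:

```python
import math

def erb(fc):
    """Equivalent rectangular bandwidth (Hz) of the human auditory
    filter at centre frequency fc (Hz), per Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, n=4, duration=0.05):
    """Sampled gammatone impulse response:
    g(t) = t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*fc*t), b = 1.019 * ERB(fc)."""
    b = 1.019 * erb(fc)
    out = []
    for k in range(int(duration * fs)):
        t = k / fs
        out.append(t ** (n - 1) * math.exp(-2 * math.pi * b * t)
                   * math.cos(2 * math.pi * fc * t))
    return out

ir = gammatone_ir(1000.0, 16000.0)
peak = max(abs(v) for v in ir)
print(f"ERB at 1 kHz: {erb(1000.0):.1f} Hz, peak |g(t)| = {peak:.2e}")
```

    The order n = 4 cascade structure implied by the t^(n-1) envelope is exactly what makes the two-pole-cascade VLSI realizations in the paper attractive.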

  11. Estimating individual listeners’ auditory-filter bandwidth in simultaneous and non-simultaneous masking

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Caminade, Sabine; Strelcyk, Olaf

    2010-01-01

    Frequency selectivity in the human auditory system is often measured using simultaneous masking of tones presented in notched noise. Based on such masking data, the equivalent rectangular bandwidth (ERB) of the auditory filters can be derived by applying the power spectrum model of masking....... Considering bandwidth estimates from previous studies based on forward masking, only average data across a number of subjects have been considered. The present study is concerned with bandwidth estimates in simultaneous and forward masking in individual normal-hearing subjects. In order to investigate...... the reliability of the individual estimates, a statistical resampling method is applied. It is demonstrated that a rather large set of experimental data is required to reliably estimate auditory filter bandwidth, particularly in the case of simultaneous masking. The poor overall reliability of the filter...
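
    The statistical resampling referred to above is typically a bootstrap. A minimal sketch, assuming hypothetical repeated ERB estimates for one listener (the values below are invented for illustration), shows how a percentile bootstrap quantifies the reliability of an individual bandwidth estimate:

```python
import random

random.seed(1)

# Hypothetical repeated ERB estimates (Hz) for one listener.
erb_estimates = [240, 310, 270, 350, 260, 300, 280, 330, 290, 320]

def bootstrap_ci(data, stat=lambda d: sum(d) / len(d),
                 n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a statistic:
    resample the data with replacement and take empirical quantiles."""
    reps = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

lo, hi = bootstrap_ci(erb_estimates)
mean = sum(erb_estimates) / len(erb_estimates)
print(f"mean ERB = {mean:.0f} Hz, 95% CI ~ [{lo:.0f}, {hi:.0f}] Hz")
```

    A wide interval here corresponds to the study's finding that a rather large data set is needed before the bandwidth estimate can be trusted.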

  12. The spectrotemporal filter mechanism of auditory selective attention

    Science.gov (United States)

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  13. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  14. The relation between otoacoustic emissions and the broadening of the auditory filter for higher levels

    NARCIS (Netherlands)

    Leeuw, A. R.; Dreschler, W. A.

    1998-01-01

    The active behaviour of outer hair cells (OHCs) is often used to explain two phenomena, namely otoacoustic emissions (OAEs) and the level dependence of auditory filters. Correlations between these two phenomena may contribute to the evidence of these hypotheses. In this study auditory filters were

  15. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise them as a person walking: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity to three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  16. Auditory DUM neurons in a bush-cricket: A filter bank for carrier frequency.

    Science.gov (United States)

    Lefebvre, Paule Chloé; Seifert, Marvin; Stumpner, Andreas

    2018-05-01

    In bush-crickets the first stage of central auditory processing occurs in the prothoracic ganglion. About 15 to 50 different auditory dorsal unpaired median neurons (DUM neurons) exist, but they have not been studied in any detail. These DUM neurons may be classified into seven different morphological types, although there is only limited correlation between morphology and physiological responses. Ninety-seven percent of the stained neurons were local, 3% were intersegmental. About 90% project nearly exclusively into the auditory neuropile, and 45% into restricted areas therein. Lateral extensions overlap with the axons of primary auditory sensory neurons close to their branching point. DUM neurons are typically tuned to frequencies covering the range between 2 and 50 kHz and thereby may establish a filter bank for carrier frequency. Less than 10% of DUM neurons have their branches in adjacent and more posterior regions of the auditory neuropile; these are mostly tuned to low frequencies, are less sensitive than the other types, and respond to vibration. Thirty-five percent of DUM neurons show indications of inhibition, either through reduced responses at higher intensities or by hyperpolarizing responses to sound. Most DUM neurons produce phasic spike responses, preferably at higher intensities. Spikes may be elicited by intracellular current injection. Preliminary data suggest that auditory DUM neurons have GABA as transmitter and therefore may inhibit other auditory interneurons. Of all known local auditory neurons, only DUM neurons have frequency-specific responses, which appear suited for local processing relevant for acoustic communication in bush-crickets. © 2018 Wiley Periodicals, Inc.

  17. Analog and digital filtering of the brain stem auditory evoked response.

    Science.gov (United States)

    Kavanagh, K T; Franks, R

    1989-07-01

    This study compared the filtering effects on the auditory evoked potential of zero and standard phase shift digital filters (the former was a mathematical approximation of a standard Butterworth filter). Conventional filters were found to decrease the height of the evoked response in the majority of waveforms compared to zero phase shift filters. A 36-dB/octave zero phase shift high pass filter with a cutoff frequency of 100 Hz produced a 16% reduction in wave amplitude compared to the unfiltered control. A 36-dB/octave, 100-Hz standard phase shift high pass filter produced a 41% reduction, and a 12-dB/octave, 150-Hz standard phase shift high pass filter produced a 38% reduction in wave amplitude compared to the unfiltered control. A decrease in the mean along with an increase in the variability of wave IV/V latency was also noted with conventional compared to zero phase shift filters. The increase in the variability of the latency measurement was due to the difficulty in waveform identification caused by the phase shift distortion of the conventional filter along with the variable decrease in wave latency caused by phase shifting responses with different spectral content. Our results indicated that a zero phase shift high pass filter of 100 Hz was the most desirable filter studied for the mitigation of spontaneous brain activity and random muscle artifact.
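
    Zero phase shift filtering of this kind is conventionally obtained by running a causal filter forward and then backward over the recorded epoch (as scipy.signal.filtfilt does for arbitrary IIR filters). The stdlib-only sketch below uses a one-pole smoother standing in for the study's Butterworth filters, to show how the forward-backward pass removes the latency shift that a causal filter introduces:

```python
def one_pole(x, a=0.3):
    """Causal first-order low-pass smoother (phase-shifting)."""
    y, s = [], 0.0
    for v in x:
        s = a * v + (1.0 - a) * s
        y.append(s)
    return y

def zero_phase(x, a=0.3):
    """Forward-backward filtering: the backward pass cancels the
    phase delay introduced by the forward pass."""
    return one_pole(one_pole(x, a)[::-1], a)[::-1]

# Symmetric test pulse: a triangle peaking at sample 50.
tri = [min(k, 100 - k) for k in range(101)]

fwd = one_pole(tri)
zp = zero_phase(tri)
print("peak in:", tri.index(max(tri)),
      "causal:", fwd.index(max(fwd)),
      "zero-phase:", zp.index(max(zp)))
```

    The causal filter shifts the peak to a later sample, which is exactly the latency bias (and waveform-dependent variability) the study reports for conventional filters; the zero-phase version leaves the peak in place.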

  18. Effects of analog and digital filtering on auditory middle latency responses in adults and young children.

    Science.gov (United States)

    Suzuki, T; Hirabayashi, M; Kobayashi, K

    1984-01-01

    Effects of analog high pass (HP) filtering were compared with those of zero phase-shift digital filtering on the auditory middle latency responses (MLR) from nine adults and 16 young children with normal hearing. Analog HP filtering exerted several prominent effects on the MLR waveforms in both adults and young children, such as suppression of Po (ABR), enhancement of Nb, enhancement or emergence of Pb, and latency decrements for Pa and the later components. Analog HP filtering at 20 Hz produced more pronounced waveform distortions in the responses from young children than from adults. Much greater latency decrements for Pa and Nb were observed for young children than for adults in the analog HP-filtered responses at 20 Hz. A large positive peak (Pb) emerged at about 65 ms after the stimulus onset. From these results, the use of digital HP filtering at 20 Hz is strongly recommended for obtaining unbiased and stable MLR in young children.

  19. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

    Detailed acoustic engineering models explore the noise propagation mechanisms associated with noise attenuation and the transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle ear ossicular bone, and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as prioritizing noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  20. Tuning of Human Modulation Filters Is Carrier-Frequency Dependent

    Science.gov (United States)

    Simpson, Andrew J. R.; Reiss, Joshua D.; McAlpine, David

    2013-01-01

    Recent studies employing speech stimuli to investigate ‘cocktail-party’ listening have focused on entrainment of cortical activity to modulations at syllabic (5 Hz) and phonemic (20 Hz) rates. The data suggest that cortical modulation filters (CMFs) are dependent on the sound-frequency channel in which modulations are conveyed, potentially underpinning a strategy for separating speech from background noise. Here, we characterize modulation filters in human listeners using a novel behavioral method. Within an ‘inverted’ adaptive forced-choice increment detection task, listening level was varied whilst contrast was held constant for ramped increments with effective modulation rates between 0.5 and 33 Hz. Our data suggest that modulation filters are tonotopically organized (i.e., vary along the primary, frequency-organized, dimension). This suggests that the human auditory system is optimized to track rapid (phonemic) modulations at high sound-frequencies and slow (prosodic/syllabic) modulations at low frequencies. PMID:24009759

  1. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a

  2. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

    Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflects inhibition via interneurons, using magnetoencephalography. An abrupt increase in sound pressure by 10 dB in a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak prepulse (a 5 dB increase for 1 ms). The time course of the inhibition, evaluated by prepulses presented 10-800 ms before the test stimulus, showed at least two temporally distinct inhibitions, peaking at approximately 20-60 and 600 ms, that presumably reflect IPSPs from fast-spiking, parvalbumin-positive cells and somatostatin-positive Martinotti cells, respectively. In another experiment, we confirmed that the degree of the inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and prepulse-evoked inhibition reflect activation in two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.

  3. Audlet Filter Banks: A Versatile Analysis/Synthesis Framework Using Auditory Frequency Scales

    Directory of Open Access Journals (Sweden)

    Thibaud Necciari

    2018-01-01

    Many audio applications rely on filter banks (FBs) to analyze, process, and re-synthesize sounds. For these applications, an important property of the analysis–synthesis system is the reconstruction error; it has to be minimized to avoid audible artifacts. Other advantageous properties include stability and low redundancy. To exploit some aspects of auditory perception in the signal chain, some applications rely on FBs that approximate the frequency analysis performed in the auditory periphery, the gammatone FB being a popular example. However, current gammatone FBs only allow partial reconstruction and stability at high redundancies. In this article, we construct an analysis–synthesis system for audio applications. The proposed system, referred to as Audlet, is an oversampled FB with filters distributed on auditory frequency scales. It allows perfect reconstruction for a wide range of FB settings (e.g., the shape and density of filters), efficient FB design, and adaptable redundancy. In particular, we show how to construct a gammatone FB with perfect reconstruction. Experiments demonstrate performance improvements of the proposed gammatone FB when compared to current gammatone FBs in terms of reconstruction error and stability, especially at low redundancies. An application of the framework to audio source separation illustrates its utility for audio processing.
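
    A core ingredient of perfect reconstruction in oversampled filter banks is that the shifted analysis windows sum to a constant across hops (the constant-overlap-add, or COLA, condition). The sketch below is not the Audlet construction; it demonstrates the condition with a Hann window at 50% overlap, omitting the (invertible) sub-band transform that would normally sit between analysis and synthesis:

```python
import math

N, HOP = 8, 4                       # frame length and hop (50% overlap)
# Periodic Hann window: copies shifted by HOP sum exactly to 1
# (the COLA condition), which is what makes reconstruction perfect.
win = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]
assert all(abs(win[n] + win[n + HOP] - 1.0) < 1e-12 for n in range(HOP))

x = [math.sin(0.3 * k) + 0.1 * k for k in range(64)]

# Analysis: windowed frames. Each frame would normally be mapped to
# sub-band coefficients; that transform is invertible, so skipping it
# does not affect the reconstruction property shown here.
frames = [[x[i + n] * win[n] for n in range(N)]
          for i in range(0, len(x) - N + 1, HOP)]

# Synthesis: plain overlap-add of the frames.
y = [0.0] * len(x)
for j, frame in enumerate(frames):
    for n in range(N):
        y[j * HOP + n] += frame[n]

# Perfect reconstruction away from the edges (the first and last HOP
# samples are covered by only one window copy).
err = max(abs(x[k] - y[k]) for k in range(HOP, len(x) - HOP))
print(f"max interior reconstruction error: {err:.2e}")
```

    The Audlet framework generalizes this idea to filters placed on auditory frequency scales, where the dual (synthesis) windows must be computed rather than assumed.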

  4. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

    Michelle eMoerel

    2014-07-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights into the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that, whereas a group-based approach to analyzing functional (tonotopic) maps is appropriate to highlight the main tonotopic axis, the examination of tonotopic maps at the single-subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  5. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results on twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.

  6. Functional sex differences in human primary auditory cortex

    International Nuclear Information System (INIS)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W.J.; Willemsen, Antoon T.M.

    2007-01-01

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  7. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women. But this difference between the two stimuli was significantly higher in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and revealed that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. Thus we hypothesize that differences in attention result in a different deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  8. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  9. Intracerebral evidence of rhythm transform in the human auditory cortex.

    Science.gov (United States)

    Nozaradan, Sylvie; Mouraux, André; Jonas, Jacques; Colnat-Coulbois, Sophie; Rossion, Bruno; Maillard, Louis

    2017-07-01

    Musical entrainment is shared by all human cultures and the perception of a periodic beat is a cornerstone of this entrainment behavior. Here, we investigated whether beat perception might have its roots in the earliest stages of auditory cortical processing. Local field potentials were recorded from 8 patients implanted with depth-electrodes in Heschl's gyrus and the planum temporale (55 recording sites in total), usually considered as human primary and secondary auditory cortices. Using a frequency-tagging approach, we show that both low-frequency (<30 Hz) and high-frequency (>30 Hz) neural activities in these structures faithfully track auditory rhythms through frequency-locking to the rhythm envelope. A selective gain in amplitude of the response frequency-locked to the beat frequency was observed for the low-frequency activities but not for the high-frequency activities, and was sharper in the planum temporale, especially for the more challenging syncopated rhythm. Hence, this gain process is not systematic in all activities produced in these areas and depends on the complexity of the rhythmic input. Moreover, this gain was disrupted when the rhythm was presented at fast speed, revealing low-pass response properties which could account for the propensity to perceive a beat only within the musical tempo range. Together, these observations show that, even though part of these neural transforms of rhythms could already take place in subcortical auditory processes, the earliest auditory cortical processes shape the neural representation of rhythmic inputs in favor of the emergence of a periodic beat.
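
The frequency-tagging logic summarized above can be illustrated numerically: the amplitude spectrum of a recorded time series is read out at the frequency of the stimulus rhythm, after correcting for the surrounding noise floor. Below is a minimal sketch in Python/NumPy on simulated data; the sampling rate, beat frequency, and noise-floor window are illustrative choices, not the study's parameters.

```python
import numpy as np

fs = 500.0                  # sampling rate in Hz (illustrative)
beat_f = 2.4                # putative beat frequency in Hz (illustrative)
t = np.arange(0, 20, 1 / fs)

# Simulated field potential: a component frequency-locked to the beat
# plus broadband noise.
rng = np.random.default_rng(0)
lfp = 1.5 * np.sin(2 * np.pi * beat_f * t) + rng.normal(0, 1, t.size)

# Amplitude spectrum; with this duration the beat falls on an exact FFT bin.
spectrum = np.abs(np.fft.rfft(lfp)) / (t.size / 2)
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Response at the tagged frequency, minus the mean of neighboring bins
# (a common correction for the broadband noise floor).
idx = int(np.argmin(np.abs(freqs - beat_f)))
noise_floor = np.mean(np.r_[spectrum[idx - 5:idx - 1], spectrum[idx + 2:idx + 6]])
tagged_amplitude = spectrum[idx] - noise_floor
```

A frequency-locked response shows up as an amplitude peak at the tagged frequency that survives the noise-floor subtraction.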

  10. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Alva Engell

    Full Text Available Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention on the modulation of this mechanism in more evaluative processing stages.

  11. Modulatory Effects of Attention on Lateral Inhibition in the Human Auditory Cortex.

    Science.gov (United States)

    Engell, Alva; Junghöfer, Markus; Stein, Alwina; Lau, Pia; Wunderlich, Robert; Wollbrink, Andreas; Pantev, Christo

    2016-01-01

    Reduced neural processing of a tone is observed when it is presented after a sound whose spectral range closely frames the frequency of the tone. This observation might be explained by the mechanism of lateral inhibition (LI) due to inhibitory interneurons in the auditory system. So far, several characteristics of bottom-up influences on LI have been identified, while the influence of top-down processes such as directed attention on LI has not been investigated. Hence, the study at hand aims at investigating the modulatory effects of focused attention on LI in the human auditory cortex. In the magnetoencephalograph, we presented two types of masking sounds (white noise vs. white noise passing through a notch filter centered at a specific frequency), followed by a test tone with a frequency corresponding to the center-frequency of the notch filter. Simultaneously, subjects were presented with visual input on a screen. To modulate the focus of attention, subjects were instructed to concentrate either on the auditory input or the visual stimuli. More specifically, on one half of the trials, subjects were instructed to detect small deviations in loudness in the masking sounds while on the other half of the trials subjects were asked to detect target stimuli on the screen. The results revealed a reduction in neural activation due to LI, which was larger during auditory compared to visual focused attention. Attentional modulations of LI were observed in two post-N1m time intervals. These findings underline the robustness of reduced neural activation due to LI in the auditory cortex and point towards the important role of attention on the modulation of this mechanism in more evaluative processing stages.
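
The masker construction described above (white noise with and without a notch framing the test-tone frequency) can be sketched with a band-stop filter. The sketch below uses SciPy; the center frequency, notch width, and filter order are assumptions for illustration, not the study's stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100                      # audio sampling rate in Hz
center = 1000.0                 # test-tone / notch center frequency (assumed)
low, high = center * 0.75, center * 1.25   # notch edges (assumed width)

rng = np.random.default_rng(1)
white = rng.normal(0, 1, fs)    # 1 s of broadband white noise

# Band-stop ("notched") version of the same noise, centered on the
# frequency of the later test tone.
sos = butter(4, [low, high], btype="bandstop", fs=fs, output="sos")
notched = sosfiltfilt(sos, white)
```

Zero-phase filtering (`sosfiltfilt`) avoids phase distortion; the notched masker then spares the frequency region of the test tone while masking its spectral surround.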

  12. Differences in auditory timing between human and nonhuman primates

    NARCIS (Netherlands)

    Honing, H.; Merchant, H.

    2014-01-01

    The gradual audiomotor evolution hypothesis is proposed as an alternative interpretation to the auditory timing mechanisms discussed in Ackermann et al.'s article. This hypothesis accommodates the fact that the performance of nonhuman primates is comparable to humans in single-interval tasks (such

  13. Human auditory steady state responses to binaural and monaural beats.

    Science.gov (United States)

    Schwarz, D W F; Taylor, P

    2005-03-01

    Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility of recording an auditory steady state evoked response (ASSR) at the binaural beat frequency in order to find a measure for temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, the two differing in frequency by 40 Hz, to record a binaural beat ASSR. As a control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40 Hz, averaged with respect to stimulus onset and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40 Hz period average. A 40 Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400 Hz) but became undetectable beyond 3 kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40 Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40 Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus, but generated within the brain.
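
The amplitude/phase extraction step described above (band-passed EEG folded into a 40 Hz period average, then fitted with a sinusoid) can be sketched as follows. Because a fixed-frequency sinusoid is linear in its cosine/sine coefficients, the nonlinear regression of the paper can be replaced here by ordinary least squares; apart from the 40 Hz rate, all values are illustrative.

```python
import numpy as np

fs, f = 1000.0, 40.0    # sampling rate (assumed) and ASSR frequency (per the abstract)
t = np.arange(0, 1, 1 / fs)

# Simulated band-passed EEG: a 40 Hz response with known amplitude and
# phase, buried in noise (values illustrative).
rng = np.random.default_rng(2)
true_amp, true_phase = 0.8, 0.6
eeg = true_amp * np.cos(2 * np.pi * f * t - true_phase) + rng.normal(0, 0.5, t.size)

# Fold the trace into a single 40 Hz period average (25 samples at 1 kHz).
period = int(fs / f)
avg = eeg[: t.size // period * period].reshape(-1, period).mean(axis=0)

# Least-squares fit of cos/sin regressors; amplitude and phase follow.
tp = np.arange(period) / fs
X = np.column_stack([np.cos(2 * np.pi * f * tp), np.sin(2 * np.pi * f * tp)])
a, b = np.linalg.lstsq(X, avg, rcond=None)[0]
amplitude = np.hypot(a, b)
phase = np.arctan2(b, a)
```

Period averaging suppresses activity not locked to 40 Hz, so the fitted amplitude and phase characterize only the steady-state response.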

  14. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  15. Human attention filters for single colors

    Science.gov (United States)

    Sun, Peng; Chubb, Charles; Wright, Charles E.; Sperling, George

    2016-01-01

    The visual images in the eyes contain much more information than the brain can process. An important selection mechanism is feature-based attention (FBA). FBA is best described by attention filters that specify precisely the extent to which items containing attended features are selectively processed and the extent to which items that do not contain the attended features are attenuated. The centroid-judgment paradigm enables quick, precise measurements of such human perceptual attention filters, analogous to transmission measurements of photographic color filters. Subjects use a mouse to locate the centroid—the center of gravity—of a briefly displayed cloud of dots and receive precise feedback. A subset of dots is distinguished by some characteristic, such as a different color, and subjects judge the centroid of only the distinguished subset (e.g., dots of a particular color). The analysis efficiently determines the precise weight in the judged centroid of dots of every color in the display (i.e., the attention filter for the particular attended color in that context). We report 32 attention filters for single colors. Attention filters that discriminate one saturated hue from among seven other equiluminant distractor hues are extraordinarily selective, achieving attended/unattended weight ratios >20:1. Attention filters for selecting a color that differs in saturation or lightness from distractors are much less selective than attention filters for hue (given equal discriminability of the colors), and their filter selectivities are proportional to the discriminability distance of neighboring colors, whereas in the same range hue attention-filter selectivity is virtually independent of discriminability. PMID:27791040
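
The weight-recovery logic of the centroid paradigm can be sketched numerically: if the judged centroid is modeled as a weighted mean of dot positions, regressing judgments on the per-color centroids recovers the attention-filter weight of each color. The three-color setup, weight values, and noise level below are invented for illustration and are not the study's design.

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials, n_colors, dots_per_color = 200, 3, 8
true_w = np.array([1.0, 0.2, 0.1])      # hypothetical filter weights

X = np.zeros((n_trials, n_colors))      # per-color mean dot position (x only)
y = np.zeros(n_trials)                  # simulated centroid judgments
for i in range(n_trials):
    pos = rng.uniform(-1, 1, (n_colors, dots_per_color))
    X[i] = pos.mean(axis=1)
    # Observer model: weighted centroid of all dots, plus response noise.
    y[i] = true_w @ X[i] / true_w.sum() + rng.normal(0, 0.02)

# Regression of judgments on per-color centroids recovers the weights up
# to a scale factor; normalize so the largest weight equals 1.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
w_hat = w_hat / w_hat.max()
```

A perfectly selective attention filter would assign weight 1 to the attended color and 0 to all distractor colors; the recovered weight ratios quantify how close the observer comes to that ideal.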

  16. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng eRong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants were performing an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect being located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both DMS and control tasks. Furthermore, analysis of coherence revealed a beta-band (12-20 Hz) DMS-specific enhanced functional interaction between the sources in left auditory cortex and those in left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
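
The coherence analysis mentioned above can be sketched with SciPy's Welch-based magnitude-squared coherence estimator, evaluated in the beta band. The two simulated "sources", their shared 16 Hz component, and all parameters below are illustrative stand-ins, not data or settings from the study.

```python
import numpy as np
from scipy.signal import coherence

fs = 250.0
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(7)

# Two simulated source time series sharing a 16 Hz (beta-band) component,
# each with independent background noise.
shared = np.sin(2 * np.pi * 16 * t + rng.uniform(0, 2 * np.pi))
src_auditory = shared + rng.normal(0, 1, t.size)
src_frontal = shared + rng.normal(0, 1, t.size)

# Welch-based magnitude-squared coherence between the two sources.
f, coh = coherence(src_auditory, src_frontal, fs=fs, nperseg=512)
beta = (f >= 12) & (f <= 20)
beta_coh = coh[beta].mean()     # mean coherence in the beta band
```

Coherence near 1 at a given frequency indicates a consistent phase/amplitude relationship between the two sources there; comparing band-averaged coherence across task conditions is one way to quantify task-specific functional interaction.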

  17. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan eLuo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization while gamma band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
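
One common way to quantify the phase tracking invoked above is inter-trial phase coherence (ITPC): single trials are band-pass filtered, reduced to instantaneous phase, and the consistency of phase across trials is measured at each time point. The sketch below illustrates the measure on simulated theta-range trials; the band, trial count, and signal model are illustrative assumptions, not the study's analysis settings.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

fs = 200
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(6)
n_trials = 30

def phase_coherence(trials, fs, band=(3.0, 8.0)):
    """Inter-trial phase coherence: 1 = perfectly phase-locked across trials."""
    sos = butter(2, band, btype="bandpass", fs=fs, output="sos")
    phases = np.angle(hilbert(sosfiltfilt(sos, trials, axis=1), axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Simulated trials phase-locked to a 5 Hz (theta-range) stimulus
# component, each with independent noise.
locked = np.array([np.sin(2 * np.pi * 5 * t + 0.2) + rng.normal(0, 1, t.size)
                   for _ in range(n_trials)])
itpc = phase_coherence(locked, fs)
```

Stimuli whose temporal structure matches the band drive ITPC toward 1, while unlocked activity yields values near the chance floor set by the trial count.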

  18. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    Science.gov (United States)

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  19. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    in listeners with SNHL, it is likely that HI listeners rely on the enhanced envelope cues to retrieve the pitch of unresolved harmonics. Hence, the relative importance of pitch cues may be altered in HI listeners, whereby envelope cues may be used instead of TFS cues to obtain a similar performance in pitch......Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues...... that are necessary for the auditory system to retrieve the pitch of complex sounds. The existence of different pitch-coding mechanisms for low-numbered (spectrally resolved) and high-numbered (unresolved) harmonics was investigated by comparing pitch-discrimination performance across different cohorts of listeners...

  20. Glottal inverse filtering analysis of human voice production — A ...

    Indian Academy of Sciences (India)

    A (grossly) simplified manner to study the functioning of the human speech production ...... selective auditory impairment in autism: can perceive but do not attend, Proc. Natl. Acad. .... Fritzell B 1996 Voice disorders and occupations, Logoped.

  1. The effect of compression on tuning estimates in a simple nonlinear auditory filter model

    DEFF Research Database (Denmark)

    Marschall, Marton; MacDonald, Ewen; Dau, Torsten

    2013-01-01

    Behavioral experiments using auditory masking have been used to characterize frequency selectivity, one of the basic properties of the auditory system. However, due to the nonlinear response of the basilar membrane, the interpretation of these experiments may not be straightforward. Specifically,...

  2. Persistent neural activity in auditory cortex is related to auditory working memory in humans and nonhuman primates.

    Science.gov (United States)

    Huang, Ying; Matysiak, Artur; Heil, Peter; König, Reinhard; Brosch, Michael

    2016-07-20

    Working memory is the cognitive capacity of short-term storage of information for goal-directed behaviors. Where and how this capacity is implemented in the brain are unresolved questions. We show that auditory cortex stores information by persistent changes of neural activity. We separated activity related to working memory from activity related to other mental processes by having humans and monkeys perform different tasks with varying working memory demands on the same sound sequences. Working memory was reflected in the spiking activity of individual neurons in auditory cortex and in the activity of neuronal populations, that is, in local field potentials and magnetic fields. Our results provide direct support for the idea that temporary storage of information recruits the same brain areas that also process the information. Because similar activity was observed in the two species, the cellular bases of some auditory working memory processes in humans can be studied in monkeys.

  3. Human-Manipulator Interface Using Particle Filter

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2014-01-01

    Full Text Available This paper utilizes a human-robot interface system which incorporates a particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand. A PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases quickly when the sensors fail to detect the hand motion; therefore, a methodology to correct the translation error is required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.
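
A bootstrap particle filter of the kind used for the translation estimate above can be sketched in a few lines: predict each particle through a motion model, weight by measurement likelihood, estimate, and resample. The one-dimensional random-walk motion model and Gaussian sensor noise below are simplifying assumptions for illustration, not the paper's actual models.

```python
import numpy as np

rng = np.random.default_rng(4)
n_particles, n_steps = 500, 100
motion_std, meas_std = 0.05, 0.1    # random-walk and sensor noise (assumed)

truth = np.cumsum(rng.normal(0, motion_std, n_steps))  # true hand position
meas = truth + rng.normal(0, meas_std, n_steps)        # noisy camera readings

particles = np.zeros(n_particles)
estimates = []
for z in meas:
    # Predict: propagate each particle through the motion model.
    particles = particles + rng.normal(0, motion_std, n_particles)
    # Update: weight particles by the Gaussian measurement likelihood.
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)
    w /= w.sum()
    # Point estimate, then multinomial resampling.
    estimates.append(np.sum(w * particles))
    particles = rng.choice(particles, size=n_particles, p=w)

rmse = np.sqrt(np.mean((np.asarray(estimates) - truth) ** 2))
```

Because the filter fuses the motion model with the measurements, its tracking error is typically below the raw sensor noise; the correction methods the paper describes address the case where measurements drop out entirely.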

  4. Diffusion tractography of the subcortical auditory system in a postmortem human brain

    OpenAIRE

    Sitek, Kevin

    2017-01-01

    The subcortical auditory system is challenging to identify with standard human brain imaging techniques: MRI signal decreases toward the center of the brain as well as at higher resolution, both of which are necessary for imaging small brainstem auditory structures. Using high-resolution diffusion-weighted MRI, we asked: Can we identify auditory structures and connections in high-resolution ex vivo images? Which structures and connections can be mapped in vivo?

  5. Aging Affects Adaptation to Sound-Level Statistics in Human Auditory Cortex.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Johnsrude, Ingrid S

    2018-02-21

    Optimal perception requires efficient and adaptive neural processing of sensory input. Neurons in nonhuman mammals adapt to the statistical properties of acoustic feature distributions such that they become sensitive to sounds that are most likely to occur in the environment. However, whether human auditory responses adapt to stimulus statistical distributions and how aging affects adaptation to stimulus statistics is unknown. We used MEG to study how exposure to different distributions of sound levels affects adaptation in auditory cortex of younger (mean: 25 years; n = 19) and older (mean: 64 years; n = 20) adults (male and female). Participants passively listened to two sound-level distributions with different modes (either 15 or 45 dB sensation level). In a control block with long interstimulus intervals, allowing neural populations to recover from adaptation, neural response magnitudes were similar between younger and older adults. Critically, both age groups demonstrated adaptation to sound-level stimulus statistics, but adaptation was altered for older compared with younger people: in the older group, neural responses continued to be sensitive to sound level under conditions in which responses were fully adapted in the younger group. The lack of full adaptation to the statistics of the sensory environment may be a physiological mechanism underlying the known difficulty that older adults have with filtering out irrelevant sensory information. SIGNIFICANCE STATEMENT Behavior requires efficient processing of acoustic stimulation. Animal work suggests that neurons accomplish efficient processing by adjusting their response sensitivity depending on statistical properties of the acoustic environment. Little is known about the extent to which this adaptation to stimulus statistics generalizes to humans, particularly to older humans. We used MEG to investigate how aging influences adaptation to sound-level statistics. 
Listeners were presented with sounds drawn from

  6. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. 
Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom

  7. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Directory of Open Access Journals (Sweden)

    Juan San Juan

    Full Text Available Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000 Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to

  8. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

    A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997...... discrimination with pure tones and broadband noise, tone-in-noise detection, spectral masking with narrow-band signals and maskers, forward masking with tone signals and tone or noise maskers, and amplitude-modulation detection with narrow- and wideband noise carriers. The model can account for most of the key...... properties of the data and is more powerful than the original model. The model might be useful as a front end in technical applications....

  9. The role of auditory spectro-temporal modulation filtering and the decision metric for speech intelligibility prediction

    DEFF Research Database (Denmark)

    Chabot-Leclerc, Alexandre; Jørgensen, Søren; Dau, Torsten

    2014-01-01

    Speech intelligibility models typically consist of a preprocessing part that transforms stimuli into some internal (auditory) representation and a decision metric that relates the internal representation to speech intelligibility. The present study analyzed the role of modulation filtering...... in the preprocessing of different speech intelligibility models by comparing predictions from models that either assume a spectro-temporal (i.e., two-dimensional) or a temporal-only (i.e., one-dimensional) modulation filterbank. Furthermore, the role of the decision metric for speech intelligibility was investigated...... subtraction. The results suggested that a decision metric based on the SNRenv may provide a more general basis for predicting speech intelligibility than a metric based on the MTF. Moreover, the one-dimensional modulation filtering process was found to be sufficient to account for the data when combined...
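The SNRenv idea referenced above relates the envelope power of the noisy speech mixture to that of the noise alone. A didactic sketch of such a decision metric follows; it is not the published sEPSM implementation, and the normalization and example envelopes are assumptions for illustration:

```python
import numpy as np

def snr_env(env_mix, env_noise, eps=1e-12):
    """Envelope-domain SNR in dB, in the spirit of the SNRenv metric:
    envelope power of the speech-plus-noise mixture in excess of the
    noise-alone envelope power, relative to the noise envelope power.
    Envelope power is taken as the variance of the mean-normalized envelope."""
    p_mix = np.var(env_mix / (env_mix.mean() + eps))
    p_noise = np.var(env_noise / (env_noise.mean() + eps))
    p_speech = max(p_mix - p_noise, eps)     # excess power attributed to speech
    return 10.0 * np.log10(p_speech / (p_noise + eps))

# A strongly modulated mixture envelope against a nearly flat noise
# envelope yields a high SNRenv; weak modulation yields a lower one.
t = np.arange(8000) / 8000.0
env_noise = 1.0 + 0.05 * np.sin(2 * np.pi * 3 * t)   # residual noise modulation
strong = snr_env(1.0 + 0.5 * np.sin(2 * np.pi * 4 * t), env_noise)
weak = snr_env(1.0 + 0.1 * np.sin(2 * np.pi * 4 * t), env_noise)
```

In the models discussed above this quantity would be computed per audio channel and per modulation band (the one- vs. two-dimensional filterbank question) before being combined into an intelligibility prediction.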

  10. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement. PMID:26132703

  11. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  13. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.

  14. Empathy and the somatotopic auditory mirror system in humans

    NARCIS (Netherlands)

    Gazzola, Valeria; Aziz-Zadeh, Lisa; Keysers, Christian

    2006-01-01

    How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions [1, 2]. This system might be critical for auditory action understanding and language

  15. The human brain maintains contradictory and redundant auditory sensory predictions.

    Directory of Open Access Journals (Sweden)

    Marika Pieszek

    Full Text Available Computational and experimental research has revealed that auditory sensory predictions are derived from regularities of the current environment by using internal generative models. However, so far, what has not been addressed is how the auditory system handles situations giving rise to redundant or even contradictory predictions derived from different sources of information. To this end, we measured error signals in the event-related brain potentials (ERPs) in response to violations of auditory predictions. Sounds could be predicted on the basis of overall probability, i.e., one sound was presented frequently and another sound rarely. Furthermore, each sound was predicted by an informative visual cue. Participants' task was to use the cue and to discriminate the two sounds as fast as possible. Violations of the probability-based prediction (i.e., a rare sound) as well as violations of the visual-auditory prediction (i.e., an incongruent sound) elicited error signals in the ERPs (Mismatch Negativity [MMN] and Incongruency Response [IR]). Particular error signals were observed even when the overall probability and the visual symbol predicted different sounds. That is, the auditory system concurrently maintains and tests contradictory predictions. Moreover, if the same sound was predicted, we observed an additive error signal (scalp potential and primary current density) equaling the sum of the specific error signals. Thus, the auditory system maintains and tolerates functionally independently represented redundant and contradictory predictions. We argue that the auditory system exploits all currently active regularities in order to optimally prepare for future events.

  16. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach

    Science.gov (United States)

    Miran, Sina; Akram, Sahar; Sheikhattar, Alireza; Simon, Jonathan Z.; Zhang, Tao; Babadi, Behtash

    2018-01-01

    Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed
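The first module above, estimation of decoding coefficients by forgetting factor-based adaptive filtering, can be illustrated with a plain recursive least squares (RLS) update. Note that this sketch omits the ℓ1-regularization and forward-backward splitting the authors use, and all signals here are synthetic stand-ins for M/EEG features:

```python
import numpy as np

def rls_step(w, P, x, d, lam=0.98):
    """One step of forgetting-factor recursive least squares, tracking
    coefficients that map a feature vector x (e.g., a window of M/EEG
    samples) to a desired scalar d (e.g., an attended-speech envelope
    sample). lam in (0, 1]: smaller values forget the past faster."""
    Px = P @ x
    k = Px / (lam + x @ Px)            # gain vector
    e = d - w @ x                      # a priori prediction error
    w = w + k * e                      # coefficient update
    P = (P - np.outer(k, Px)) / lam    # inverse-correlation update
    return w, P

# Track a known linear mapping d = w_true . x + small noise.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 0.25, 2.0])
w = np.zeros(4)
P = 100.0 * np.eye(4)                  # large initial P = weak prior
for _ in range(500):
    x = rng.standard_normal(4)
    d = float(w_true @ x) + 0.01 * rng.standard_normal()
    w, P = rls_step(w, P, x, d)
```

The forgetting factor is what makes the estimator real-time capable: older samples are exponentially down-weighted, so the coefficients can follow attention switches instead of averaging over them.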

  17. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach

    Directory of Open Access Journals (Sweden)

    Sina Miran

    2018-05-01

    Full Text Available Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our

  18. Functional studies of the human auditory cortex, auditory memory and musical hallucinations

    International Nuclear Information System (INIS)

    Goycoolea, Marcos; Mena, Ismael; Neubauer, Sonia

    2004-01-01

    Objectives. 1. To determine which areas of the cerebral cortex are activated by stimulating the left ear with pure tones, and what type of stimulation occurs (e.g., excitatory or inhibitory) in these different areas. 2. To use this information as an initial step in developing a normal functional database for future studies. 3. To try to determine whether there is a biological substrate to the process of recalling previous auditory perceptions and, if possible, suggest a locus for auditory memory. Method. Brain perfusion single photon emission computerized tomography (SPECT) evaluation was conducted: 1-2) Using auditory stimulation with pure tones in 4 volunteers with normal hearing. 3) In a patient with bilateral profound hearing loss who had auditory perception of previous musical experiences; she was injected with Tc99m HMPAO while having the sensation of hearing a well-known melody. Results. Both in the patient with auditory hallucinations and the normal controls (stimulated with pure tones) there was a statistically significant increase in perfusion in Brodmann's area 39, more intense on the right side (right to left p < 0.05). With a lesser intensity there was activation in the adjacent area 40, and there was intense activation also in the executive frontal cortex areas 6, 8, 9, and 10 of Brodmann. There was also activation of area 7 of Brodmann, an audio-visual association area, more marked on the right side in the patient and the normal stimulated controls. In the subcortical structures there was also marked activation in the patient with hallucinations in both lentiform nuclei, thalamus and caudate nuclei, also more intense in the right hemisphere, 5, 4.7 and 4.2 S.D. above the mean respectively, and 5, 3.3, and 3 S.D. above the normal mean in the left hemisphere respectively. Similar findings were observed in normal controls. Conclusions. 
After auditory stimulation with pure tones in the left ear of normal female volunteers, there is bilateral activation of area 39

  19. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  20. Methodological aspects in the determination of the auditory filters and critical band at low and mid-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Møller, Henrik; Pedersen, Christian Sejer

    2008-01-01

In order to evaluate loudness or audibility of complex sounds, knowledge of the auditory filter characteristics is necessary. At low frequencies, where both the threshold of hearing and dynamic range become considerably frequency dependent, care must be taken to account for this both in the psycho-acoustical model and the methodological approach. To account for variation in hearing sensitivity at low frequencies, equal loudness contours have been used to weight the stimuli accordingly. At mid and high frequencies, threshold of hearing curves have been used. These stimuli weightings can be applied before or after the experiment, normally being applied afterwards. Due to the non-linear characteristics of the cochlear amplifier, it is arguable whether post-experimental weighting is a proper approach, or whether at low frequencies there will be any difference between pre- or post-stimuli weighting. Listening......

  1. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  2. Achilles' ear? Inferior human short-term and recognition memory in the auditory modality.

    Directory of Open Access Journals (Sweden)

    James Bigelow

    Full Text Available Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects' retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1-4 s). However, at longer retention intervals (8-32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices.

  3. Achilles’ Ear? Inferior Human Short-Term and Recognition Memory in the Auditory Modality

    Science.gov (United States)

    Bigelow, James; Poremba, Amy

    2014-01-01

    Studies of the memory capabilities of nonhuman primates have consistently revealed a relative weakness for auditory compared to visual or tactile stimuli: extensive training is required to learn auditory memory tasks, and subjects are only capable of retaining acoustic information for a brief period of time. Whether a parallel deficit exists in human auditory memory remains an outstanding question. In the current study, a short-term memory paradigm was used to test human subjects’ retention of simple auditory, visual, and tactile stimuli that were carefully equated in terms of discriminability, stimulus exposure time, and temporal dynamics. Mean accuracy did not differ significantly among sensory modalities at very short retention intervals (1–4 s). However, at longer retention intervals (8–32 s), accuracy for auditory stimuli fell substantially below that observed for visual and tactile stimuli. In the interest of extending the ecological validity of these findings, a second experiment tested recognition memory for complex, naturalistic stimuli that would likely be encountered in everyday life. Subjects were able to identify all stimuli when retention was not required; however, recognition accuracy following a delay period was again inferior for auditory compared to visual and tactile stimuli. Thus, the outcomes of both experiments provide a human parallel to the pattern of results observed in nonhuman primates. The results are interpreted in light of neuropsychological data from nonhuman primates, which suggest a difference in the degree to which auditory, visual, and tactile memory are mediated by the perirhinal and entorhinal cortices. PMID:24587119

  4. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  5. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  6. Post training REMs coincident auditory stimulation enhances memory in humans.

    Science.gov (United States)

    Smith, C; Weeden, K

    1990-06-01

    Sleep activity was monitored in 20 freshman college students for two consecutive nights. Subjects were assigned to 4 equal groups, and all were asked to learn a complex logic task before bed on the second night. Two groups of subjects learned the task with a constant clicking noise in the background (cued groups), while two groups simply learned the task (non-cued). During the night, one cued and one non-cued group were presented with auditory clicks during REM sleep timed to coincide with all REMs of at least 100 microvolts. The second cued group was given auditory clicks during REM sleep, but only during the REMs "quiet" times. The second non-cued control group was never given any nighttime auditory stimulation. The cued REMs-coincident group showed a significant 23% improvement in task performance when tested one week later. The non-cued REMs-coincident group showed only an 8.8% improvement, which was not significant. The cued REMs-quiet and non-stimulated control groups showed no change in task performance when retested. The results were interpreted as support for the idea that the cued auditory stimulation induced a "recall" of the learned material during the REM sleep state in order for further memory processing to take place.

  7. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  8. MODELING SPECTRAL AND TEMPORAL MASKING IN THE HUMAN AUDITORY SYSTEM

    DEFF Research Database (Denmark)

    Dau, Torsten; Jepsen, Morten Løve; Ewert, Stephan D.

    2007-01-01

    An auditory signal processing model is presented that simulates psychoacoustical data from a large variety of experimental conditions related to spectral and temporal masking. The model is based on the modulation filterbank model by Dau et al. [J. Acoust. Soc. Am. 102, 2892-2905 (1997)] but includes … The model was tested in conditions of tone-in-noise masking, intensity discrimination, spectral masking with tones and narrowband noises, forward masking with (on- and off-frequency) noise- and pure-tone maskers, and amplitude modulation detection using different noise carrier bandwidths. One of the key properties …

  9. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds, otoacoustic emissions, and speech understanding under noisy conditions. Acoustically evoked activity, recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  10. Functional Changes in the Human Auditory Cortex in Ageing

    Science.gov (United States)

    Profant, Oliver; Tintěra, Jaroslav; Balogová, Zuzana; Ibrahim, Ibrahim; Jilek, Milan; Syka, Josef

    2015-01-01

    Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects. The elderly group with expressed presbycusis (EP) differed from the elderly group with mild presbycusis (MP) in hearing thresholds measured by pure tone audiometry, presence and amplitudes of transient otoacoustic emissions (TEOAE) and distortion-product otoacoustic emissions (DPOAE), as well as in speech understanding under noisy conditions. Acoustically evoked activity (pink noise centered around 350 Hz, 700 Hz, 1.5 kHz, 3 kHz, 8 kHz), recorded by BOLD fMRI from an area centered on Heschl's gyrus, was used to determine age-related changes at the level of the auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing. PMID:25734519

  11. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the …

  12. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
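The equivalent rectangular bandwidth (ERB) measures used above to summarize notched-noise filter estimates are conventionally computed with the Glasberg and Moore (1990) formula. A minimal sketch (function names are illustrative, not from the study):

```python
import math

def erb_hz(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter
    centred at f_hz, after Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_number(f_hz: float) -> float:
    """ERB-number ("Cam") scale value at f_hz: the number of ERBs
    below that frequency."""
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

# The auditory filter centred at 1 kHz is roughly 133 Hz wide:
print(round(erb_hz(1000.0), 1))  # 132.6
```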

  13. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047

  14. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
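Of the cues reviewed, sound level is the simplest to quantify: in the free field, level falls by about 6 dB per doubling of source distance (inverse-square law), so a level difference maps onto a relative-distance estimate. A sketch of that relationship (illustrative code, not from the review):

```python
def distance_ratio_from_level(db_drop: float) -> float:
    """Free-field inverse-square law: a level drop of db_drop dB
    implies the source is this many times farther away
    (a 6 dB drop corresponds roughly to a doubling of distance)."""
    return 10.0 ** (db_drop / 20.0)

# 20 dB quieter -> ten times farther, all else being equal:
print(distance_ratio_from_level(20.0))  # 10.0
```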

  15. The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention.

    Science.gov (United States)

    Forte, Antonio Elia; Etard, Octave; Reichenbach, Tobias

    2017-10-10

    Humans excel at selectively listening to a target speaker in background noise such as competing voices. While the encoding of speech in the auditory cortex is modulated by selective attention, it remains debated whether such modulation occurs already in subcortical auditory structures. Investigating the contribution of the human brainstem to attention has, in particular, been hindered by the tiny amplitude of the brainstem response. Its measurement normally requires a large number of repetitions of the same short sound stimuli, which may lead to a loss of attention and to neural adaptation. Here we develop a mathematical method to measure the auditory brainstem response to running speech, an acoustic stimulus that does not repeat and that has a high ecological validity. We employ this method to assess the brainstem's activity when a subject listens to one of two competing speakers, and show that the brainstem response is consistently modulated by attention.
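The abstract does not spell out the mathematical method, so purely as a generic illustration: one common way to recover a small continuous response to running speech is to cross-correlate an ongoing stimulus feature (e.g., the fundamental waveform) with the recording. All names and numbers below are hypothetical:

```python
import numpy as np

def response_by_xcorr(feature: np.ndarray, eeg: np.ndarray, max_lag: int) -> np.ndarray:
    """Cross-correlate a continuous stimulus feature with a recording,
    for lags 0..max_lag-1 samples (feature leading the recording),
    as an estimate of an evoked-response waveform."""
    feature = (feature - feature.mean()) / feature.std()
    eeg = (eeg - eeg.mean()) / eeg.std()
    n = len(feature)
    return np.array([np.dot(feature[:n - lag], eeg[lag:]) / (n - lag)
                     for lag in range(max_lag)])

# Toy check: a 'recording' that is the feature delayed by 7 samples
# produces a correlation peak at lag 7.
rng = np.random.default_rng(0)
x = rng.standard_normal(10000)
r = response_by_xcorr(x, np.roll(x, 7), 20)
print(int(np.argmax(r)))  # 7
```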

  16. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    … investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects of reduced frequency selectivity on binaural pitch perception, the ability to accurately process the temporal fine structure (TFS) of sounds at the output of the cochlear filters was found to be essential for perceiving binaural pitch. Monaural TFS processing also played a major and independent role … that the use of spectral cues remained plausible. Simulations of auditory-nerve representations of the complex tones further suggested that a spectrotemporal mechanism combining precise timing information across auditory channels might best account for the behavioral data. Overall, this work provides insights …

  17. Human pupillary dilation response to deviant auditory stimuli: Effects of stimulus properties and voluntary attention

    Directory of Open Access Journals (Sweden)

    Liao, Hsin-I

    2016-02-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and the novelty P3 response in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual and auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent component of the PDR is independent of attention.

  18. Human Pupillary Dilation Response to Deviant Auditory Stimuli: Effects of Stimulus Properties and Voluntary Attention.

    Science.gov (United States)

    Liao, Hsin-I; Yoneya, Makoto; Kidani, Shunsuke; Kashino, Makio; Furukawa, Shigeto

    2016-01-01

    A unique sound that deviates from a repetitive background sound induces signature neural responses, such as the mismatch negativity and the novelty P3 response in electroencephalography studies. Here we show that a deviant auditory stimulus induces a human pupillary dilation response (PDR) that is sensitive to the stimulus properties, irrespective of whether attention is directed to the sounds or not. In an auditory oddball sequence, we used white noise and 2000-Hz tones as oddballs against repeated 1000-Hz tones. Participants' pupillary responses were recorded while they listened to the auditory oddball sequence. In Experiment 1, they were not involved in any task. Results show that pupils dilated to the noise oddballs for approximately 4 s, but no such PDR was found for the 2000-Hz tone oddballs. In Experiment 2, two types of visual oddballs were presented synchronously with the auditory oddballs. Participants discriminated the auditory or visual oddballs while trying to ignore stimuli from the other modality. The purpose of this manipulation was to direct attention to or away from the auditory sequence. In Experiment 3, the visual and auditory oddballs were always presented asynchronously to prevent residual attention on to-be-ignored oddballs due to their concurrence with the attended oddballs. Results show that pupils dilated to both the noise and 2000-Hz tone oddballs in all conditions. Most importantly, PDRs to noise were larger than those to the 2000-Hz tone oddballs regardless of the attention condition in both experiments. The overall results suggest that the stimulus-dependent component of the PDR is independent of attention.
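Pupillometry analyses of this kind typically epoch the pupil trace around each oddball and subtract a pre-stimulus baseline before averaging. A minimal sketch under that assumption (sampling rate, window lengths, and values are invented, not taken from the study):

```python
import numpy as np

def pdr_epochs(pupil: np.ndarray, onsets, fs: float,
               pre_s: float = 0.5, post_s: float = 4.0) -> np.ndarray:
    """Baseline-corrected pupil epochs: one row per stimulus onset,
    each corrected by its mean pre-stimulus diameter."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    rows = []
    for t in onsets:
        seg = pupil[t - pre : t + post]
        rows.append(seg - seg[:pre].mean())
    return np.vstack(rows)

# Toy trace: a flat 3-mm baseline with a 0.4-mm dilation bump
# starting 0.5 s after each of three onsets.
fs = 60.0
pupil = np.full(60 * 60, 3.0)          # 60 s of pupil trace at 60 Hz
onsets = [600, 1800, 3000]
for t in onsets:
    pupil[t + 30 : t + 90] += 0.4
mean_pdr = pdr_epochs(pupil, onsets, fs).mean(axis=0)
print(round(mean_pdr.max(), 2))        # 0.4
```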

  19. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates...

  20. Level-Dependent Nonlinear Hearing Protector Model in the Auditory Hazard Assessment Algorithm for Humans

    Science.gov (United States)

    2015-04-01

    … HPD model. In an article on measuring HPD attenuation, Berger (1986) points out that Real Ear Attenuation at Threshold (REAT) tests are … men. Audiology. 1991;30:345-356. Fedele P, Binseel M, Kalb J, Price GR. Using the auditory hazard assessment algorithm for humans (AHAAH) with …

  1. Searching for the optimal stimulus eliciting auditory brainstem responses in humans

    DEFF Research Database (Denmark)

    Fobel, Oliver; Dau, Torsten

    2004-01-01

    One chirp, referred to as the O-chirp, was based on estimates of human basilar membrane (BM) group delays derived from stimulus-frequency otoacoustic emissions (SFOAE) at a sound pressure level of 40 dB [Shera and Guinan, in Recent Developments in Auditory Mechanics (2000)]. The other chirp, referred to as the A-chirp, was derived from latency …

  2. Behavioral lifetime of human auditory sensory memory predicted by physiological measures.

    Science.gov (United States)

    Lu, Z L; Williamson, S J; Kaufman, L

    1992-12-04

    Noninvasive magnetoencephalography makes it possible to identify the cortical area in the human brain whose activity reflects the decay of passive sensory storage of information about auditory stimuli (echoic memory). The lifetime for decay of the neuronal activation trace in primary auditory cortex was found to predict the psychophysically determined duration of memory for the loudness of a tone. Although memory for the loudness of a specific tone is lost, the remembered loudness decays toward the global mean of all of the loudnesses to which a subject is exposed in a series of trials.
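The reported drift of remembered loudness toward the global mean can be written as exponential relaxation with the measured trace lifetime, L(t) = m + (L0 - m)e^(-t/tau). A sketch with invented numbers (the paper's actual lifetimes are not reproduced here):

```python
import math

def remembered_loudness(l0: float, mean: float, t: float, tau: float) -> float:
    """Remembered loudness a time t after the tone: the specific trace
    decays with lifetime tau while the estimate relaxes toward the
    global mean of all presented loudnesses."""
    return mean + (l0 - mean) * math.exp(-t / tau)

# With a hypothetical 1.5-s lifetime, a 70-phon tone heard among
# 60-phon trials is remembered as ~61.4 phon after 3 s:
print(round(remembered_loudness(70.0, 60.0, 3.0, 1.5), 1))  # 61.4
```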

  3. Binaural fusion and the representation of virtual pitch in the human auditory cortex.

    Science.gov (United States)

    Pantev, C; Elbert, T; Ross, B; Eulitz, C; Terhardt, E

    1996-10-01

    The auditory system derives the pitch of complex tones from the tone's harmonics. Research in psychoacoustics predicted that binaural fusion was an important feature of pitch processing. Based on neuromagnetic human data, the first neurophysiological confirmation of binaural fusion in hearing is presented. The centre of activation within the cortical tonotopic map corresponds to the location of the perceived pitch and not to the locations that are activated when the single frequency constituents are presented. This is also true when the different harmonics of a complex tone are presented dichotically. We conclude that the pitch processor includes binaural fusion to determine the particular pitch location which is activated in the auditory cortex.

  4. Auditory-somatosensory bimodal stimulation desynchronizes brain circuitry to reduce tinnitus in guinea pigs and humans.

    Science.gov (United States)

    Marks, Kendra L; Martel, David T; Wu, Calvin; Basura, Gregory J; Roberts, Larry E; Schvartz-Leyzac, Kara C; Shore, Susan E

    2018-01-03

    The dorsal cochlear nucleus is the first site of multisensory convergence in mammalian auditory pathways. Principal output neurons, the fusiform cells, integrate auditory nerve inputs from the cochlea with somatosensory inputs from the head and neck. In previous work, we developed a guinea pig model of tinnitus induced by noise exposure and showed that the fusiform cells in these animals exhibited increased spontaneous activity and cross-unit synchrony, which are physiological correlates of tinnitus. We delivered repeated bimodal auditory-somatosensory stimulation to the dorsal cochlear nucleus of guinea pigs with tinnitus, choosing a stimulus interval known to induce long-term depression (LTD). Twenty minutes per day of LTD-inducing bimodal (but not unimodal) stimulation reduced physiological and behavioral evidence of tinnitus in the guinea pigs after 25 days. Next, we applied the same bimodal treatment to 20 human subjects with tinnitus using a double-blinded, sham-controlled, crossover study. Twenty-eight days of LTD-inducing bimodal stimulation reduced tinnitus loudness and intrusiveness. Unimodal auditory stimulation did not deliver either benefit. Bimodal auditory-somatosensory stimulation that induces LTD in the dorsal cochlear nucleus may hold promise for suppressing chronic tinnitus, which reduces quality of life for millions of tinnitus sufferers worldwide. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  5. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of …

  6. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues, particularly interaural time and level differences (ITD and ILD), that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and, critically, for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.

  7. Time-frequency analysis with temporal and spectral resolution as the human auditory system

    DEFF Research Database (Denmark)

    Agerkvist, Finn T.

    1992-01-01

    The human perception of sound is a suitable area for the application of a simultaneous time-frequency analysis, since the ear is selective in both domains. A perfect reconstruction filter bank with bandwidths approximating the critical bands is presented. The orthogonality of the filter makes it possible to examine the masking effect with realistic signals. The tree structure of the filter bank makes it difficult to obtain well-attenuated stop-bands. The use of filters of different length solves this problem …
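The perfect-reconstruction property described above can be illustrated with the simplest two-band case, a Haar analysis/synthesis pair, which a tree structure can then nest toward critical-band resolution. This is a generic sketch, not the filter bank of the paper:

```python
import numpy as np

def haar_analysis(x: np.ndarray):
    """Split x (even length) into low- and high-band signals,
    each downsampled by 2."""
    s = 1 / np.sqrt(2)
    return s * (x[0::2] + x[1::2]), s * (x[0::2] - x[1::2])

def haar_synthesis(lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Invert haar_analysis exactly: upsample and recombine."""
    s = 1 / np.sqrt(2)
    x = np.empty(2 * len(lo))
    x[0::2] = s * (lo + hi)
    x[1::2] = s * (lo - hi)
    return x

# Perfect reconstruction on an arbitrary signal:
rng = np.random.default_rng(1)
x = rng.standard_normal(1024)
lo, hi = haar_analysis(x)
print(np.allclose(haar_synthesis(lo, hi), x))  # True
```

Nesting `haar_analysis` recursively on the low band yields the tree structure mentioned in the abstract, with octave-like bands that roughly track critical bandwidth.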

  8. Towards Clinical Application of Neurotrophic Factors to the Auditory Nerve; Assessment of Safety and Efficacy by a Systematic Review of Neurotrophic Treatments in Humans

    NARCIS (Netherlands)

    Bezdjian, Aren; Kraaijenga, Véronique J C; Ramekers, Dyan; Versnel, Huib; Thomeer, Hans G X M; Klis, Sjaak F L; Grolman, Wilko

    2016-01-01

    Animal studies have evidenced protection of the auditory nerve by exogenous neurotrophic factors. In order to assess clinical applicability of neurotrophic treatment of the auditory nerve, the safety and efficacy of neurotrophic therapies in various human disorders were systematically reviewed.

  9. Encoding of frequency-modulation (FM) rates in human auditory cortex.

    Science.gov (United States)

    Okamoto, Hidehiko; Kakigi, Ryusuke

    2015-12-14

    Frequency-modulated sounds play an important role in our daily social life. However, it currently remains unclear whether frequency modulation rates affect neural activity in the human auditory cortex. In the present study, using magnetoencephalography, we investigated the auditory evoked N1m and sustained field responses elicited by temporally repeated and superimposed frequency-modulated sweeps that were matched in the spectral domain but differed in frequency modulation rate (1, 4, 16, and 64 octaves per second). The results demonstrated that higher-rate frequency-modulated sweeps elicited smaller N1m and larger sustained field responses. Frequency modulation rate thus had a significant impact on human brain responses, providing a key for disentangling series of natural frequency-modulated sounds such as speech and music.
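An exponential FM sweep at a rate of r octaves per second has instantaneous frequency f(t) = f0·2^(rt) and hence phase 2π f0 (2^(rt) - 1)/(r ln 2). A sketch of such a sweep generator (parameter values are arbitrary, not the study's stimuli):

```python
import numpy as np

def sweep_phase(f0: float, rate: float, t: np.ndarray) -> np.ndarray:
    """Phase (radians) of an exponential FM sweep: the integral of
    2*pi*f0*2**(rate*t), with rate in octaves per second."""
    return 2 * np.pi * f0 * (2.0 ** (rate * t) - 1) / (rate * np.log(2))

def fm_sweep(f0: float, rate: float, dur_s: float, fs: float) -> np.ndarray:
    """Sampled exponential FM sweep starting at f0 Hz."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(sweep_phase(f0, rate, t))

# A 1-s, 4-oct/s sweep from 250 Hz ends 4 octaves up, at 4 kHz;
# check via the numerical derivative of the phase.
fs = 48000.0
x = fm_sweep(250.0, 4.0, 1.0, fs)
t = np.arange(int(fs)) / fs
inst = np.diff(sweep_phase(250.0, 4.0, t)) * fs / (2 * np.pi)
print(round(inst[0]), round(inst[-1]))  # 250 4000
```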

  10. Extensive cochleotopic mapping of human auditory cortical fields obtained with phase-encoding FMRI.

    Directory of Open Access Journals (Sweden)

    Ella Striem-Amit

    The primary sensory cortices are characterized by a topographical mapping of basic sensory features which is considered to deteriorate in higher-order areas in favor of complex sensory features. Recently, however, retinotopic maps were also discovered in the higher-order visual, parietal and prefrontal cortices. The discovery of these maps enabled the distinction between visual regions, clarified their function and hierarchical processing. Could such extension of topographical mapping to high-order processing regions apply to the auditory modality as well? This question has been studied previously in animal models but only sporadically in humans, whose anatomical and functional organization may differ from that of animals (e.g. unique verbal functions and Heschl's gyrus curvature). Here we applied fMRI spectral analysis to investigate the cochleotopic organization of the human cerebral cortex. We found multiple mirror-symmetric novel cochleotopic maps covering most of the core and high-order human auditory cortex, including regions considered non-cochleotopic, stretching all the way to the superior temporal sulcus. These maps suggest that topographical mapping persists well beyond the auditory core and belt, and that the mirror-symmetry of topographical preferences may be a fundamental principle across sensory modalities.
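In phase-encoding designs like the one described, the stimulus sweeps the frequency axis periodically and each voxel's preferred frequency is read from the phase of its response at the sweep frequency. A minimal sketch of that spectral analysis (all numbers invented):

```python
import numpy as np

def preferred_phase(ts: np.ndarray, n_cycles: int) -> float:
    """Response phase (0..2*pi) of a voxel time series at the
    stimulation frequency of n_cycles per run.  The series is
    projected onto exp(+i*w*t), so a response cos(w*t - phi) yields
    phase phi, which maps onto a position in the swept range."""
    n = len(ts)
    w = 2 * np.pi * n_cycles * np.arange(n) / n
    c = np.dot(ts - ts.mean(), np.exp(1j * w))
    return float(np.angle(c) % (2 * np.pi))

# Toy voxel: responds as the sweep passes 'its' frequency, modeled as
# a cosine at 8 cycles/run with a known phase offset of pi/3.
n_t, cycles = 240, 8
t = np.arange(n_t)
ts = np.cos(2 * np.pi * cycles * t / n_t - np.pi / 3) + 0.01
print(round(preferred_phase(ts, cycles), 3))  # 1.047
```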

  11. Plasticity of the human auditory cortex related to musical training.

    Science.gov (United States)

    Pantev, Christo; Herholz, Sibylle C

    2011-11-01

    During the last decades, music neuroscience has become a rapidly growing field within neuroscience. Music is particularly well suited for studying neuronal plasticity in the human brain because musical training is more complex and multimodal than most other daily life activities, and because prospective and professional musicians usually pursue their training with high and long-lasting commitment. Music has therefore increasingly been used as a tool for the investigation of human cognition and its underlying brain mechanisms. Music relates to many brain functions, including perception, action, cognition, emotion, learning and memory, which makes it an ideal tool for investigating how the human brain works and how different brain functions interact. Novel findings have been obtained in the field of cortical plasticity induced by musical training. The positive effects that music in its various forms has on the healthy human brain are important not only in the framework of basic neuroscience; they will also strongly affect practices in neuro-rehabilitation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  12. Intonational speech prosody encoding in the human auditory cortex.

    Science.gov (United States)

    Tang, C; Hamilton, L S; Chang, E F

    2017-08-25

    Speakers of all human languages regularly use intonational pitch to convey linguistic meaning, such as to emphasize a particular word. Listeners extract pitch movements from speech and evaluate the shape of intonation contours independent of each speaker's pitch range. We used high-density electrocorticography to record neural population activity directly from the brain surface while participants listened to sentences that varied in intonational pitch contour, phonetic content, and speaker. Cortical activity at single electrodes over the human superior temporal gyrus selectively represented intonation contours. These electrodes were intermixed with, yet functionally distinct from, sites that encoded different information about phonetic features or speaker identity. Furthermore, the representation of intonation contours directly reflected the encoding of speaker-normalized relative pitch but not absolute pitch. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  13. Frequency-specific attentional modulation in human primary auditory cortex and midbrain.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-07-01

    Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning, yet comparable evidence in humans is scarce. Moreover, whether the spotlight operates in human midbrain is unknown. To address these issues, we studied the spectral tuning of frequency channels in human PAC and inferior colliculus (IC), using 7-T functional magnetic resonance imaging (FMRI) and frequency mapping, while participants focused on different frequency-specific sounds. We found that shifts in frequency-specific attention alter the response gain, but not tuning profile, of PAC frequency channels. The gain modulation was strongest in low-frequency channels and varied near-monotonically across the tonotopic axis, giving rise to the attentional spotlight. We observed less prominent, non-tonotopic spatial patterns of attentional modulation in IC. These results indicate that the frequency-specific attentional spotlight in human PAC as measured with FMRI arises primarily from tonotopic gain modulation, rather than adapted frequency tuning. Moreover, frequency-specific attentional modulation of afferent sound processing in human IC seems to be considerably weaker, suggesting that the spotlight diminishes toward this lower-order processing stage. Our study sheds light on how the human auditory pathway adapts to the different demands of selective hearing. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Effects of pre- and postnatal exposure to the UV-filter Octyl Methoxycinnamate (OMC) on the reproductive, auditory and neurological development of rat offspring

    International Nuclear Information System (INIS)

    Axelstad, Marta; Boberg, Julie; Hougaard, Karin Sorig; Christiansen, Sofie; Jacobsen, Pernille Rosenskjold; Mandrup, Karen Riiber; Nellemann, Christine; Lund, Soren Peter; Hass, Ulla

    2011-01-01

    Octyl Methoxycinnamate (OMC) is a frequently used UV-filter in sunscreens and other cosmetics. The aim of the present study was to address the potential endocrine disrupting properties of OMC, and to investigate how OMC induced changes in thyroid hormone levels would be related to the neurological development of treated offspring. Groups of 14-18 pregnant Wistar rats were dosed with 0, 500, 750 or 1000 mg OMC/kg bw/day during gestation and lactation. Serum thyroxine (T4), testosterone, estradiol and progesterone levels were measured in dams and offspring. Anogenital distance, nipple retention, postnatal growth and timing of sexual maturation were assessed. On postnatal day 16, gene expression in prostate and testes, and weight and histopathology of the thyroid gland, liver, adrenals, prostate, testes, epididymis and ovaries were measured. After weaning, offspring were evaluated in a battery of behavioral and neurophysiological tests, including tests of activity, startle response, cognitive and auditory function. In adult animals, reproductive organ weights and semen quality were investigated. Thyroxine (T4) levels showed a very marked decrease during the dosing period in all dosed dams, but were less severely affected in the offspring. On postnatal day 16, high dose male offspring showed reduced relative prostate and testis weights, and a dose-dependent decrease in testosterone levels. In OMC exposed female offspring, motor activity levels were decreased, while low and high dose males showed improved spatial learning abilities. The observed behavioral changes were probably not mediated solely by early T4 deficiencies, as the observed effects differed from those seen in other studies of developmental hypothyroxinemia. At eight months of age, sperm counts were reduced in all three OMC-dosed groups, and prostate weights were reduced in the highest dose group. 
Taken together, these results indicate that perinatal OMC-exposure can affect both the reproductive and

  15. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Directory of Open Access Journals (Sweden)

    Muriel Basile

    Full Text Available The last decades evidenced auditory laterality in vertebrates, offering important new insights into the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head-turn paradigm, a widely used non-invasive method, to 8-9-year-old schoolgirls and to adult female Campbell's monkeys, focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls; humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours; human girls: from the same vs a different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls, T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and a right hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and shows that more systematic species comparisons are needed before proposing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should receive careful attention.

  16. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    Science.gov (United States)

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  17. Attention effects at auditory periphery derived from human scalp potentials: displacement measure of potentials.

    Science.gov (United States)

    Ikeda, Kazunari; Hayashi, Akiko; Sekiguchi, Takahiro; Era, Shukichi

    2006-10-01

    It is known in humans that electrophysiological measures such as the auditory brainstem response (ABR) have difficulty identifying attention effects at the auditory periphery, whereas the centrifugal effect has been detected by measuring otoacoustic emissions. This research developed a measure responsive to the shift of human scalp potentials within a brief post-stimulus period (13 ms), termed the displacement percentage, and applied it in an experiment to retrieve the peripheral attention effect. In the experimental paradigm, tone pips were presented to the left ear while the other ear was masked with white noise. Twelve participants each completed two conditions, either ignoring or attending to the tone pips. Relative to the averaged scalp potentials in the ignoring condition, a shift of the potentials was found within the early component range during the attentive condition, and the displacement percentage then revealed a significant magnitude difference between the two conditions. These results suggest that, using a measure representing the potential shift itself, the peripheral effect of attention can be detected from human scalp potentials.

  18. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators. Copyright © 2014. Published by Elsevier Inc.
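
    The first of the two dimensions, output randomness, is conventionally quantified as the Shannon entropy of the stimulus sequence. As a self-contained illustration (not the authors' exact stimulus parameterization), the entropy of a sequence of discrete tokens can be computed as:

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits per symbol) of a sequence of discrete tokens."""
    counts = Counter(seq)
    n = len(seq)
    # -sum over observed symbols of p * log2(p)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    A constant sequence scores 0 bits and k equiprobable symbols score log2(k) bits; the abstract's point is that complexity behaves differently, peaking between these entropy extremes rather than growing monotonically with entropy.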

  19. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

    Buddhamas eKriengwatana

    2015-08-01

    Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults' (naïve to Dutch) and zebra finches' (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented the vowels of the two speakers interspersed or blocked, respectively. Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  20. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment.

    Science.gov (United States)

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus: tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned.

  1. Evolution of the auditory ossicles in extant hominids: metric variation in African apes and humans

    Science.gov (United States)

    Quam, Rolf M; Coleman, Mark N; Martínez, Ignacio

    2014-01-01

    The auditory ossicles in primates have proven to be a reliable source of phylogenetic information. Nevertheless, to date, very little data have been published on the metric dimensions of the ear ossicles in African apes and humans. The present study relies on the largest samples of African ape ear ossicles studied to date to address questions of taxonomic differences and the evolutionary transformation of the ossicles in gorillas, chimpanzees and humans. Both African ape taxa show a malleus that is characterized by a long and slender manubrium and relatively short corpus, whereas humans show the opposite constellation of a short and thick manubrium and relatively long corpus. These changes in the manubrium are plausibly linked with changes in the size of the tympanic membrane. The main difference between the incus in African apes and humans seems to be related to changes in the functional length. Compared with chimpanzees, human incudes are larger in nearly all dimensions, except articular facet height, and show a more open angle between the axes. The gorilla incus resembles humans more closely in its metric dimensions, including functional length, perhaps as a result of the dramatically larger body size compared with chimpanzees. The differences between the stapedes of humans and African apes are primarily size-related, with humans being larger in nearly all dimensions. Nevertheless, some distinctions between the African apes were found in the obturator foramen and head height. Although correlations between metric variables in different ossicles were generally lower than those between variables in the same bone, variables of the malleus/incus complex appear to be more strongly correlated than those of the incus/stapes complex, perhaps reflecting the different embryological and evolutionary origins of the ossicles. The middle ear lever ratio for the African apes is similar to other haplorhines, but humans show the lowest lever ratio within primates. 
Very low levels

  2. Effect of Bluetooth headset and mobile phone electromagnetic fields on the human auditory nerve.

    Science.gov (United States)

    Mandalà, Marco; Colletti, Vittorio; Sacchetto, Luca; Manganotti, Paolo; Ramat, Stefano; Marcocci, Alessandro; Colletti, Liliana

    2014-01-01

    The possibility that long-term mobile phone use increases the incidence of astrocytoma, glioma and acoustic neuroma has been investigated in several studies. Recently, our group showed that direct exposure (in a surgical setting) to cell phone electromagnetic fields (EMFs) induces deterioration of auditory evoked cochlear nerve compound action potential (CNAP) in humans. To verify whether the use of Bluetooth devices reduces these effects, we conducted the present study with the same experimental protocol. Randomized trial. Twelve patients underwent retrosigmoid vestibular neurectomy to treat definite unilateral Ménière's disease while being monitored with acoustically evoked CNAPs to assess direct mobile phone exposure or alternatively the EMF effects of Bluetooth headsets. We found no short-term effects of Bluetooth EMFs on the auditory nervous structures, whereas direct mobile phone EMF exposure confirmed a significant decrease in CNAPs amplitude and an increase in latency in all subjects. The outcomes of the present study show that, contrary to the finding that the latency and amplitude of CNAPs are very sensitive to EMFs produced by the tested mobile phone, the EMFs produced by a common Bluetooth device do not induce any significant change in cochlear nerve activity. The conditions of exposure, therefore, differ from those of everyday life, in which various biological tissues may reduce the EMF affecting the cochlear nerve. Nevertheless, these novel findings may have important safety implications. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  3. Selective attention reduces physiological noise in the external ear canals of humans. I: Auditory attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2014-01-01

    In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring, or not requiring, selective auditory attention. Appended to each stimulus presentation, and included in the calculation of each nSFOAE response, was a 30-ms silent period that was used to estimate the level of the inherent physiological noise in the ear canals of our subjects during each behavioral condition. Physiological-noise magnitudes were higher (noisier) for all subjects in the inattention task, and lower (quieter) in the selective auditory-attention tasks. These noise measures initially were made at the frequency of our nSFOAE probe tone (4.0 kHz), but the same attention effects also were observed across a wide range of frequencies. We attribute the observed differences in physiological-noise magnitudes between the inattention and attention conditions to different levels of efferent activation associated with the differing attentional demands of the behavioral tasks. One hypothesis is that when the attentional demand is relatively great, efferent activation is relatively high, and a decrease in the gain of the cochlear amplifier leads to lower-amplitude cochlear activity, and thus a smaller measure of noise from the ear. PMID:24732069

  4. What's that sound? Matches with auditory long-term memory induce gamma activity in human EEG.

    Science.gov (United States)

    Lenz, Daniel; Schadow, Jeanette; Thaerig, Stefanie; Busch, Niko A; Herrmann, Christoph S

    2007-04-01

    In recent years, the cognitive functions of human gamma-band activity (30-100 Hz) have come increasingly into scientific focus. Not only have bottom-up driven influences on 40 Hz activity been observed, but top-down processes also seem to modulate responses in this frequency band. Among the various functions that have been related to gamma activity, a pivotal role has been assigned to memory processes. Visual experiments suggested that gamma activity is involved in matching visual input to memory representations. Based on these findings, we hypothesized that such memory-related modulations of gamma activity exist in the auditory modality as well. We therefore chose environmental sounds for which subjects already had a long-term memory (LTM) representation and compared them to unknown, but physically similar, sounds. Twenty-one subjects had to classify sounds as 'recognized' or 'unrecognized' while EEG was recorded. Our data show significantly stronger activity in the induced gamma-band for recognized sounds in the time window between 300 and 500 ms after stimulus onset, with a central topography. The results suggest that induced gamma-band activity reflects matches between sounds and their representations in auditory LTM.

  5. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex.

    Science.gov (United States)

    Sussman, Elyse; Steinschneider, Mitchell

    2006-02-23

    Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.

  6. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  7. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  8. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    A great deal of the processing of incoming sounds to the auditory system occurs within the cochlea. The organ of Corti within the cochlea has differing mechanical properties along its length that broadly give rise to frequency selectivity. Its stiffness is at maximum at the base and decreases...... relation between frequency and travel time in the cochlea defines the cochlear delay. This delay is directly associated with the signal analysis occurring in the inner ear and is therefore of primary interest to get a better knowledge of this organ. It is possible to estimate the cochlear delay by direct...... and invasive techniques, but these disrupt the normal functioning of the cochlea and are usually conducted in animals. In order to obtain an estimate of the cochlear delay that is closer to the normally functioning human cochlea, the present project investigates non-invasive methods in normal hearing adults...

  9. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo ePantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus - tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.
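
    The core signal-processing idea behind TMNMT (removing energy in a band centered on the individual tinnitus frequency) can be sketched as a brute-force FFT band-stop. This is an illustrative simplification with our own function names, not the published filtering implementation:

```python
import numpy as np

def notch_music(x, fs, f_tinnitus, octaves=1.0):
    """Zero out spectral energy in a band of the given octave width
    centered geometrically on f_tinnitus."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    lo = f_tinnitus * 2.0 ** (-octaves / 2.0)   # lower notch edge
    hi = f_tinnitus * 2.0 ** (octaves / 2.0)    # upper notch edge
    X[(freqs >= lo) & (freqs <= hi)] = 0.0
    return np.fft.irfft(X, n=len(x))
```

    A practical implementation would taper the notch edges to avoid audible ringing; the hard spectral mask here only makes the concept concrete.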

  10. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  11. Frequency-Selective Attention in Auditory Scenes Recruits Frequency Representations Throughout Human Superior Temporal Cortex.

    Science.gov (United States)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2017-05-01

    A sound of interest may be tracked amid other salient sounds by focusing attention on its characteristic features including its frequency. Functional magnetic resonance imaging findings have indicated that frequency representations in human primary auditory cortex (AC) contribute to this feat. However, attentional modulations were examined at relatively low spatial and spectral resolutions, and frequency-selective contributions outside the primary AC could not be established. To address these issues, we compared blood oxygenation level-dependent (BOLD) responses in the superior temporal cortex of human listeners while they identified single frequencies versus listened selectively for various frequencies within a multifrequency scene. Using best-frequency mapping, we observed that the detailed spatial layout of attention-induced BOLD response enhancements in primary AC follows the tonotopy of stimulus-driven frequency representations-analogous to the "spotlight" of attention enhancing visuospatial representations in retinotopic visual cortex. Moreover, using an algorithm trained to discriminate stimulus-driven frequency representations, we could successfully decode the focus of frequency-selective attention from listeners' BOLD response patterns in nonprimary AC. Our results indicate that the human brain facilitates selective listening to a frequency of interest in a scene by reinforcing the fine-grained activity pattern throughout the entire superior temporal cortex that would be evoked if that frequency was present alone. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Brain stem auditory potentials evoked by clicks in the presence of high-pass filtered noise in dogs.

    Science.gov (United States)

    Poncelet, L; Deltenre, P; Coppens, A; Michaux, C; Coussart, E

    2006-04-01

This study evaluates the effects of a high-frequency hearing loss, simulated by the high-pass-noise masking method, on the characteristics of click-evoked brain stem auditory-evoked potentials (BAEP) in dogs. BAEP were obtained in response to rarefaction and condensation click stimuli from 60 dB normal hearing level (NHL, corresponding to 89 dB sound pressure level) down to the wave V threshold, in steps of 5 dB, in eleven 58- to 80-day-old Beagle puppies. Responses were added, providing an equivalent to alternate-polarity clicks, and subtracted, providing the rarefaction-condensation potential (RCDP). The procedure was repeated while constant-level, high-pass filtered (HPF) noise was superimposed on the clicks. The cut-off frequencies of the successively used filters were 8, 4, 2 and 1 kHz. For each condition, the wave V and RCDP thresholds and the slope of the wave V latency-intensity curve (LIC) were collected. The intensity range over which RCDP could not be recorded (pre-RCDP range) was calculated. Compared with the no-noise condition, the pre-RCDP range significantly diminished and the wave V threshold significantly increased when the superimposed HPF noise reached the 4 kHz area. The wave V LIC slope became significantly steeper with the 2 kHz HPF noise. In this non-invasive model of high-frequency hearing loss, impaired hearing at frequencies of 8 kHz and above escaped detection by click BAEP study in dogs. Frequencies above 13 kHz were, however, not specifically addressed in this study.

  13. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  14. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    Science.gov (United States)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
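The two-filter arrangement this record describes can be illustrated with a minimal numerical sketch: a scalar state filter tracks the system state while a concurrent parameter filter, driven by the same innovation, tracks a slowly drifting gain modeled as a random walk. The first-order plant, the noise levels, and the single scalar parameter are illustrative assumptions, far simpler than the paper's neuromuscular/equalization model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated first-order plant x[k+1] = a[k]*x[k] + b*u[k] + w[k],
# where the gain a drifts slowly over time (a stand-in for the
# time-varying human-operator parameters identified in the paper).
n = 2000
b = 0.5
a_true = 0.9 + 0.05 * np.sin(2 * np.pi * np.arange(n) / n)
u = rng.standard_normal(n)
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = a_true[k] * x[k] + b * u[k] + 0.01 * rng.standard_normal()
y = x + 0.05 * rng.standard_normal(n)  # noisy measurements

# Dual filter: a state filter for x and a concurrent parameter
# filter for a, with the parameter modeled as a random walk.
xh, Px = 0.0, 1.0              # state estimate and its variance
ah, Pa = 0.5, 1.0              # parameter estimate and its variance
Qx, Qa, R = 1e-4, 1e-6, 0.05 ** 2
a_est = np.zeros(n)
for k in range(n - 1):
    # Parameter filter prediction (random walk: variance grows by Qa).
    Pa += Qa
    Ha = xh                    # sensitivity of the prediction to a
    Ka = Pa * Ha / (Ha * Pa * Ha + R)
    # State filter prediction using the current parameter estimate.
    xp = ah * xh + b * u[k]
    Pp = ah * Px * ah + Qx
    Kx = Pp / (Pp + R)
    innov = y[k + 1] - xp      # shared innovation
    ah += Ka * innov           # parameter measurement update
    Pa *= 1.0 - Ka * Ha
    xh = xp + Kx * innov       # state measurement update
    Px = (1.0 - Kx) * Pp
    a_est[k + 1] = ah
```

The random-walk model (process noise Qa on the parameter) is what lets the filter keep tracking the drift instead of freezing after convergence, matching the tuning role the abstract assigns to the process covariance.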

  15. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

Functional magnetic resonance imaging (fMRI) is a powerful technique because of its high spatial resolution and noninvasiveness. Applications of fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system

  16. Event-related brain potential correlates of human auditory sensory memory-trace formation.

    Science.gov (United States)

    Haenschel, Corinna; Vernon, David J; Dwivedi, Prabuddh; Gruzelier, John H; Baldeweg, Torsten

    2005-11-09

    The event-related potential (ERP) component mismatch negativity (MMN) is a neural marker of human echoic memory. MMN is elicited by deviant sounds embedded in a stream of frequent standards, reflecting the deviation from an inferred memory trace of the standard stimulus. The strength of this memory trace is thought to be proportional to the number of repetitions of the standard tone, visible as the progressive enhancement of MMN with number of repetitions (MMN memory-trace effect). However, no direct ERP correlates of the formation of echoic memory traces are currently known. This study set out to investigate changes in ERPs to different numbers of repetitions of standards, delivered in a roving-stimulus paradigm in which the frequency of the standard stimulus changed randomly between stimulus trains. Normal healthy volunteers (n = 40) were engaged in two experimental conditions: during passive listening and while actively discriminating changes in tone frequency. As predicted, MMN increased with increasing number of standards. However, this MMN memory-trace effect was caused mainly by enhancement with stimulus repetition of a slow positive wave from 50 to 250 ms poststimulus in the standard ERP, which is termed here "repetition positivity" (RP). This RP was recorded from frontocentral electrodes when participants were passively listening to or actively discriminating changes in tone frequency. RP may represent a human ERP correlate of rapid and stimulus-specific adaptation, a candidate neuronal mechanism underlying sensory memory formation in the auditory cortex.

  17. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks.

    Science.gov (United States)

    Harinen, Kirsi; Rinne, Teemu

    2013-08-15

    We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Real-time classification of auditory sentences using evoked cortical activity in humans

    Science.gov (United States)

    Moses, David A.; Leonard, Matthew K.; Chang, Edward F.

    2018-06-01

Objective. Recent research has characterized the anatomical and functional basis of speech perception in the human auditory cortex. These advances have made it possible to decode speech information from activity in brain regions like the superior temporal gyrus, but no published work has demonstrated this ability in real time, which is necessary for neuroprosthetic brain-computer interfaces. Approach. Here, we introduce a real-time neural speech recognition (rtNSR) software package, which was used to classify spoken input from high-resolution electrocorticography signals in real time. We tested the system with two human subjects implanted with electrode arrays over the lateral brain surface. Subjects listened to multiple repetitions of ten sentences, and rtNSR classified what was heard in real time from neural activity patterns, using direct sentence-level and HMM-based phoneme-level classification schemes. Main results. We observed single-trial sentence classification accuracies of 90% or higher for each subject with less than 7 minutes of training data, demonstrating the ability of rtNSR to use cortical recordings to perform accurate real-time speech decoding in a limited-vocabulary setting. Significance. Further development and testing of the package with different speech paradigms could influence the design of future speech neuroprosthetic applications.
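The sentence-level classification scheme this record mentions can be illustrated, in much-simplified form, as template matching over cortical feature patterns. Everything here is an illustrative assumption (the electrode and time-frame counts, Gaussian synthetic features, and a correlation score); the actual rtNSR package additionally implements HMM-based phoneme-level decoding, which is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: 10 candidate sentences, each with a
# characteristic spatiotemporal pattern of high-gamma activity
# over 64 electrodes and 50 time frames (synthetic stand-ins
# for per-sentence mean training responses).
n_sent, n_elec, n_frames = 10, 64, 50
templates = rng.standard_normal((n_sent, n_elec, n_frames))

def classify(trial, templates):
    """Return the index of the best-matching sentence template.

    Scores each sentence by the Pearson correlation between the
    flattened trial pattern and the flattened template.
    """
    flat = trial.ravel()
    scores = [np.corrcoef(flat, t.ravel())[0, 1] for t in templates]
    return int(np.argmax(scores))

# Simulated single trial: sentence 3's pattern plus neural noise.
trial = templates[3] + 0.5 * rng.standard_normal((n_elec, n_frames))
predicted = classify(trial, templates)
```

A correlation template matcher like this runs in well under the sound's duration for such small arrays, which is the property that makes sentence-level decoding feasible in real time.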

  19. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review

    Directory of Open Access Journals (Sweden)

    Marie-Josée Castellanos

    2016-12-01

Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals, such as heavy metals, have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy-metal exposure, with and without co-exposure to noise, in humans. PubMed and Medline were accessed to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available about the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies in this review found that exposure to cadmium and to mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have examined populations exposed to lead; some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy-metal exposure is still required. Nevertheless, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  20. Human visual modeling and image deconvolution by linear filtering

    International Nuclear Information System (INIS)

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.

    1978-01-01

The problem is the numerical restoration of images degraded by passage through a known, spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter for the restoration of such images. This improvement reduces the main drawbacks of the classical Wiener filter: the voluminous data processing, and the lack of consideration of the characteristics of vision that condition the observer's perception of the restored image. In the first section, we describe the structure of the visual detection system and a method for modelling it. In the second section, we explain a restoration method by Wiener filtering that takes these visual properties into account and that can be adapted to the local properties of the image. The results obtained on TV images and on scintigrams (images obtained with a gamma camera) are then discussed. (Article in French.)
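The classical Wiener deconvolution that this record builds on can be written in a few lines. The 1-D frequency-domain sketch below uses the textbook gain H*/(|H|² + NSR) with a constant noise-to-signal ratio; the paper's refinements (local adaptation and weighting by the visual-system model) are not reproduced, and the signal, blur kernel, and NSR value are illustrative assumptions:

```python
import numpy as np

def wiener_deconvolve(degraded, h, nsr):
    """Classical frequency-domain Wiener deconvolution.

    Applies the restoration gain G = conj(H) / (|H|^2 + NSR),
    where H is the transfer function of the known blur and NSR
    is a constant noise-to-signal ratio acting as regularizer.
    """
    n = len(degraded)
    H = np.fft.fft(h, n)
    G = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(degraded) * G))

rng = np.random.default_rng(1)
n = 256
x = np.zeros(n)
x[60], x[130] = 1.0, 0.7          # two point sources (1-D "image")
h = np.ones(9) / 9.0              # known, spatially invariant blur
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n)))  # blur
y += 0.01 * rng.standard_normal(n)                          # noise
restored = wiener_deconvolve(y, h, nsr=1e-2)
```

The NSR term keeps the gain bounded near the zeros of H, where a naive inverse filter would amplify the noise without limit; that trade-off is exactly what the paper's vision-weighted variant tunes differently across the image.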

  1. Recovery function of the human brain stem auditory-evoked potential.

    Science.gov (United States)

    Kevanishvili, Z; Lagidze, Z

    1979-01-01

    Amplitude reduction and peak latency prolongation were observed in the human brain stem auditory-evoked potential (BEP) with preceding (conditioning) stimulation. At a conditioning interval (CI) of 5 ms the alteration of BEP was greater than at a CI of 10 ms. At a CI of 10 ms the amplitudes of some BEP components (e.g. waves I and II) were more decreased than those of others (e.g. wave V), while the peak latency prolongation did not show any obvious component selectivity. At a CI of 5 ms, the extent of the amplitude decrement of individual BEP components differed less, while the increase in the peak latencies of the later components was greater than that of the earlier components. The alterations of the parameters of the test BEPs at both CIs are ascribed to the desynchronization of intrinsic neural events. The differential amplitude reduction at a CI of 10 ms is explained by the different durations of neural firings determining various effects of desynchronization upon the amplitudes of individual BEP components. The decrease in the extent of the component selectivity and the preferential increase in the peak latencies of the later BEP components observed at a CI of 5 ms are explained by the intensification of the mechanism of the relative refractory period.

  2. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory-trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study investigates the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition-enhanced (RE) and repetition-suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of the neural generators involved in RS and RE point to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Frequency-specific attentional modulation in human primary auditory cortex and midbrain

    NARCIS (Netherlands)

    Riecke, Lars; Peters, Judith C; Valente, Giancarlo; Poser, Benedikt A; Kemper, Valentin G; Formisano, Elia; Sorger, Bettina

    2018-01-01

    Paying selective attention to an audio frequency selectively enhances activity within primary auditory cortex (PAC) at the tonotopic site (frequency channel) representing that frequency. Animal PAC neurons achieve this 'frequency-specific attentional spotlight' by adapting their frequency tuning,

  4. EDC IMPACT: Chemical UV filters can affect human sperm function in a progesterone-like manner

    Directory of Open Access Journals (Sweden)

    A Rehfeld

    2017-12-01

Human sperm cell function must be precisely regulated to achieve natural fertilization. Progesterone released by the cumulus cells surrounding the egg induces a Ca2+ influx into human sperm cells via the CatSper Ca2+-channel and thereby controls sperm function. Multiple chemical UV filters have been shown to induce a Ca2+ influx through CatSper, thus mimicking the effect of progesterone on Ca2+ signaling. We hypothesized that these UV filters could also mimic the effect of progesterone on sperm function. We examined 29 UV filters allowed in sunscreens in the US and/or EU for their ability to affect acrosome reaction, penetration, hyperactivation and viability in human sperm cells. We found that, similar to progesterone, the UV filters 4-MBC, 3-BC, Meradimate, Octisalate, BCSA, HMS and OD-PABA induced acrosome reaction and 3-BC increased sperm penetration into a viscous medium. The capacity of the UV filters to induce acrosome reaction and increase sperm penetration was positively associated with the ability of the UV filters to induce a Ca2+ influx. None of the UV filters induced significant changes in the proportion of hyperactivated cells. In conclusion, chemical UV filters that mimic the effect of progesterone on Ca2+ signaling in human sperm cells can similarly mimic the effect of progesterone on acrosome reaction and sperm penetration. Human exposure to these chemical UV filters may impair fertility by interfering with sperm function, e.g. through induction of premature acrosome reaction. Further studies are needed to confirm the results in vivo.

  5. EDC IMPACT: Chemical UV filters can affect human sperm function in a progesterone-like manner.

    Science.gov (United States)

    Rehfeld, A; Egeberg, D L; Almstrup, K; Petersen, J H; Dissing, S; Skakkebæk, N E

    2018-01-01

Human sperm cell function must be precisely regulated to achieve natural fertilization. Progesterone released by the cumulus cells surrounding the egg induces a Ca2+ influx into human sperm cells via the CatSper Ca2+-channel and thereby controls sperm function. Multiple chemical UV filters have been shown to induce a Ca2+ influx through CatSper, thus mimicking the effect of progesterone on Ca2+ signaling. We hypothesized that these UV filters could also mimic the effect of progesterone on sperm function. We examined 29 UV filters allowed in sunscreens in the US and/or EU for their ability to affect acrosome reaction, penetration, hyperactivation and viability in human sperm cells. We found that, similar to progesterone, the UV filters 4-MBC, 3-BC, Meradimate, Octisalate, BCSA, HMS and OD-PABA induced acrosome reaction and 3-BC increased sperm penetration into a viscous medium. The capacity of the UV filters to induce acrosome reaction and increase sperm penetration was positively associated with the ability of the UV filters to induce a Ca2+ influx. None of the UV filters induced significant changes in the proportion of hyperactivated cells. In conclusion, chemical UV filters that mimic the effect of progesterone on Ca2+ signaling in human sperm cells can similarly mimic the effect of progesterone on acrosome reaction and sperm penetration. Human exposure to these chemical UV filters may impair fertility by interfering with sperm function, e.g. through induction of premature acrosome reaction. Further studies are needed to confirm the results in vivo. © 2018 The authors.

  6. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  7. Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525

  8. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael

    2014-01-01

The aim of the current study was to characterize brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular vestibular evoked myogenic potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that air-conducted skull tap elicits both saccular and utricular VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009; Wackym et al., 2012). However, it is not clear whether the skull tap and the auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent of the semicircular canals. This is of high importance for studying vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre- and postcentral gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits a similar pattern of cortical activity to the auditory tone burst. Subjects wore a set of MR-compatible skull tappers and headphones inside the 3T GE scanner, while lying supine with eyes closed. All subjects received both forms of stimulation; however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in a vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate

  9. Acute stress alters auditory selective attention in humans independent of HPA: a study of evoked potentials.

    Directory of Open Access Journals (Sweden)

    Ludger Elling

BACKGROUND: Acute stress is a stereotypical but multimodal response to a present or imminent challenge overcharging an organism. Among the different branches of this multimodal response, the consequences of glucocorticoid secretion have been extensively investigated, mostly in connection with long-term memory (LTM). However, stress responses comprise other endocrine signaling and altered neuronal activity wholly independent of pituitary regulation. To date, knowledge of the impact of such "paracorticoidal" stress responses on higher cognitive functions is scarce. We investigated the impact of an ecological stressor on the ability to direct selective attention, using event-related potentials in humans. Based on research in rodents, we assumed that a stress-induced imbalance of catecholaminergic transmission would impair this ability. METHODOLOGY/PRINCIPAL FINDINGS: The stressor consisted of a single cold pressor test. The auditory negative difference (Nd) and the mismatch negativity (MMN) were recorded in a tonal dichotic listening task. A time series of such tasks confirmed an increased distractibility occurring 4-7 minutes after onset of the stressor, as reflected by an attenuated Nd. Salivary cortisol began to rise 8-11 minutes after onset, when no further modulations in the event-related potentials (ERPs) occurred, thus precluding a causal relationship. This effect may be attributed to a stress-induced activation of mesofrontal dopaminergic projections. It may also be attributed to an activation of noradrenergic projections. Known characteristics of the modulation of ERPs by different stress-related ligands were used for further disambiguation of causality. The conjunction of an attenuated Nd and an increased MMN might be interpreted as indicating a dopaminergic influence. The selective effect on the late portion of the Nd provides another tentative clue for this. CONCLUSIONS/SIGNIFICANCE: Prior studies have deliberately tracked the adrenocortical influence

  10. Monaural and binaural contributions to interaural-level-difference sensitivity in human auditory cortex.

    Science.gov (United States)

    Stecker, G Christopher; McLaughlin, Susan A; Higgins, Nathan C

    2015-10-15

    Whole-brain functional magnetic resonance imaging was used to measure blood-oxygenation-level-dependent (BOLD) responses in human auditory cortex (AC) to sounds with intensity varying independently in the left and right ears. Echoplanar images were acquired at 3 Tesla with sparse image acquisition once per 12-second block of sound stimulation. Combinations of binaural intensity and stimulus presentation rate were varied between blocks, and selected to allow measurement of response-intensity functions in three configurations: monaural 55-85 dB SPL, binaural 55-85 dB SPL with intensity equal in both ears, and binaural with average binaural level of 70 dB SPL and interaural level differences (ILD) ranging ±30 dB (i.e., favoring the left or right ear). Comparison of response functions equated for contralateral intensity revealed that BOLD-response magnitudes (1) generally increased with contralateral intensity, consistent with positive drive of the BOLD response by the contralateral ear, (2) were larger for contralateral monaural stimulation than for binaural stimulation, consistent with negative effects (e.g., inhibition) of ipsilateral input, which were strongest in the left hemisphere, and (3) also increased with ipsilateral intensity when contralateral input was weak, consistent with additional, positive, effects of ipsilateral stimulation. Hemispheric asymmetries in the spatial extent and overall magnitude of BOLD responses were generally consistent with previous studies demonstrating greater bilaterality of responses in the right hemisphere and stricter contralaterality in the left hemisphere. Finally, comparison of responses to fast (40/s) and slow (5/s) stimulus presentation rates revealed significant rate-dependent adaptation of the BOLD response that varied across ILD values. Copyright © 2015. Published by Elsevier Inc.

  11. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response still has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employed phasing and clipping of tones to produce auditory stimuli that differ along the multiple dimensions characterizing timbre. We investigated the auditory response and sensory gating using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficits participated in the experiments. Tones that were the same or different in timbre were presented as pairs in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the view that timbre, at least as manipulated by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the response to the second stimulus in a pair appeared in the M100 of the left hemisphere, whereas in the right hemisphere both the M50 and M100 responses to S2 reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes also differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  12. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2012-09-12

    Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional Magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS. 
However, this …

  13. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    Science.gov (United States)

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both the intrinsic functional connectivity (iFC) and the extrinsic functional connectivity (eFC) of the AC. However, the interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study investigated the temporal dynamics of the relationship between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst elicited marked potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, the iFC of the AC was positively correlated with both the eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, the iFC of the AC was positively correlated with the eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and the integration of sensory, perceptual, attentional, motor, emotional, and executive processes.

  14. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both the intrinsic functional connectivity (iFC) and the extrinsic functional connectivity (eFC) of the AC. However, the interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study investigated the temporal dynamics of the relationship between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst elicited marked potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, the iFC of the AC was positively correlated with both the eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, the iFC of the AC was positively correlated with the eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and the integration of sensory, perceptual, attentional, motor, emotional, and executive processes.

  15. Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers

    Science.gov (United States)

    1988-10-30

    …and Creelman (1977) in a study of categorical perception. Tanner's model included a short-term decaying memory for the acoustic input to the system plus … auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of …

  16. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.

    2016-01-01

    The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (the saccular-mediated cervical Vestibular Evoked Myogenic Potential, cVEMP) or the ocular muscle response (the utricle-mediated ocular VEMP, oVEMP). Some researchers have reported that air-conducted skull taps elicit both saccular- and utricle-mediated VEMPs, while being faster and less irritating for the subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent of the semicircular canals. This is of high importance for studying otolith-specific deficits, including the gait and balance problems that astronauts experience upon returning to Earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit patterns of cortical activity similar to those of auditory tone bursts and of previous vestibular imaging studies. Subjects wore bilateral MR-compatible skull tappers and headphones inside the 3T GE scanner while lying in the supine position with eyes closed. Subjects received both forms of stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in the vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner. This measurement showed that both skull tap and auditory …

  17. Motor-auditory-visual integration: The role of the human mirror neuron system in communication and communication disorders.

    Science.gov (United States)

    Le Bel, Ronald M; Pineda, Jaime A; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard, or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an integration of motor-auditory-visual information processing related to aspects of language learning, including action understanding and recognition. Such integration may also form the basis for language-related constructs such as theory of mind. In this article, we review the MNS as it relates to the cognitive development of language in typically developing children and in children at risk for communication disorders, such as children with autism spectrum disorder (ASD) or hearing impairment. Studying MNS development in these children may help illuminate an important role of the MNS in children with communication disorders. Studies with deaf children are especially important because they offer potential insights into how the MNS is reorganized when one modality, such as audition, is deprived during early cognitive development, which may have long-term consequences for language maturation and theory of mind abilities. Readers will be able to (1) understand the concept of mirror neurons, (2) identify cortical areas associated with the MNS in animal and human studies, (3) discuss the use of mu suppression in the EEG for measuring the MNS in humans, and (4) discuss MNS dysfunction in children with ASD.

  18. The effect of 50/60 Hz notch filter application on human and rat ECG recordings

    International Nuclear Information System (INIS)

    Vale-Cardoso, A S; Guimarães, H N

    2010-01-01

    Power-line interference is always present in indoor biopotential measurements, even when its extremely low magnitude makes it imperceptible. In special situations this kind of interference can be neglected, but this is not a general rule. In laboratory experiments and clinical analysis, it is hard (and expensive) to isolate the subject of measurement from the electric fields produced by a power line. In human biopotential recordings, it is common practice to apply a 50/60 Hz notch filter to reduce this kind of interference, and in such cases no considerable distortion is observed in the recorded signal. However, experiments showed that this is not true for rat ECG recordings. Several kinds of notch filters (analog and digital) were implemented to evaluate the distortion they cause in ECG signals. These filters were applied to ECGs of humans and rats, and distortion estimates were then computed from the resulting signals. The comparison of these estimates showed that, as experimentally observed, rat ECG signals are significantly distorted and deformed when a 50/60 Hz notch filter is applied to them, while human ECGs are not. The major goal of this paper is to show that the use of a notch filter for power-line interference rejection, when applied to rat ECG recordings, can severely deform the QRS complex of such signals, warning researchers against its indiscriminate use.
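    The mechanism can be demonstrated with a deliberately simple toy model (not the authors' filters or data): the rat QRS complex is much narrower than the human one, so more of its spectral energy falls inside a 60 Hz notch. The sketch below builds trains of Gaussian "QRS" spikes, applies SciPy's `iirnotch` design, and compares relative RMS distortion.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 1000.0  # sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)

def synthetic_ecg(qrs_width_s, heart_rate_hz):
    """Toy ECG: a train of Gaussian 'QRS' spikes. Narrow spikes (rat-like)
    carry more spectral energy near 60 Hz than wide ones (human-like)."""
    sig = np.zeros_like(t)
    for beat in np.arange(0.2, t[-1], 1.0 / heart_rate_hz):
        sig += np.exp(-0.5 * ((t - beat) / qrs_width_s) ** 2)
    return sig

b, a = iirnotch(w0=60.0, Q=30.0, fs=fs)  # 60 Hz notch filter

dist = {}
for label, width, rate in [("human-like", 0.040, 1.2), ("rat-like", 0.008, 6.0)]:
    x = synthetic_ecg(width, rate)
    y = filtfilt(b, a, x)  # zero-phase filtering
    dist[label] = np.sqrt(np.mean((y - x) ** 2)) / np.sqrt(np.mean(x ** 2))
    print(f"{label}: relative RMS distortion {dist[label]:.2%}")
```

    The exact numbers depend on the toy waveform parameters (QRS widths and heart rates here are rough order-of-magnitude assumptions), but the rat-like signal is distorted far more than the human-like one, consistent with the paper's warning.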

  19. Induction of plasticity in the human motor cortex by pairing an auditory stimulus with TMS

    Directory of Open Access Journals (Sweden)

    Paul Fredrick Sowman

    2014-06-01

    Full Text Available Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is achievable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.

  20. Benzodiazepine temazepam suppresses the transient auditory 40-Hz response amplitude in humans.

    Science.gov (United States)

    Jääskeläinen, I P; Hirvonen, J; Saher, M; Pekkonen, E; Sillanaukee, P; Näätänen, R; Tiitinen, H

    1999-06-18

    To discern the role of GABA(A) receptors in the generation and attentive modulation of the transient auditory 40-Hz response, the effects of the benzodiazepine temazepam (10 mg) were studied in 10 healthy social drinkers using a double-blind placebo-controlled design. Standard tones of 300 Hz and rare deviant tones of 330 Hz were presented to the left ear, and 1000 Hz standards and 1100 Hz deviants to the right ear. Subjects attended to a designated ear and were to detect deviants therein while ignoring tones to the other ear. Temazepam significantly suppressed the amplitude of the 40-Hz response, the effect being equal for attended and non-attended tone responses. This suggests involvement of GABA(A) receptors in the generation of the transient auditory 40-Hz response, but not in its attentive modulation.

  1. Non-linear laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-01-01

    Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two to...

  2. Contralateral white noise selectively changes left human auditory cortex activity in a lexical decision task.

    Science.gov (United States)

    Behne, Nicole; Wendt, Beate; Scheich, Henning; Brechmann, André

    2006-04-01

    In a previous study, we hypothesized that the approach of presenting information-bearing stimuli to one ear and noise to the other ear may be a general strategy to determine hemispheric specialization in auditory cortex (AC). In that study, we confirmed the dominant role of the right AC in directional categorization of frequency modulations by showing that fMRI activation of right but not left AC was sharply emphasized when masking noise was presented to the contralateral ear. Here, we tested this hypothesis using a lexical decision task supposed to be mainly processed in the left hemisphere. Subjects had to distinguish between pseudowords and natural words presented monaurally to the left or right ear either with or without white noise to the other ear. According to our hypothesis, we expected a strong effect of contralateral noise on fMRI activity in left AC. For the control conditions without noise, we found that activation in both auditory cortices was stronger on contralateral than on ipsilateral word stimulation consistent with a more influential contralateral than ipsilateral auditory pathway. Additional presentation of contralateral noise did not significantly change activation in right AC, whereas it led to a significant increase of activation in left AC compared with the condition without noise. This is consistent with a left hemispheric specialization for lexical decisions. Thus our results support the hypothesis that activation by ipsilateral information-bearing stimuli is upregulated mainly in the hemisphere specialized for a given task when noise is presented to the more influential contralateral ear.

  3. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  4. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS)

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  5. Chemical UV Filters Mimic the Effect of Progesterone on Ca(2+) Signaling in Human Sperm Cells

    DEFF Research Database (Denmark)

    Rehfeld, A; Dissing, S; Skakkebæk, N E

    2016-01-01

    Progesterone released by the cumulus cells surrounding the egg induces a Ca(2+) influx into human sperm cells via the cation channel of sperm (CatSper) and controls multiple Ca(2+)-dependent responses essential for fertilization. We hypothesized that chemical UV filters may mimic …

  6. Organic UV filters exposure induces the production of inflammatory cytokines in human macrophages.

    Science.gov (United States)

    Ao, Junjie; Yuan, Tao; Gao, Li; Yu, Xiaodan; Zhao, Xiaodong; Tian, Ying; Ding, Wenjin; Ma, Yuning; Shen, Zhemin

    2018-09-01

    Organic ultraviolet (UV) filters, found in many personal care products, are considered emerging contaminants due to growing concerns about potential long-term deleterious effects. We investigated the immunomodulatory effects of four commonly used organic UV filters (2-hydroxy-4-methoxybenzophenone, BP-3; 4-methylbenzylidene camphor, 4-MBC; 2-ethylhexyl 4-methoxycinnamate, EHMC; and butyl-methoxydibenzoylmethane, BDM) on human macrophages. Our results indicated that exposure to these four UV filters significantly increased the production of various inflammatory cytokines in macrophages, particularly tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6). After exposure to the UV filters, a significant 1.1-1.5-fold increase was found in TNF-α and IL-6 mRNA expression. In addition, phosphorylation in both the p38 MAPK and the NF-κB signaling pathways was enhanced 2- to 10-fold after exposure to the UV filters, suggesting that these pathways are involved in the release of TNF-α and IL-6. Molecular docking analysis predicted that all four UV filter molecules would efficiently bind transforming growth factor beta-activated kinase 1 (TAK1), which is responsible for the activation of the p38 MAPK and NF-κB pathways. Our results therefore demonstrate that exposure to the four organic UV filters investigated may alter human immune system function, providing a new clue to how environmental pollutants may contribute to the development of asthma and allergic diseases. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  8. Using Gaussian Process Annealing Particle Filter for 3D Human Tracking

    Directory of Open Access Journals (Sweden)

    Michael Rudzsky

    2008-01-01

    Full Text Available We present an approach for tracking human body parts in 3D with pre-learned motion models using multiple cameras. A Gaussian process annealing particle filter is proposed for tracking, in order to reduce the dimensionality of the problem and to increase the tracker's stability and robustness. Compared with a regular annealed particle filter-based tracker, we show that our algorithm tracks better in low-frame-rate videos. We also show that our algorithm is capable of recovering after a temporary target loss.
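    The annealing idea can be illustrated with a deliberately minimal, generic annealed particle filter in one dimension. This sketch omits the Gaussian-process motion model that is the paper's actual contribution; the likelihood, layer schedule, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def annealed_pf_step(particles, likelihood, betas, diffusion=0.2):
    """One annealed particle-filter step: each layer weights particles by
    the likelihood raised to an increasing power beta, resamples, and
    diffuses, so the cloud migrates gradually into sharp likelihood peaks."""
    for beta in betas:
        w = likelihood(particles) ** beta
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        particles = particles[idx] + rng.normal(0.0, diffusion, len(particles))
    return particles

# A sharply peaked likelihood that a single-shot resampling from a broad
# prior would struggle to find.
true_pos = 3.0
likelihood = lambda x: np.exp(-0.5 * ((x - true_pos) / 0.1) ** 2) + 1e-300

particles = rng.uniform(-10.0, 10.0, 500)  # broad prior over the state space
particles = annealed_pf_step(particles, likelihood, betas=[0.1, 0.3, 1.0])
estimate = particles.mean()
print(f"estimate: {estimate:.2f} (true position {true_pos})")
```

    The early layers (small beta) flatten the likelihood so distant particles survive resampling; later layers sharpen it until the cloud concentrates in the peak. In the paper, a learned Gaussian-process latent space plays the role of the low-dimensional state sampled here.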

  9. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2017-08-15

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape the neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during the processing of naturalistic acoustic speech, singing, and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performance of the classifiers was tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. A concurrent visual stimulus modulated activity in bilateral MTG (speech), the lateral aspect of the right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and that other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Logarithmic laws of echoic memory and auditory change detection in humans

    OpenAIRE

    Koji Inui; Tomokazu Urakawa; Koya Yamashiro; Naofumi Otsuru; Yasuyuki Takeshima; Ryusuke Kakigi

    2009-01-01

    The cortical mechanisms underlying echoic memory and change detection were investigated using an auditory change-related component (N100c) of event-related brain potentials. N100c was elicited by paired sound stimuli, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of N100c elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1 ~ 1000 ms), ...

  11. Estimation of Human Workload from the Auditory Steady-State Response Recorded via a Wearable Electroencephalography System during Walking

    Directory of Open Access Journals (Sweden)

    Yusuke Yokota

    2017-06-01

    Full Text Available Workload in the human brain can be a useful marker of internal brain state. However, due to technical limitations, previous workload studies have been unable to record brain activity via conventional electroencephalography (EEG) and magnetoencephalography (MEG) devices in mobile participants. In this study, we used a wearable EEG system to estimate workload while participants walked in a naturalistic environment. Specifically, we used the auditory steady-state response (ASSR), an oscillatory brain activity evoked by repetitive auditory stimuli, as an index of workload. Participants performed three types of N-back tasks, expected to impose different workloads, while walking at a constant speed. We used a binaural 500 Hz pure tone with amplitude modulation at 40 Hz to evoke the ASSR. We found that the phase-locking index (PLI) of ASSR activity was significantly correlated with the degree of task difficulty, even for EEG data from only a few electrodes. Thus, the ASSR appears to be an effective indicator of workload during walking in an ecologically valid environment.
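    A phase-locking index of the kind used here can be computed in a few lines: project each epoch onto a complex exponential at the ASSR frequency, normalize each resulting phasor to unit length, and take the magnitude of their mean. The sketch below uses simulated trials (the sampling rate, trial count, and noise level are illustrative assumptions, not the study's recording parameters).

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_assr = 250.0, 40.0       # sampling rate (Hz), ASSR frequency (Hz)
t = np.arange(0, 1.0, 1 / fs)  # one-second epochs

def phase_locking_index(epochs, f, t):
    """PLI at frequency f: magnitude of the mean unit phasor across epochs.
    1 = identical phase on every epoch, near 0 = random phase."""
    phasors = epochs @ np.exp(-2j * np.pi * f * t)
    return float(np.abs(np.mean(phasors / np.abs(phasors))))

# Simulated data: a phase-locked 40 Hz response buried in noise,
# versus noise alone.
locked = np.stack([np.sin(2 * np.pi * f_assr * t) + rng.normal(0, 1.0, t.size)
                   for _ in range(60)])
noise = rng.normal(0, 1.0, (60, t.size))

print(f"PLI, phase-locked trials: {phase_locking_index(locked, f_assr, t):.2f}")
print(f"PLI, noise-only trials:   {phase_locking_index(noise, f_assr, t):.2f}")
```

    Because the noise-only PLI shrinks roughly as one over the square root of the trial count, comparisons across workload conditions should use equal numbers of epochs.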

  12. A Method for Designing FIR Filters with Arbitrary Magnitude Characteristic Used for Modeling Human Audiogram

    Directory of Open Access Journals (Sweden)

    SZOPOS, E.

    2012-05-01

    Full Text Available This paper presents an iterative method for designing FIR filters that implement arbitrary magnitude characteristics defined by the user through a set of frequency-magnitude points (frequency samples). The proposed method is based on the non-uniform frequency sampling algorithm. For each iteration a new set of frequency samples is generated by processing the set used in the previous run; this implies changing the samples' locations around the previous frequency values and adjusting their magnitudes through interpolation. If necessary, additional samples can be introduced as well. After each iteration the magnitude characteristic of the resulting filter is determined using the non-uniform DFT and compared with the required one; if the errors are larger than the acceptable levels (set by the user), a new iteration is run; the length of the resulting filter and the values of its coefficients are also taken into consideration when deciding on a re-run. To demonstrate the efficiency of the proposed method, a tool for designing FIR filters that match human audiograms was implemented in LabVIEW. It was shown that the resulting filters have smaller coefficients than those of the standard design, and can also have lower order, while the errors remain relatively small.
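    The non-iterative core of this approach — designing an FIR filter from a handful of frequency-magnitude points — can be sketched with SciPy's `firwin2`, a one-shot frequency-sampling design rather than the paper's iterative non-uniform method. The audiogram values below are hypothetical, and rendering hearing loss as attenuation is an assumption for illustration.

```python
import numpy as np
from scipy.signal import firwin2, freqz

fs = 16000.0  # sampling rate (Hz); Nyquist = 8000 Hz
# Hypothetical audiogram: hearing loss (dB) at standard audiometric frequencies.
freqs_hz = [0, 250, 500, 1000, 2000, 4000, 8000]
loss_db = [0, 0, 10, 20, 40, 60, 70]
gains = [10 ** (-loss / 20.0) for loss in loss_db]  # loss rendered as attenuation

taps = firwin2(255, freqs_hz, gains, fs=fs)  # frequency-sampling FIR design

# Compare achieved vs. target magnitude at the specified points.
w, h = freqz(taps, worN=8192, fs=fs)
for f_hz, g in zip(freqs_hz[1:-1], gains[1:-1]):
    achieved = np.interp(f_hz, w, np.abs(h))
    print(f"{f_hz:5.0f} Hz: target {g:.3f}, achieved {achieved:.3f}")
```

    The paper's refinement loop would now compare the achieved response against the target, move and re-weight the frequency samples, and redesign until the error tolerance is met.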

  13. Sensitivity of human auditory cortex to rapid frequency modulation revealed by multivariate representational similarity analysis.

    Science.gov (United States)

    Joanisse, Marc F; DeSouza, Diedre D

    2014-01-01

    Functional Magnetic Resonance Imaging (fMRI) was used to investigate the extent, magnitude, and pattern of brain activity in response to rapid frequency-modulated sounds. We examined this by manipulating the direction (rise vs. fall) and the rate (fast vs. slow) of the apparent pitch of iterated rippled noise (IRN) bursts. Acoustic parameters were selected to capture features used in phoneme contrasts; however, the stimuli themselves were not perceived as speech per se. Participants were scanned as they passively listened to sounds in an event-related paradigm. Univariate analyses revealed a greater level and extent of activation in bilateral auditory cortex in response to frequency-modulated sweeps compared to steady-state sounds. This effect was stronger in the left hemisphere. However, no regions showed selectivity for either rate or direction of frequency modulation. In contrast, multivoxel pattern analysis (MVPA) revealed feature-specific encoding for direction of modulation in auditory cortex bilaterally. Moreover, this effect was strongest when analyses were restricted to anatomical regions lying outside Heschl's gyrus. We found no support for feature-specific encoding of frequency modulation rate. Differential findings of modulation rate and direction of modulation are discussed with respect to their relevance to phonetic discrimination.

  14. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
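The peak-picking sparsification can be illustrated with a minimal numpy sketch. This uses a plain FFT spectrogram (the record's "acoustic" variant, not the auditory model), and all parameters are illustrative: keep only the N largest time-frequency bins per second and zero the rest.

```python
import numpy as np

def sketch_spectrogram(x, fs, n_fft=256, hop=128, peaks_per_sec=10):
    """Sparsify a signal's spectrogram by simple peak-picking.

    Keeps only the `peaks_per_sec * duration` largest-magnitude
    time-frequency bins, loosely mimicking an 'acoustic sketch':
    a severely impoverished but structured representation.
    """
    frames = np.lib.stride_tricks.sliding_window_view(x, n_fft)[::hop]
    spec = np.abs(np.fft.rfft(frames * np.hanning(n_fft), axis=1))
    n_keep = max(1, int(peaks_per_sec * len(x) / fs))
    thresh = np.sort(spec.ravel())[-n_keep]          # n_keep-th largest bin
    sketch = np.where(spec >= thresh, spec, 0.0)
    return spec, sketch

fs = 8000
t = np.arange(fs) / fs                         # 1 s of audio
x = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.default_rng(1).normal(size=fs)
spec, sketch = sketch_spectrogram(x, fs, peaks_per_sec=10)
print(np.count_nonzero(sketch))                # only a handful of bins survive
```

For this noisy 440 Hz tone, the surviving peaks all cluster at the frequency bin of the tone, which is exactly the property that keeps sparse sketches recognizable.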

  15. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  16. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  17. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  18. Effects of first formant onset frequency on [-voice] judgments result from auditory processes not specific to humans.

    Science.gov (United States)

    Kluender, K R; Lotto, A J

    1994-02-01

    When F1-onset frequency is lower, longer F1 cut-back (VOT) is required for human listeners to perceive synthesized stop consonants as voiceless. K. R. Kluender [J. Acoust. Soc. Am. 90, 83-96 (1991)] found comparable effects of F1-onset frequency on the "labeling" of stop consonants by Japanese quail (Coturnix coturnix japonica) trained to distinguish stop consonants varying in F1 cut-back. In that study, CVs were synthesized with natural-like rising F1 transitions, and endpoint training stimuli differed in the onset frequency of F1 because a longer cut-back resulted in a higher F1 onset. In order to assess whether earlier results were due to auditory predispositions or due to animals having learned the natural covariance between F1 cut-back and F1-onset frequency, the present experiment was conducted with synthetic continua having either a relatively low (375 Hz) or high (750 Hz) constant-frequency F1. Six birds were trained to respond differentially to endpoint stimuli from three series of synthesized /CV/s varying in duration of F1 cut-back. Second and third formant transitions were appropriate for labial, alveolar, or velar stops. Despite the fact that there was no opportunity for animal subjects to use experienced covariation of F1-onset frequency and F1 cut-back, quail typically exhibited shorter labeling boundaries (more voiceless stops) for intermediate stimuli of the continua when F1 frequency was higher. Responses by human subjects listening to the same stimuli were also collected. Results lend support to the earlier conclusion that part or all of the effect of F1 onset frequency on perception of voicing may be adequately explained by general auditory processes.(ABSTRACT TRUNCATED AT 250 WORDS)

  19. The effects of aging on lifetime of auditory sensory memory in humans.

    Science.gov (United States)

    Cheng, Chia-Hsiung; Lin, Yung-Yang

    2012-02-01

    The amplitude change of cortical responses to repeated stimulation with respect to different interstimulus intervals (ISIs) is considered an index of sensory memory. To determine the effect of aging on the lifetime of auditory sensory memory, N100m responses were recorded in young, middle-aged, and elderly healthy volunteers (n=15 for each group). Trains of 5 successive tones were presented with an inter-train interval of 10 s. In separate sessions, the within-train ISIs were 0.5, 1, 2, 4, and 8 s. The amplitude ratio between N100m responses to the first and fifth stimuli (S5/S1 N100m ratio) within each ISI condition was obtained to reflect the recovery cycle profile. The recovery function time constant (τ) was smaller in the elderly (1.06±0.26 s, p < …), suggesting an age-related shortening of the lifetime of auditory sensory memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe eTerrier

    2013-09-01

    Full Text Available It has been observed that time series of gait parameters (stride length (SL), stride time (ST) and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the inverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely the indication of a more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
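The DFA half of the analysis is straightforward to sketch. Below is a minimal numpy implementation of first-order DFA (the Lyapunov/LDS estimate is not reproduced here), checked against white noise, for which the scaling exponent is known to be about 0.5; scales and lengths are illustrative.

```python
import numpy as np

def dfa_alpha(x, scales=(4, 8, 16, 32, 64)):
    """Detrended fluctuation analysis: return the scaling exponent alpha.

    alpha ~ 0.5 for white noise, ~1.0 for 1/f (persistent) fluctuations,
    < 0.5 for anti-persistent series such as cued stride times.
    """
    y = np.cumsum(x - np.mean(x))                  # integrated profile
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)           # linear detrend per segment
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    # Slope of log fluctuation vs. log scale is the DFA exponent.
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(42)
white = rng.normal(size=2048)        # uncorrelated "stride times"
print(round(dfa_alpha(white), 2))    # expect roughly 0.5
```

An anti-persistent series, as reported under auditory cueing, would yield an exponent below 0.5 with the same routine.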

  1. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  2. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  3. Noninvasive mapping of water diffusional exchange in the human brain using filter-exchange imaging.

    Science.gov (United States)

    Nilsson, Markus; Lätt, Jimmy; van Westen, Danielle; Brockstedt, Sara; Lasič, Samo; Ståhlberg, Freddy; Topgaard, Daniel

    2013-06-01

    We present the first in vivo application of the filter-exchange imaging protocol for diffusion MRI. The protocol allows noninvasive mapping of the rate of water exchange between microenvironments with different self-diffusivities, such as the intracellular and extracellular spaces in tissue. Since diffusional water exchange across the cell membrane is a fundamental process in human physiology and pathophysiology, clinically feasible and noninvasive imaging of the water exchange rate would offer new means to diagnose disease and monitor treatment response in conditions such as cancer and edema. The in vivo use of filter-exchange imaging was demonstrated by studying the brain of five healthy volunteers and one intracranial tumor (meningioma). Apparent exchange rates in white matter range from 0.8±0.08 s⁻¹ in the internal capsule, to 1.6±0.11 s⁻¹ for frontal white matter, indicating that low values are associated with high myelination. Solid tumor displayed values of up to 2.9±0.8 s⁻¹. In white matter, the apparent exchange rate values suggest intra-axonal exchange times in the order of seconds, confirming the slow exchange assumption in the analysis of diffusion MRI data. We propose that filter-exchange imaging could be used clinically to map the water exchange rate in pathologies. Filter-exchange imaging may also be valuable for evaluating novel therapies targeting the function of aquaporins. Copyright © 2012 Wiley Periodicals, Inc.

  4. Acquisition, Analyses and Interpretation of fMRI Data: A Study on the Effective Connectivity in Human Primary Auditory Cortices

    International Nuclear Information System (INIS)

    Ahmad Nazlim Yusoff; Mazlyfarina Mohamad; Khairiah Abdul Hamid

    2011-01-01

    A study on the effective connectivity characteristics in auditory cortices was conducted on five healthy Malay male subjects aged 20 to 40 years using functional magnetic resonance imaging (fMRI), statistical parametric mapping (SPM5) and dynamic causal modelling (DCM). A silent imaging paradigm was used to reduce the scanner sound artefacts on functional images. The subjects were instructed to pay attention to a white noise stimulus given binaurally at an intensity level 70 dB above normal hearing level. Functional specialisation was studied using Matlab-based SPM5 software by means of fixed effects (FFX), random effects (RFX) and conjunction analyses. Individual analyses on all subjects indicate asymmetrical bilateral activation between the left and right auditory cortices in Brodmann areas (BA) 22, 41 and 42, involving the primary and secondary auditory cortices. The three auditory areas in the right and left auditory cortices were selected for the determination of the effective connectivity by constructing 9 network models. The effective connectivity was determined on four out of five subjects, with the exception of one subject whose BA22 coordinates were located too far from the BA22 coordinates obtained from group analysis. DCM results showed the existence of effective connectivity between the three selected auditory areas in both auditory cortices. In the right auditory cortex, BA42 is identified as the input centre, with unidirectional parallel effective connectivities BA42→BA41 and BA42→BA22. However, for the left auditory cortex, the input is BA41, with unidirectional parallel effective connectivities BA41→BA42 and BA41→BA22. The connectivity between the activated auditory areas suggests the existence of a signal pathway in the auditory cortices even when the subject is listening to noise. (author)

  5. A Novel Kalman Filter for Human Motion Tracking With an Inertial-Based Dynamic Inclinometer.

    Science.gov (United States)

    Ligorio, Gabriele; Sabatini, Angelo M

    2015-08-01

    Design and development of a linear Kalman filter to create an inertial-based inclinometer targeted to dynamic conditions of motion. The estimation of the body attitude (i.e., the inclination with respect to the vertical) was treated as a source separation problem to discriminate the gravity and the body acceleration from the specific force measured by a triaxial accelerometer. The sensor fusion between triaxial gyroscope and triaxial accelerometer data was performed using a linear Kalman filter. Wrist-worn inertial measurement unit data from ten participants were acquired while performing two dynamic tasks: 60-s sequence of seven manual activities and 90 s of walking at natural speed. Stereophotogrammetric data were used as a reference. A statistical analysis was performed to assess the significance of the accuracy improvement over state-of-the-art approaches. The proposed method achieved, on an average, a root mean square attitude error of 3.6° and 1.8° in manual activities and locomotion tasks (respectively). The statistical analysis showed that, when compared to few competing methods, the proposed method improved the attitude estimation accuracy. A novel Kalman filter for inertial-based attitude estimation was presented in this study. A significant accuracy improvement was achieved over state-of-the-art approaches, due to a filter design that better matched the basic optimality assumptions of Kalman filtering. Human motion tracking is the main application field of the proposed method. Accurately discriminating the two components present in the triaxial accelerometer signal is well suited for studying both the rotational and the linear body kinematics.
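The gyro/accelerometer fusion idea can be sketched as a textbook two-state linear Kalman filter (1-D tilt angle plus gyro bias). This is a generic sketch with made-up noise parameters and synthetic data, not the authors' filter design.

```python
import numpy as np

def kalman_tilt(gyro, acc_angle, dt, q_angle=1e-5, q_bias=1e-6, r_acc=0.01):
    """Minimal linear Kalman filter for 1-D inclination.

    State x = [tilt angle, gyro bias]; the gyro rate drives the prediction,
    and the accelerometer-derived angle (valid when body acceleration is
    small) is the noisy measurement. Returns the filtered angle series.
    """
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])
    Q = np.diag([q_angle, q_bias])
    H = np.array([[1.0, 0.0]])
    out = []
    for w, z in zip(gyro, acc_angle):
        x = F @ x + np.array([dt * w, 0.0])   # integrate bias-corrected rate
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r_acc               # innovation variance
        K = (P @ H.T) / S                     # Kalman gain
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Synthetic check: sinusoidal tilt, biased noisy gyro, noisy accelerometer.
rng = np.random.default_rng(3)
dt, n = 0.01, 2000
t = np.arange(n) * dt
true_angle = 0.3 * np.sin(2 * np.pi * 0.5 * t)
gyro = np.gradient(true_angle, dt) + 0.05 + rng.normal(0, 0.02, n)  # rad/s, biased
acc = true_angle + rng.normal(0, 0.1, n)                            # noisy angle
est = kalman_tilt(gyro, acc, dt)
rmse = np.sqrt(np.mean((est[500:] - true_angle[500:]) ** 2))
print(round(float(rmse), 3))   # should beat the raw accelerometer noise floor
```

The paper's contribution is treating gravity/body-acceleration separation as a source-separation problem; this sketch only shows the standard predict/update structure such a filter builds on.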

  6. Non-linear laws of echoic memory and auditory change detection in humans.

    Science.gov (United States)

    Inui, Koji; Urakawa, Tomokazu; Yamashiro, Koya; Otsuru, Naofumi; Nishihara, Makoto; Takeshima, Yasuyuki; Keceli, Sumru; Kakigi, Ryusuke

    2010-07-03

    The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. The present findings suggest that temporal representation of echoic memory is non-linear and Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.

  7. Dissociable neural response signatures for slow amplitude and frequency modulation in human auditory cortex.

    Science.gov (United States)

    Henry, Molly J; Obleser, Jonas

    2013-01-01

    Natural auditory stimuli are characterized by slow fluctuations in amplitude and frequency. However, the degree to which the neural responses to slow amplitude modulation (AM) and frequency modulation (FM) are capable of conveying independent time-varying information, particularly with respect to speech communication, is unclear. In the current electroencephalography (EEG) study, participants listened to amplitude- and frequency-modulated narrow-band noises with a 3-Hz modulation rate, and the resulting neural responses were compared. Spectral analyses revealed similar spectral amplitude peaks for AM and FM at the stimulation frequency (3 Hz), but amplitude at the second harmonic frequency (6 Hz) was much higher for FM than for AM. Moreover, the phase delay of neural responses with respect to the full-band stimulus envelope was shorter for FM than for AM. Finally, the critical analysis involved classification of single trials as being in response to either AM or FM based on either phase or amplitude information. Time-varying phase, but not amplitude, was sufficient to accurately classify AM and FM stimuli based on single-trial neural responses. Taken together, the current results support the dissociable nature of cortical signatures of slow AM and FM. These cortical signatures potentially provide an efficient means to dissect simultaneously communicated slow temporal and spectral information in acoustic communication signals.
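The stimulus classes themselves are easy to construct. The numpy sketch below (carrier, depth, and duration values are illustrative, not the study's narrow-band noises) builds 3-Hz AM and FM signals and confirms, via an FFT-based Hilbert transform, that AM modulates the envelope while FM modulates the instantaneous frequency.

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (manual Hilbert transform, even-length x)."""
    X = np.fft.fft(x)
    h = np.zeros(len(x))
    h[0] = 1.0
    h[1:len(x) // 2] = 2.0   # keep positive frequencies, doubled
    h[len(x) // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 8000
t = np.arange(2 * fs) / fs                 # 2 s
carrier, mod_rate, depth = 1000, 3, 200    # Hz; depth = FM excursion
am = (1 + 0.8 * np.sin(2 * np.pi * mod_rate * t)) * np.sin(2 * np.pi * carrier * t)
fm = np.sin(2 * np.pi * carrier * t
            + (depth / mod_rate) * np.sin(2 * np.pi * mod_rate * t))

env_am = np.abs(analytic(am))
env_fm = np.abs(analytic(fm))
inst_f_fm = np.diff(np.unwrap(np.angle(analytic(fm)))) * fs / (2 * np.pi)
print(env_am.std() > env_fm.std())              # AM modulates the envelope...
print(inst_f_fm.max() - inst_f_fm.min() > 300)  # ...FM the instantaneous frequency
```

This envelope/instantaneous-frequency decomposition mirrors the amplitude-versus-phase distinction the study exploits in the neural responses.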

  8. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are specified by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors: if favorable, these orient development in the proper direction, towards normality; if unfavorable, they deviate it from its physiological course. Early sensory experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice, and the modifications induced by it at the cochlear level) represents the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of the communicative intention of language after birth; the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  9. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  10. GF-GC Theory of Human Cognition: Differentiation of Short-Term Auditory and Visual Memory Factors.

    Science.gov (United States)

    McGhee, Ron; Lieberman, Lewis

    1994-01-01

    The study sought to determine whether separate short-term auditory and visual memory factors would emerge given a sufficient number of markers in the factor matrix. A principal-component factor analysis with varimax rotation was performed. Short-term visual and short-term auditory memory factors emerged as expected. (RJM)

  11. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth

    NARCIS (Netherlands)

    van den Heuvel, Marion I.; Otte, Renee A.; Braeken, Marijke A. K. A.; Winkler, Istvan; Kushnerenko, Elena; Van den Bergh, Bea R. H.

    2015-01-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was

  12. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error are in use, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities, and it can be applied to any kind of operator action, including the severe accident management strategy.
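The core computation, the probability that the required time exceeds the available time, can be sketched as a Latin-hypercube Monte Carlo over two time distributions. The lognormal parameters below are invented for illustration and are not taken from the MAAP analysis.

```python
import numpy as np
from statistics import NormalDist

def lhs_lognormal(n, median, sigma_ln, rng):
    """Latin-hypercube sample of a lognormal time distribution: one
    stratified quantile per equal-probability bin, then shuffled."""
    u = (np.arange(n) + rng.uniform(0.001, 0.999, n)) / n   # one point per stratum
    rng.shuffle(u)
    nd = NormalDist(np.log(median), sigma_ln)               # underlying normal
    return np.exp([nd.inv_cdf(p) for p in u])

# Hypothetical venting-action times (minutes); values are illustrative only.
rng = np.random.default_rng(7)
n = 10000
required = lhs_lognormal(n, median=30, sigma_ln=0.4, rng=rng)
available = lhs_lognormal(n, median=60, sigma_ln=0.3, rng=rng)
hep = np.mean(required > available)   # non-response probability
print(round(float(hep), 3))
```

Because the ratio of two lognormals is lognormal, this estimate can be cross-checked analytically; the stratified LHS draws simply reduce sampling variance relative to plain Monte Carlo.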

  13. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  14. Optimizing Filter-Probe Diffusion Weighting in the Rat Spinal Cord for Human Translation

    Directory of Open Access Journals (Sweden)

    Matthew D. Budde

    2017-12-01

    Full Text Available Diffusion tensor imaging (DTI) is a promising biomarker of spinal cord injury (SCI). In the acute aftermath, DTI in SCI animal models consistently demonstrates high sensitivity and prognostic performance, yet translation of DTI to acute human SCI has been limited. In addition to technical challenges, interpretation of the resulting metrics is ambiguous, with contributions in the acute setting from both axonal injury and edema. Novel diffusion MRI acquisition strategies such as double diffusion encoding (DDE) have recently enabled detection of features not available with DTI or similar methods. In this work, we perform a systematic optimization of DDE using simulations and an in vivo rat model of SCI, and subsequently apply the protocol to the healthy human spinal cord. First, two complementary DDE approaches were evaluated, using either an orientationally invariant or a filter-probe diffusion encoding approach. While the two methods were similar in their ability to detect acute SCI, the filter-probe DDE approach had greater predictive power for functional outcomes. Next, the filter-probe DDE was compared to an analogous single diffusion encoding (SDE) approach, with the results indicating that, in the spinal cord, SDE provides similar contrast with improved signal to noise. In the SCI rat model, the filter-probe SDE scheme was coupled with a reduced field of view (rFOV) excitation, and the results demonstrate high-quality maps of the spinal cord without contamination from edema and cerebrospinal fluid, thereby providing high sensitivity to injury severity. The optimized protocol was demonstrated in the healthy human spinal cord using the commercially available diffusion MRI sequence with modifications only to the diffusion encoding directions. Maps of axial diffusivity devoid of CSF partial volume effects were obtained in a clinically feasible imaging time, with a straightforward analysis and variability comparable to axial diffusivity derived from DTI.

  15. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    Science.gov (United States)

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    amphibians, Rana catesbeiana (bullfrog, 30 animals); reptiles, Sceloporus torquatus (common small lizard, 22 animals); birds, Columba livia (common dove, 20 animals); and mammals, Cavia porcellus (guinea pig, 20 animals). All animals were housed at the Institute of Human Communication Disorders, were fed food appropriate to each species, and had water available ad libitum. For recording of brainstem auditory evoked potentials, amphibians, birds, and mammals were anesthetized by injection of ketamine at 20, 25, and 50 mg/kg, respectively; reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed on an imaginary midsagittal line between the ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a quiet room through a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential computer (Racia APE 78). Evoked responses in amphibians showed greater latencies than those of the other species; latencies were shorter in reptiles than in amphibians, and shorter still in birds. Latencies in guinea pigs were greater than in doves, but guinea pigs responded to stimulation at 10 dB, the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one advances along the phylogenetic scale. From these recordings, we can say that brainstem evoked responses became more complex, with lower absolute latency values, advancing along the phylogenetic scale, and that auditory thresholds improved accordingly among the studied species. These data indicate that the processing of auditory information is more complex in more

  16. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that the temporal representation of echoic memory is non-linear and that the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.

  17. Cortical pitch regions in humans respond primarily to resolved harmonics and are located in specific tonotopic regions of anterior auditory cortex.

    Science.gov (United States)

    Norman-Haignere, Sam; Kanwisher, Nancy; McDermott, Josh H

    2013-12-11

    Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.

  18. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  19. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  20. Generation of human auditory steady-state responses (SSRs). II: Addition of responses to individual stimuli.

    Science.gov (United States)

    Santarelli, R; Maurizi, M; Conti, G; Ottaviani, F; Paludetti, G; Pettorossi, V E

    1995-03-01

    In order to investigate the generation of the 40 Hz steady-state response (SSR), auditory potentials evoked by clicks were recorded in 16 healthy subjects under two stimulating conditions. First, repetition rates of 7.9 and 40 Hz were used to obtain individual middle latency responses (MLRs) and 40 Hz-SSRs, respectively. In the second condition, eight click trains were presented at a 40 Hz repetition rate with an inter-train interval of 126 ms. We extracted from the whole train response: (1) the response segment following the last click of the train (last click response, LCR); (2) a modified LCR (mLCR), obtained by removing from the LCR the amplitude enhancement due to the overlapping responses to the clicks preceding the last one within the stimulus train. In comparison to MLRs, the most relevant feature of the evoked activity following the last click of the train (LCRs, mLCRs) was the appearance, in the 50-110 ms latency range, of one (in 11 subjects) or two (in 2 subjects) additional positive-negative deflections having the same periodicity as the MLR waves. The grand average (GA) of the 40 Hz-SSRs was compared with three predictions synthesized by superimposing: (1) the GA of MLRs, (2) the GA of LCRs, (3) the GA of mLCRs. Both the MLR and mLCR predictions reproduced the recorded signal in amplitude, while the amplitude of the LCR prediction was almost twice that of the 40 Hz-SSR. With regard to phase, the MLR, LCR and mLCR predictions all closely matched the recorded signal. Our findings confirm the effectiveness of the linear addition mechanism in the generation of the 40 Hz-SSR. However, the responses to individual stimuli within the 40 Hz-SSR differ from MLRs because of additional periodic activity. These results suggest that phenomena related to the resonant frequency of the activated system may play a role in the mechanisms which interact to generate the 40 Hz-SSR.

  1. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    Science.gov (United States)

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex of both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had reappeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with brain magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  2. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    OpenAIRE

    Scott, Gregory D.; Karns, Christina M.; Dow, Mark W.; Stevens, Courtney; Neville, Helen J.

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants wer...

  3. Influence of age, spatial memory, and ocular fixation on localization of auditory, visual, and bimodal targets by human subjects.

    Science.gov (United States)

    Dobreva, Marina S; O'Neill, William E; Paige, Gary D

    2012-12-01

    A common complaint of the elderly is difficulty identifying and localizing auditory and visual sources, particularly in competing background noise. Spatial errors in the elderly may pose challenges and even threats to self and others during everyday activities, such as localizing sounds in a crowded room or driving in traffic. In this study, we investigated the influence of aging, spatial memory, and ocular fixation on the localization of auditory, visual, and combined auditory-visual (bimodal) targets. Head-restrained young and elderly subjects localized targets in a dark, echo-attenuated room using a manual laser pointer. Localization accuracy and precision (repeatability) were quantified for both ongoing and transient (remembered) targets at response delays up to 10 s. Because eye movements bias auditory spatial perception, localization was assessed under target fixation (eyes free, pointer guided by foveal vision) and central fixation (eyes fixed straight ahead, pointer guided by peripheral vision) conditions. Spatial localization across the frontal field in young adults demonstrated (1) horizontal overshoot and vertical undershoot for ongoing auditory targets under target fixation conditions, but near-ideal horizontal localization with central fixation; (2) accurate and precise localization of ongoing visual targets guided by foveal vision under target fixation that degraded when guided by peripheral vision during central fixation; (3) overestimation in horizontal central space (±10°) of remembered auditory, visual, and bimodal targets with increasing response delay. In comparison with young adults, elderly subjects showed (1) worse precision in most paradigms, especially when localizing with peripheral vision under central fixation; (2) greatly impaired vertical localization of auditory and bimodal targets; (3) increased horizontal overshoot in the central field for remembered visual and bimodal targets across response delays; (4) greater vulnerability to

  4. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    NARCIS (Netherlands)

    Kriengwatana, B.; Escudero, P.; Kerkhoven, A.H.; ten Cate, C.

    2015-01-01

    Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique only to humans, is still

  5. Neuromagnetic Representation of Musical Register Information in Human Auditory Cortex

    NARCIS (Netherlands)

    Andermann, M.; Van Dinther, C.H.B.A.; Patterson, R.D.; Rupp, A.

    2011-01-01

    Pulse-resonance sounds like vowels or instrumental tones contain acoustic information about the physical size of the sound source (pulse rate) and body resonators (resonance scale). Previous research has revealed correlates of these variables in humans using functional neuroimaging. Here, we report

  6. BDNF Increases Survival and Neuronal Differentiation of Human Neural Precursor Cells Cotransplanted with a Nanofiber Gel to the Auditory Nerve in a Rat Model of Neuronal Damage

    Directory of Open Access Journals (Sweden)

    Yu Jiao

    2014-01-01

    Full Text Available Objectives. To study possible nerve regeneration of a damaged auditory nerve by the use of stem cell transplantation. Methods. We transplanted HNPCs to the rat AN trunk by the internal auditory meatus (IAM). Furthermore, we studied whether addition of BDNF affects survival and phenotypic differentiation of the grafted HNPCs. A bioactive nanofiber gel (PA gel), in selected groups mixed with BDNF, was applied close to the implanted cells. Before transplantation, all rats had been deafened by a round window niche application of β-bungarotoxin. This neurotoxin causes a selective toxic destruction of the AN while keeping the hair cells intact. Results. Overall, HNPCs survived well for up to six weeks in all groups. However, transplants receiving the BDNF-containing PA gel demonstrated significantly higher numbers of HNPCs and neuronal differentiation. At six weeks, a majority of the HNPCs had migrated into the brain stem and differentiated. Differentiated human cells as well as neurites were observed in the vicinity of the cochlear nucleus. Conclusion. Our results indicate that human neural precursor cell (HNPC) integration with host tissue benefits from additional brain-derived neurotrophic factor (BDNF) treatment and that these cells appear to be good candidates for further regenerative studies on the auditory nerve (AN).

  7. Towards Clinical Application of Neurotrophic Factors to the Auditory Nerve; Assessment of Safety and Efficacy by a Systematic Review of Neurotrophic Treatments in Humans

    Directory of Open Access Journals (Sweden)

    Aren Bezdjian

    2016-11-01

    Full Text Available Animal studies have evidenced protection of the auditory nerve by exogenous neurotrophic factors. In order to assess the clinical applicability of neurotrophic treatment of the auditory nerve, the safety and efficacy of neurotrophic therapies in various human disorders were systematically reviewed. Outcomes of our literature search included disorder, neurotrophic factor, administration route, therapeutic outcome, and adverse events. From 2103 articles retrieved, 20 randomized controlled trials including 3974 patients were selected. Amyotrophic lateral sclerosis (53%) was the most frequently reported indication for neurotrophic therapy, followed by diabetic polyneuropathy (28%). Ciliary neurotrophic factor (50%), nerve growth factor (24%), and insulin-like growth factor (21%) were most often used. Injection site reaction was a frequently occurring adverse event (61%), followed by asthenia (24%) and gastrointestinal disturbances (20%). Eighteen out of 20 trials deemed neurotrophic therapy to be safe, and six out of 17 studies concluded the neurotrophic therapy to be effective. Positive outcomes were generally small or contradicted by other studies. Most non-neurodegenerative diseases treated by targeted delivery of neurotrophic factors were considered safe and effective. Hence, since local delivery to the cochlea is feasible, translation from animal studies to human trials in treating auditory nerve degeneration seems promising.

  8. Classification of underwater target echoes based on auditory perception characteristics

    Science.gov (United States)

    Li, Xiukun; Meng, Xiangxia; Liu, Hang; Liu, Mingye

    2014-06-01

    In underwater target detection, bottom reverberation shares some properties with the target echo, which greatly degrades detection performance. It is therefore essential to study the differences between target echo and reverberation. In this paper, motivated by the unique ability of human listeners to distinguish objects by sound, the Gammatone filter is taken as the auditory model. In addition, time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. The features of the experimental data cluster well within each class and differ substantially between classes, which shows that this method can effectively distinguish between target echo and reverberation.
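    The Gammatone filter used here as the auditory model is a standard computational model of cochlear frequency selectivity. As a sketch (not the authors' implementation; the Glasberg and Moore ERB formula and the 1.019 bandwidth factor are conventional textbook choices), one channel of a gammatone filterbank can be built and applied by convolution:

    ```python
    import numpy as np

    def erb(fc):
        # Equivalent rectangular bandwidth (Glasberg & Moore, 1990), in Hz
        return 24.7 * (4.37 * fc / 1000.0 + 1.0)

    def gammatone_ir(fc, fs, duration=0.05, order=4):
        """Impulse response of a gammatone filter centred at fc (Hz)."""
        t = np.arange(0, duration, 1.0 / fs)
        b = 1.019 * erb(fc)  # bandwidth parameter scaled from the ERB
        g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
        return g / np.max(np.abs(g))  # peak-normalise

    # Band-pass a noise signal through the 1 kHz channel by convolution
    fs = 16000
    ir = gammatone_ir(1000.0, fs)
    noise = np.random.default_rng(0).standard_normal(fs // 10)
    band = np.convolve(noise, ir, mode="same")
    ```

    A full filterbank simply repeats this over centre frequencies spaced on an ERB scale.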

  9. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Full Text Available Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in

  10. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active "working memory" bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  11. A Wearable-Based and Markerless Human-Manipulator Interface with Feedback Mechanism and Kalman Filters

    Directory of Open Access Journals (Sweden)

    Ping Zhang

    2015-11-01

    Full Text Available The objective of this paper is to develop a novel human-manipulator interface which incorporates wearable-based and markerless tracking to interact with the continuous movements of a human operator's hand. Unlike traditional approaches, which usually rely on contact devices or physical markers to track human-limb movements, this interface enables registration of natural movement through a wireless wearable watch and a leap motion sensor. Due to sensor error and tracking failure, the measurements are not made with sufficient accuracy. Two Kalman filters are employed to compensate for the noisy and incomplete measurements in real time. Furthermore, because of perceptual limitations and abnormal state signals, the operator is unable to achieve high precision and efficiency in robot manipulation; an adaptive multispace transformation method (AMT) is therefore introduced, which serves as a secondary treatment. In addition, in order to allow two-way human-robot interaction, the proposed method provides a vibration feedback mechanism triggered by the wearable watch to call the operator's attention to robot collision incidents or moments when the operator's hand is in a transboundary state. This improves teleoperation.
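    The abstract does not spell out its filter equations, so the following is a generic illustration of how a Kalman filter cleans up noisy position measurements: a minimal scalar constant-position filter, with all noise parameters chosen for illustration only, not taken from the paper.

    ```python
    import random

    def kalman_1d(measurements, q=1e-3, r=0.1):
        """Scalar Kalman filter with a constant-position model:
        process noise variance q, measurement noise variance r.
        Returns the filtered state estimates."""
        x, p = measurements[0], 1.0   # initial state and covariance
        out = []
        for z in measurements:
            p = p + q                 # predict: state unchanged, uncertainty grows
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)       # update with the measurement residual
            p = (1 - k) * p           # shrink covariance after the update
            out.append(x)
        return out

    # Smooth 200 noisy readings of a hand held at position 5.0
    random.seed(0)
    true_pos = 5.0
    noisy = [true_pos + random.gauss(0, 0.3) for _ in range(200)]
    est = kalman_1d(noisy)
    ```

    With a small q relative to r, the steady-state gain is low and the filter averages heavily over past samples, which is the usual trade-off between smoothness and responsiveness.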

  12. Techniques and applications for binaural sound manipulation in human-machine interfaces

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
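    As a rough illustration of what filtering with head-related transfer functions achieves, the sketch below synthesizes only the two coarsest binaural cues, interaural time and level differences; the Woodworth ITD formula and the 6 dB maximum level difference are textbook approximations, not the filters used at NASA Ames.

    ```python
    import numpy as np

    def crude_binaural(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
        """Crude binaural rendering using only interaural time and level
        differences (real HRTF processing convolves the signal with a
        measured head-related impulse response per ear)."""
        theta = abs(np.radians(azimuth_deg))
        itd = head_radius / c * (theta + np.sin(theta))  # Woodworth ITD model
        delay = int(round(itd * fs))                     # far-ear lag, in samples
        ild = 10 ** (-6.0 * np.sin(theta) / 20.0)        # crude level cut, max ~6 dB
        far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * ild
        # positive azimuth = source to the right, so the left ear is the far ear
        left, right = (far, mono) if azimuth_deg > 0 else (mono, far)
        return np.stack([left, right])

    # Render a click arriving from 90 degrees to the right
    fs = 44100
    click = np.zeros(1000)
    click[0] = 1.0
    stereo = crude_binaural(click, fs, azimuth_deg=90.0)
    ```

    For a source at 90 degrees this yields roughly a 0.66 ms lag and an attenuated level at the far (left) ear.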

  13. Human event-related brain potentials to auditory periodic noise stimuli.

    Science.gov (United States)

    Kaernbach, C; Schröger, E; Gunter, T C

    1998-02-06

    Periodic noise is perceived as different from ordinary non-repeating noise due to the involvement of echoic memory. Since this stimulus does not contain simple physical cues (such as onsets or spectral shape) that might obscure sensory memory interpretations, it is a valuable tool for studying sensory memory functions. We demonstrated for the first time that the processing of periodic noise can be tapped by event-related brain potentials (ERPs). Human subjects received repeating segments of noise embedded in non-repeating noise. They were instructed to detect the periodicity inherent in the stimulation. We observed a central negativity time-locked to the periodic segment that correlated with the subjects' behavioral performance in periodicity detection. It is argued that the ERP result indicates an enhancement of sensory-specific processing.

  14. Collecting Protein Biomarkers in Breath Using Electret Filters: A Preliminary Method on New Technical Model and Human Study.

    Directory of Open Access Journals (Sweden)

    Wang Li

    Full Text Available Biomarkers in exhaled breath are useful for diagnosing respiratory disease in human volunteers. Conventional methods for collecting non-volatile biomarkers, however, require extensive dilution and sanitation processes that lower collection efficiency and convenience of use. Electret filters have emerged in the past decade as a simple and effective means of collecting virus biomarkers in exhaled breath. To investigate the capability of electret filters to collect protein biomarkers, a model is developed consisting of an atomizer that produces protein aerosol and an electret filter that collects albumin and carcinoembryonic antigen (a typical biomarker in lung cancer development) from the atomizer. A device using an electret filter as the collecting medium is designed to collect human albumin from the exhaled breath of 6 volunteers. The collecting ability of the electret filter method is finally compared with that of two other reported methods, based on the amounts of albumin collected from human exhaled breath. In conclusion, a decreasing collection efficiency ranging from 17.6% to 2.3% for atomized albumin aerosol and from 42% to 12.5% for atomized carcinoembryonic antigen particles is found; moreover, an optimum sampling volume of human exhaled breath ranging from 100 L to 200 L is observed; finally, the self-designed collecting device shows significantly better performance in collecting albumin from human exhaled breath than the exhaled breath condensate method (p < 0.05). In summary, electret filters show potential for collecting non-volatile biomarkers in human exhaled breath, not only because they are simpler, cheaper, and easier to use than traditional methods but also because of their better collecting performance.

  15. Rotational Kinematics Model Based Adaptive Particle Filter for Robust Human Tracking in Thermal Omnidirectional Vision

    Directory of Open Access Journals (Sweden)

    Yazhe Tang

    2015-01-01

    Full Text Available This paper presents a novel surveillance system named the thermal omnidirectional vision (TOV) system, which can work in total darkness with a wide field of view. Unlike a conventional thermal vision sensor, the proposed vision system exhibits serious nonlinear distortion due to the effect of the quadratic mirror. To effectively model the inherent distortion of omnidirectional vision, an equivalent sphere projection is employed to adaptively calculate the parameterized distorted neighborhood of an object in the image plane. With the equivalent-projection-based adaptive neighborhood calculation, a distortion-invariant gradient coding feature is proposed for thermal catadioptric vision. For robust tracking, a rotational-kinematics-model-based adaptive particle filter is proposed based on the characteristics of omnidirectional vision, which can handle multiple movements effectively, including rapid motions. Finally, experiments are presented to verify the performance of the proposed algorithm for human tracking in the TOV system.
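    The paper's rotational kinematics model and distortion-invariant features are not reproduced here, but the underlying machinery, a bootstrap particle filter, can be sketched for a simple 1-D track; all noise parameters and the random-walk motion model are illustrative stand-ins.

    ```python
    import math
    import random

    def particle_filter_1d(measurements, n=500, motion_std=0.5, meas_std=1.0):
        """Minimal bootstrap particle filter for a 1-D position track:
        predict -> weight -> estimate -> resample on every measurement."""
        rng = random.Random(42)
        particles = [measurements[0] + rng.gauss(0, meas_std) for _ in range(n)]
        estimates = []
        for z in measurements:
            # predict: propagate each particle through a random-walk motion model
            particles = [p + rng.gauss(0, motion_std) for p in particles]
            # weight: Gaussian likelihood of the measurement given each particle
            weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
            total = sum(weights)
            weights = [w / total for w in weights]
            # estimate: posterior mean of the particle cloud
            estimates.append(sum(w * p for w, p in zip(weights, particles)))
            # resample: multinomial resampling proportional to weight
            particles = rng.choices(particles, weights=weights, k=n)
        return estimates

    # Track a target drifting at 0.1 units per step under unit-variance noise
    truth = [0.1 * t for t in range(100)]
    noisy = [x + random.Random(t).gauss(0, 1.0) for t, x in enumerate(truth)]
    estimates = particle_filter_1d(noisy)
    ```

    An adaptive variant like the paper's would additionally tune the motion model (e.g. rotational kinematics in the omnidirectional image plane) from the recent track.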

  16. A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU.

    Science.gov (United States)

    Zihajehzadeh, S; Loh, D; Lee, M; Hoskinson, R; Park, E J

    2014-01-01

    Orientation of human body segments is an important quantity in many biomechanical analyses. To get robust and drift-free 3-D orientation, raw data from miniature body-worn MEMS-based inertial measurement units (IMUs) should be blended in a Kalman filter. Aiming at lower computational cost, this work presents a novel cascaded two-step Kalman filter orientation estimation algorithm. Tilt angles are estimated in the first step of the proposed cascaded Kalman filter. The estimated tilt angles are passed to the second step of the filter for yaw angle calculation. The orientation results are benchmarked against those from a highly accurate tactical-grade IMU. Experimental results reveal that the proposed algorithm provides robust orientation estimation in both kinematically and magnetically disturbed conditions.
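    The first (tilt) step of such a cascade starts from the accelerometer's view of the gravity vector, which under static conditions reduces to two arctangents. This sketch shows only that measurement model, not the paper's full Kalman fusion with gyroscope rates:

    ```python
    import math

    def tilt_from_accel(ax, ay, az):
        """Roll and pitch (radians) from one static accelerometer sample
        (m/s^2). This is the measurement the first (tilt) step corrects with;
        a full filter fuses it with integrated gyroscope rates."""
        roll = math.atan2(ay, az)                    # rotation about the x axis
        pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about the y axis
        return roll, pitch

    flat = tilt_from_accel(0.0, 0.0, 9.81)     # device level: both angles zero
    rolled = tilt_from_accel(0.0, 9.81, 9.81)  # 45 degrees of roll
    ```

    The second (yaw) step cannot use gravity, which is why it needs the estimated tilt to project the magnetometer (or gyroscope) readings into the horizontal plane.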

  17. Sensory augmentation: integration of an auditory compass signal into human perception of space

    Science.gov (United States)

    Schumann, Frank; O’Regan, J. Kevin

    2017-01-01

    Bio-mimetic approaches to restoring sensory function show great promise in that they rapidly produce perceptual experience, but have the disadvantage of being invasive. In contrast, sensory substitution approaches are non-invasive, but may lead to cognitive rather than perceptual experience. Here we introduce a new non-invasive approach that leads to fast and truly perceptual experience like bio-mimetic techniques. Instead of building on existing circuits at the neural level as done in bio-mimetics, we piggy-back on sensorimotor contingencies at the stimulus level. We convey head orientation to geomagnetic North, a reliable spatial relation not normally sensed by humans, by mimicking sensorimotor contingencies of distal sounds via head-related transfer functions. We demonstrate rapid and long-lasting integration into the perception of self-rotation. Short training with amplified or reduced rotation gain in the magnetic signal can expand or compress the perceived extent of vestibular self-rotation, even with the magnetic signal absent in the test. We argue that it is the reliability of the magnetic signal that allows vestibular spatial recalibration, and the coding scheme mimicking sensorimotor contingencies of distal sounds that permits fast integration. Hence we propose that contingency-mimetic feedback has great potential for creating sensory augmentation devices that achieve fast and genuinely perceptual experiences. PMID:28195187

  18. The gap-startle paradigm to assess auditory temporal processing: Bridging animal and human research.

    Science.gov (United States)

    Fournier, Philippe; Hébert, Sylvie

    2016-05-01

    The gap-prepulse inhibition of the acoustic startle (GPIAS) paradigm is the primary test used in animal research to identify gap detection thresholds and impairment. When a silent gap is presented shortly before a loud startling stimulus, the startle reflex is inhibited, and the extent of inhibition is assumed to reflect detection. Here, we applied the same paradigm in humans. One hundred and fifty-seven normal-hearing participants were tested using one of five gap durations (5, 25, 50, 100, 200 ms) in one of two paradigms: with the gap either embedded in, or following, the continuous background noise. The duration-inhibition relationship was observable in both conditions but followed different patterns. In the gap-embedded paradigm, GPIAS increased significantly with gap duration up to 50 ms and then more slowly up to 200 ms (trend only). In contrast, in the gap-following paradigm, significant inhibition (different from zero) was observable only at gap durations from 50 to 200 ms. The finding that different patterns are found depending on gap position within the background noise is compatible with distinct mechanisms underlying each of the two paradigms. © 2016 Society for Psychophysiological Research.

  19. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the peak amplitude of the event-related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream, resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
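    The gating ratio described in this abstract is a simple peak-amplitude quotient (S2/S1). A minimal sketch of that computation in Python; the 40-80 ms search window and the synthetic traces are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np

def gating_ratio(erp_s1, erp_s2, fs, window=(0.04, 0.08)):
    """Sensory gating ratio: peak amplitude of the response to the second
    click (S2) divided by peak amplitude of the response to the first (S1).

    window is the component search window in seconds (40-80 ms here, a
    typical P50 range; an illustrative assumption)."""
    lo, hi = (int(t * fs) for t in window)
    a1 = np.max(erp_s1[lo:hi])  # S1 component peak
    a2 = np.max(erp_s2[lo:hi])  # S2 component peak
    return a2 / a1  # near 0 = strong gating, near 1 = weak gating

# Synthetic demo: the S2 response is half the amplitude of S1
fs = 1000
t = np.arange(0, 0.2, 1 / fs)
s1 = np.sin(2 * np.pi * 10 * t) * np.exp(-((t - 0.06) ** 2) / 0.0002)
s2 = 0.5 * s1
print(round(gating_ratio(s1, s2, fs), 2))  # 0.5
```

    Higher ratios, as reported for the AVH-on state, correspond to weaker suppression of the second click.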

  20. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  1. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing one to visualize Wernicke's and Broca's areas and their contralateral homologues in vivo. The completely atraumatic tomographic CBF...

  2. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  3. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS by modulating cortical excitability causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Human tracking in thermal images using adaptive particle filters with online random forest learning

    Science.gov (United States)

    Ko, Byoung Chul; Kwak, Joon-Young; Nam, Jae-Yeal

    2013-11-01

    This paper presents a fast and robust human tracking method for use with a moving long-wave infrared thermal camera under poor illumination, in the presence of shadows and cluttered backgrounds. To improve human tracking performance while minimizing computation time, this study proposes online learning of classifiers based on particle filters and the combination of a local intensity distribution (LID) with oriented center-symmetric local binary patterns (OCS-LBP). Specifically, we design a real-time random forest (RF), an ensemble of decision trees for confidence estimation, and the confidences of the RF are converted into a likelihood function of the target state. First, the target model is selected by the user and particles are sampled. Then, RFs are generated by online learning using positive and negative examples with LID and OCS-LBP features. In the next stage, the learned RF classifiers are used to detect the most likely target position in the subsequent frame. The RFs are then learned again by means of fast retraining with the tracked object and background appearance in the new frame. The proposed algorithm was successfully tested on various thermal videos, and its tracking performance is better than those of other methods.
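    As a rough illustration of the predict-update-resample loop that such particle-filter trackers build on (this is not the paper's implementation; a Gaussian "confidence" function stands in for the random-forest likelihood, and all positions are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, confidence_fn, motion_std=5.0):
    """One predict-update-resample cycle. particles is an (N, 2) array of
    candidate target positions; confidence_fn scores a position, playing
    the role of the random-forest confidence in the tracker."""
    # Predict: diffuse particles with a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: weight each particle by the classifier confidence at its position
    weights = weights * np.array([confidence_fn(p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to weight to avoid degeneracy
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights, particles.mean(axis=0)  # estimate = mean position

# Toy confidence function peaked at a "target" located at (60, 40)
conf = lambda p: np.exp(-np.sum((p - np.array([60.0, 40.0])) ** 2) / 200.0)
particles = rng.uniform(0, 100, (500, 2))
weights = np.full(500, 1 / 500)
for _ in range(10):
    particles, weights, est = particle_filter_step(particles, weights, conf)
print(np.round(est))  # converges near [60, 40]
```

    In the paper's setting the confidence function is retrained online each frame, which is what lets the tracker adapt to appearance changes.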

  5. Persistence of human immunodeficiency virus type 1 subtype B DNA in dried-blood samples on FTA filter paper.

    Science.gov (United States)

    Li, Chung-Chen; Beck, Ingrid A; Seidel, Kristy D; Frenkel, Lisa M

    2004-08-01

    The stability of human immunodeficiency virus type 1 (HIV-1) DNA in whole blood collected on filter paper (FTA Card) was evaluated. After >4 years of storage at room temperature in the dark, our qualitative assay detected virus at a rate similar to that of our initial test (58 of 60, 97%; P = 0.16), suggesting long-term HIV-1 DNA stability.

  6. Short wavelength light filtering by the natural human lens and IOLs -- implications for entrainment of circadian rhythm

    DEFF Research Database (Denmark)

    Brøndsted, Adam Elias; Lundeman, Jesper Holm; Kessel, Line

    2013-01-01

    Photoentrainment of circadian rhythm begins with the stimulation of melanopsin containing retinal ganglion cells that respond directly to blue light. With age, the human lens becomes a strong colour filter attenuating transmission of short wavelengths. The purpose of the study was to examine the ...

  7. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy.

    Science.gov (United States)

    Thaut, Michael H

    2015-01-01

    The discovery of rhythmic auditory-motor entrainment in clinical populations was a historical breakthrough in demonstrating for the first time a neurological mechanism linking music to retraining brain and behavioral functions. Early pilot studies from this research center were followed up by a systematic line of research studying rhythmic auditory stimulation on motor therapies for stroke, Parkinson's disease, traumatic brain injury, cerebral palsy, and other movement disorders. The comprehensive effects on improving multiple aspects of motor control established the first neuroscience-based clinical method in music, which became the bedrock for the later development of neurologic music therapy. The discovery of entrainment fundamentally shifted and extended the view of the therapeutic properties of music from a psychosocially dominated view to a view using the structural elements of music to retrain motor control, speech and language function, and cognitive functions such as attention and memory. © 2015 Elsevier B.V. All rights reserved.

  8. Nicotine, auditory sensory memory and attention in a human ketamine model of schizophrenia: moderating influence of a hallucinatory trait

    Directory of Open Access Journals (Sweden)

    Verner eKnott

    2012-09-01

    Full Text Available Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-D-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential (ERP) mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed (reaction time) and accuracy (increased % hits and d′) and reduced false alarms on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as reaction time, were poorer in H-HD (vs. L-HD), and while hit rate and d′ were increased by nicotine in H-HD, reaction time was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairments and improved attention, particularly in individuals prone to HD.

  9. Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory.

    Science.gov (United States)

    Buchsbaum, Bradley R; Olsen, Rosanna K; Koch, Paul; Berman, Karen Faith

    2005-11-23

    To hear a sequence of words and repeat them requires sensory-motor processing and something more: temporary storage. We investigated neural mechanisms of verbal memory by using fMRI and a task designed to tease apart perceptually based ("echoic") memory from phonological-articulatory memory. Sets of two- or three-word pairs were presented bimodally, followed by a cue indicating from which modality (auditory or visual) items were to be retrieved and rehearsed over a delay. Whereas delay-period activation in the planum temporale (PT) was insensitive to the source modality and showed sustained delay-period activity, the superior temporal gyrus (STG) activated more vigorously when the retrieved items had arrived in the auditory modality and showed transient delay-period activity. Functional connectivity analysis revealed two topographically distinct fronto-temporal circuits, with the STG co-activating more strongly with ventrolateral prefrontal cortex and the PT co-activating more strongly with dorsolateral prefrontal cortex. These findings argue for separate contributions of the ventral and dorsal auditory streams to verbal working memory.

  10. Differences between human auditory event-related potentials (AERPs) measured at 2 and 4 months after birth.

    Science.gov (United States)

    van den Heuvel, Marion I; Otte, Renée A; Braeken, Marijke A K A; Winkler, István; Kushnerenko, Elena; Van den Bergh, Bea R H

    2015-07-01

    Infant auditory event-related potentials (AERPs) show a series of marked changes during the first year of life. These AERP changes indicate important advances in early development. The current study examined AERP differences between 2- and 4-month-old infants. An auditory oddball paradigm was delivered to infants with a frequent repetitive tone and three rare auditory events. The three rare events were a shorter-than-regular inter-stimulus interval (ISI-deviant), white-noise segments, and environmental sounds. The results suggest that the N250 infantile AERP component emerges during this period in response to white noise but not to environmental sounds, possibly indicating a developmental step towards separating acoustic deviance from contextual novelty. The scalp distribution of the AERP response to both the white noise and the environmental sounds shifted towards frontal areas and AERP peak latencies were overall lower in infants at 4 than at 2 months of age. These observations indicate improvements in the speed of sound processing and maturation of the frontal attentional network in infants during this period. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Age-Associated Reduction of Asymmetry in Human Central Auditory Function: A 1H-Magnetic Resonance Spectroscopy Study

    Directory of Open Access Journals (Sweden)

    Xianming Chen

    2013-01-01

    Full Text Available The aim of this study was to investigate the effects of age on hemispheric asymmetry in the auditory cortex after pure tone stimulation. Ten young and 8 older healthy volunteers took part in this study. Two-dimensional multivoxel 1H-magnetic resonance spectroscopy scans were performed before and after stimulation. The ratios of N-acetylaspartate (NAA), glutamate/glutamine (Glx), and γ-amino butyric acid (GABA) to creatine (Cr) were determined and compared between the two groups. The distribution of metabolites between the left and right auditory cortex was also determined. Before stimulation, left- and right-side NAA/Cr and right-side GABA/Cr were significantly lower, whereas right-side Glx/Cr was significantly higher in the older group compared with the young group. After stimulation, left- and right-side NAA/Cr and GABA/Cr were significantly lower, whereas left-side Glx/Cr was significantly higher in the older group compared with the young group. There was obvious asymmetry in right-side Glx/Cr and left-side GABA/Cr after stimulation in the young group, but not in the older group. In summary, there is marked hemispheric asymmetry in auditory cortical metabolites following pure tone stimulation in young, but not older, adults. This reduced asymmetry in older adults may at least in part underlie the speech perception difficulties/presbycusis experienced by aging adults.

  12. Human-Avatar Symbiosis for the Treatment of Auditory Verbal Hallucinations in Schizophrenia through Virtual/Augmented Reality and Brain-Computer Interfaces.

    Science.gov (United States)

    Fernández-Caballero, Antonio; Navarro, Elena; Fernández-Sotos, Patricia; González, Pascual; Ricarte, Jorge J; Latorre, José M; Rodriguez-Jimenez, Roberto

    2017-01-01

    This perspective paper addresses the future of alternative treatments that take advantage of a social and cognitive approach, as opposed to pharmacological therapy, for auditory verbal hallucinations (AVH) in patients with schizophrenia. AVH are the perception of voices in the absence of auditory stimulation and represent a severe mental health symptom. Virtual/augmented reality (VR/AR) and brain-computer interfaces (BCI) are technologies increasingly used in medical and psychological applications. Our position is that their combined use in computer-based therapies offers still unforeseen possibilities for the treatment of physical and mental disabilities. For this reason, the paper anticipates that researchers and clinicians will pursue a pathway toward human-avatar symbiosis for AVH by taking full advantage of new technologies. This outlook entails addressing challenging issues in the understanding of non-pharmacological treatment of schizophrenia-related disorders and in the exploitation of VR/AR and BCI to achieve a real human-avatar symbiosis.

  13. The impact of air pollution from used ventilation filters on human comfort and health

    DEFF Research Database (Denmark)

    Clausen, Geo; Alm, O.; Fanger, Povl Ole

    2002-01-01

    The comfort and health of 30 women were studied during 4 hours' exposure in an experimental room with either a used or a new filter present in the ventilation system. All other environmental parameters were kept constant. The presence of the used filter in the ventilation system had a significant ...

  14. Encapsulation of the UV filters ethylhexyl methoxycinnamate and butyl methoxydibenzoylmethane in lipid microparticles: effect on in vivo human skin permeation.

    Science.gov (United States)

    Scalia, S; Mezzena, M; Ramaccini, D

    2011-01-01

    Lipid microparticles loaded with the UVB filter ethylhexyl methoxycinnamate (EHMC) and the UVA filter butyl methoxydibenzoylmethane (BMDBM) were evaluated for their effect on the sunscreen agents' percutaneous penetration. Microparticles loaded with EHMC or BMDBM were prepared by the melt emulsification technique using stearic acid or glyceryl behenate as lipidic material, respectively, and hydrogenated phosphatidylcholine as the surfactant. Nonencapsulated BMDBM and EHMC in conjunction with blank microparticles or equivalent amounts of the 2 UV filters loaded in the lipid microparticles were introduced into oil-in-water emulsions and applied to human volunteers. Skin penetration was investigated in vivo by the tape-stripping technique. For the cream with the nonencapsulated sunscreen agents, the percentages of the applied dose diffused into the stratum corneum were 32.4 ± 4.1% and 30.3 ± 3.3% for EHMC and BMDBM, respectively. A statistically significant reduction in the in vivo skin penetration to 25.3 ± 5.5% for EHMC and 22.7 ± 5.4% for BMDBM was achieved by the cream containing the microencapsulated UV filters. The inhibiting effect on permeation attained by the lipid microparticles was more marked (45-56.3% reduction) in the deeper stratum corneum layers. The reduced percutaneous penetration of BMDBM and EHMC achieved by the lipid microparticles should preserve the UV filter efficacy and limit potential toxicological risks. Copyright © 2011 S. Karger AG, Basel.

  15. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    Science.gov (United States)

    Olivares, A.; Górriz, J. M.; Ramírez, J.; Olivares, G.

    2011-02-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.
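    The abstract compares LMS, RLS, and Kalman variants; none of those is reproduced here, but a minimal complementary filter illustrates the underlying fusion idea: the gyro is accurate short-term but drifts with dynamic bias, while the accelerometer tilt angle is noisy but drift-free. All signal parameters below are invented for the demo:

```python
import numpy as np

def fuse_angle(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Complementary filter: blend the integrated gyro rate (trusted at
    high frequency) with the accelerometer angle (trusted at low
    frequency) to suppress bias-induced drift."""
    angle = accel_angle[0]
    out = np.empty(len(gyro_rate))
    for i, (w, a) in enumerate(zip(gyro_rate, accel_angle)):
        angle = alpha * (angle + w * dt) + (1 - alpha) * a
        out[i] = angle
    return out

# Simulated limb swinging +/-30 deg at 0.5 Hz for 20 s
rng = np.random.default_rng(1)
dt, n = 0.01, 2000
t = np.arange(n) * dt
true_angle = 30 * np.sin(2 * np.pi * 0.5 * t)
true_rate = 30 * 2 * np.pi * 0.5 * np.cos(2 * np.pi * 0.5 * t)
gyro = true_rate + 2.0 + rng.normal(0, 0.5, n)   # 2 deg/s dynamic bias
accel = true_angle + rng.normal(0, 3.0, n)       # noisy tilt estimate
fused = fuse_angle(gyro, accel, dt)
drift_only = accel[0] + np.cumsum(gyro) * dt     # naive gyro integration
print(np.abs(fused - true_angle).mean() < np.abs(drift_only - true_angle).mean())  # True
```

    The Kalman, LMS, and RLS approaches the paper evaluates adapt this blend automatically instead of fixing it with a constant alpha.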

  16. Accurate human limb angle measurement: sensor fusion through Kalman, least mean squares and recursive least-squares adaptive filtering

    International Nuclear Information System (INIS)

    Olivares, A; Olivares, G; Górriz, J M; Ramírez, J

    2011-01-01

    Inertial sensors are widely used in human body motion monitoring systems since they permit us to determine the position of the subject's limbs. Limb angle measurement is carried out through the integration of the angular velocity measured by a rate sensor and the decomposition of the components of static gravity acceleration measured by an accelerometer. Different factors derived from the sensors' nature, such as the angle random walk and dynamic bias, lead to erroneous measurements. Dynamic bias effects can be reduced through the use of adaptive filtering based on sensor fusion concepts. Most existing published works use a Kalman filtering sensor fusion approach. Our aim is to perform a comparative study among different adaptive filters. Several least mean squares (LMS), recursive least squares (RLS) and Kalman filtering variations are tested for the purpose of finding the best method leading to a more accurate and robust limb angle measurement. A new angle wander compensation sensor fusion approach based on LMS and RLS filters has been developed.

  17. Towards label-free evaluation of oxidative stress in human skin exposed to sun filters (Conference Presentation)

    Science.gov (United States)

    Osseiran, Sam; Wang, Hequn; Suita, Yusuke; Roider, Elisabeth; Fisher, David E.; Evans, Conor L.

    2016-02-01

    Skin cancer, including basal cell carcinoma, squamous cell carcinoma, and melanoma, is the most common form of cancer in North America. Paradoxically, skin cancer incidence is steadily on the rise despite the growing use of sunscreens over the past decades. One potential explanation for this discrepancy involves the sun filters in sunscreen, which are responsible for blocking harmful ultraviolet radiation. It is proposed that these agents may produce reactive oxygen species (ROS) at the site of application, thereby generating oxidative stress in skin that gives rise to genetic mutations, which may explain the rising incidence of skin cancer. To test this hypothesis, ex vivo human skin was treated with five common chemical sun filters (avobenzone, octocrylene, homosalate, octisalate, and oxybenzone) as well as two physical sun filters (zinc oxide compounds), both with and without UV irradiation. To non-invasively evaluate oxidative stress, two-photon excitation fluorescence (2PEF) and fluorescence lifetime imaging microscopy (FLIM) of the skin samples were used to monitor levels of NADH and FAD, two key cofactors in cellular redox metabolism. The relative redox state of the skin was assessed based on the fluorescence intensities and lifetimes of these endogenous cofactors. While the sun filters were indeed shown to have a protective effect from UV radiation, it was observed that they also generate oxidative stress in skin, even in the absence of UV light. These results suggest that sun-filter-induced ROS production requires more careful study, especially in how these reactive species impact the rise of skin cancer.

  18. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention.

    Science.gov (United States)

    Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan

    2018-02-27

    The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power in the attend condition compared to the ignore condition. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by the distracting effect of a third frequency at 3 Hz (the beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low-frequency modulations at syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
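    At its core, the ASSR power analysis reported above reads spectral power at the modulation frequencies from the EEG spectrum. A hedged sketch with a synthetic signal (all parameters invented; a real analysis would average over epochs and electrodes):

```python
import numpy as np

def assr_power(eeg, fs, freq):
    """Power at a steady-state modulation frequency, read from the FFT bin
    closest to freq (a Hann window reduces spectral leakage)."""
    spec = np.fft.rfft(eeg * np.hanning(len(eeg)))
    freqs = np.fft.rfftfreq(len(eeg), 1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    return np.abs(spec[k]) ** 2

# Synthetic "EEG": a strong 7 Hz ASSR plus a weaker 4 Hz ASSR
fs, dur = 250, 8
t = np.arange(fs * dur) / fs
eeg = 2.0 * np.sin(2 * np.pi * 7 * t) + 0.5 * np.sin(2 * np.pi * 4 * t)
print(assr_power(eeg, fs, 7) > assr_power(eeg, fs, 4))  # True
```

    An 8 s window gives 0.125 Hz frequency resolution, so the 4 and 7 Hz modulation rates fall on exact FFT bins.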

  19. A Time-Frequency Auditory Model Using Wavelet Packets

    DEFF Research Database (Denmark)

    Agerkvist, Finn

    1996-01-01

    A time-frequency auditory model is presented. The model uses the wavelet packet analysis as the preprocessor. The auditory filters are modelled by the rounded exponential filters, and the excitation is smoothed by a window function. By comparing time-frequency excitation patterns it is shown that the change in the time-frequency excitation pattern introduced when a test tone at masked threshold is added to the masker is approximately equal to 7 dB for all types of maskers. The classic detection ratio therefore overrates the detection efficiency of the auditory system.
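    The rounded-exponential (roex) filter mentioned in this abstract has a standard one-parameter form, W(g) = (1 + pg)e^(-pg), whose equivalent rectangular bandwidth (ERB) works out to 4fc/p. A short numerical check of that identity (the center frequency and slope value are illustrative, not the paper's):

```python
import numpy as np

def roex(f, fc, p):
    """Rounded-exponential roex(p) filter weight W(g) = (1 + p*g) * exp(-p*g),
    where g = |f - fc| / fc is the normalized deviation from the center."""
    g = np.abs(f - fc) / fc
    return (1 + p * g) * np.exp(-p * g)

fc = 1000.0                      # center frequency in Hz (illustrative)
p = 25.0                         # slope parameter (illustrative)
f = np.linspace(500.0, 1500.0, 1001)
w = roex(f, fc, p)
erb = np.sum(w) * (f[1] - f[0])  # numerical ERB: area under W (peak = 1)
print(round(erb), round(4 * fc / p))  # both 160 (Hz)
```

    Larger p gives steeper skirts and hence a narrower ERB, which is how masking data constrain the filter shape.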

  20. Language experience shapes processing of pitch-relevant information in the human brainstem and auditory cortex: electrophysiological evidence.

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically-relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally-relevant features that contribute optimally to perception. It is our view that long

  1. External auditory exostoses in the Xuchang and Xujiayao human remains: Patterns and implications among eastern Eurasian Middle and Late Pleistocene crania.

    Science.gov (United States)

    Trinkaus, Erik; Wu, Xiu-Jie

    2017-01-01

    In the context of Middle and Late Pleistocene eastern Eurasian human crania, the external auditory exostoses (EAE) of the late archaic Xuchang 1 and 2 and the Xujiayao 15 early Late Pleistocene human temporal bones are described. Xujiayao 15 has small EAE (Grade 1), Xuchang 1 presents bilateral medium EAE (Grade 2), and Xuchang 2 exhibits bilaterally large EAE (Grade 3), especially on the right side. These cranial remains join the other eastern Eurasian later Pleistocene humans in providing frequencies of 61% (N = 18) and 58% (N = 12) respectively for archaic and early modern human samples. These values are near the upper limits of recent human frequencies, and they imply frequent aquatic exposure among these Pleistocene humans. In addition, the medial extents of the Xuchang 1 and 2 EAE would have impinged on their tympanic membranes, and the large EAE of Xuchang 2 would have resulted in cerumen impaction. Both effects would have produced conductive hearing loss, a serious impairment in a Pleistocene foraging context.

  2. Neural Correlates of Auditory Perceptual Awareness and Release from Informational Masking Recorded Directly from Human Cortex: A Case Study

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    2016-10-01

    Full Text Available In complex acoustic environments, even salient supra-threshold sounds sometimes go unperceived, a phenomenon known as informational masking. The neural basis of informational masking (and its release) has not been well characterized, particularly outside auditory cortex. We combined electrocorticography in a neurosurgical patient undergoing invasive epilepsy monitoring with trial-by-trial perceptual reports of isochronous target-tone streams embedded in random multi-tone maskers. Awareness of such masker-embedded target streams was associated with a focal negativity between 100 and 200 ms and high-gamma activity between 50 and 250 ms (both in auditory cortex on the posterolateral superior temporal gyrus) as well as a broad P3b-like potential (between ~300 and 600 ms) with generators in ventrolateral frontal and lateral temporal cortex. Unperceived target tones elicited drastically reduced versions of such responses, if at all. While it remains unclear whether these responses reflect conscious perception itself, as opposed to pre- or post-perceptual processing, the results suggest that conscious perception of target sounds in complex listening environments may engage diverse neural mechanisms in distributed brain areas.

  3. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA…]. Its filterbank was designed to approximate auditory filter-shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a "purely" spectrum-based model approach can successfully describe auditory coloration detection even at high…

  4. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ...". The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  5. Blue-light filtering alters angiogenic signaling in human retinal pigmented epithelial cells culture model.

    Science.gov (United States)

    Vila, Natalia; Siblini, Aya; Esposito, Evangelina; Bravo-Filho, Vasco; Zoroquiain, Pablo; Aldrees, Sultan; Logan, Patrick; Arias, Lluis; Burnier, Miguel N

    2017-11-02

    Light exposure, and more specifically the blue part of the light spectrum, contributes to oxidative stress in age-related macular degeneration (AMD). The purpose of the study was to establish whether blue-light filtering could modify proangiogenic signaling produced by retinal pigmented epithelial (RPE) cells under different conditions simulating risk factors for AMD. Three experiments were carried out in which ARPE-19 cells were exposed to white light for 48 h with and without blue light-blocking filters (BLF) under different conditions. In each experiment, one group was exposed to light with no BLF protection, a second group was exposed to light with BLF protection, and a control group was not exposed to light. Prior to light exposure, the ARPE-19 cells used in each experiment were cultured for 24 h as follows: Experiment 1) normoxia, Experiment 2) hypoxia, and Experiment 3) lutein-supplemented media in normoxia. The media from all groups were harvested after light exposure for sandwich ELISA-based assays to quantify 10 pro-angiogenic cytokines. A significant decrease in angiogenin secretion and a significant increase in bFGF were observed following light exposure, compared to dark conditions, in both normoxia and hypoxia. With the addition of a blue light-blocking filter in normoxia, a significant increase in angiogenin levels was observed. Although statistical significance was not achieved, blue-light filters reduced light-induced secretion of bFGF and VEGF to near-normal levels. This trend is also observed when ARPE-19 cells are grown under hypoxic conditions and when pre-treated with lutein prior to exposure to experimental conditions. Following light exposure, there is a decrease in angiogenin secretion by ARPE-19 cells, which was abrogated with a blue light-blocking filter. Our findings support the position that blue-light filtering affects the secretion of angiogenic factors by retinal pigmented epithelial cells under normoxic, hypoxic, and lutein

  6. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise eLecaignard

    2015-09-01

    Full Text Available Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the Event-Related Potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigated violations of different time-scale regularities. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing in which predictive coding enables implicit extraction of environmental regularities.

  7. The Nano-filters as the tools for the management of the water imbalance in the human society

    Science.gov (United States)

    Singh, R. P.; Kontar, V.

    2011-12-01

    ultra-thin nanoscale fibers, which filter out contaminants, plus active carbon granules, which kill bacteria. Carbon nanotube filters exhibit chemical-species selectivity, higher physical strength and temperature tolerance, a more rugged process, more rapid filtration, regeneration via thermal rather than physical means, and lower costs. The nano-filters remove toxic or unwanted bivalent ions (ions with 2 or more charges), such as lead, iron, nickel, mercury, etc. Nano-materials and nano-filters will help solve the problems of water-imbalance management in human society. We therefore discuss these nano-applications in session H138, "Imbalance of Water in Nature".

  8. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  9. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC. In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  10. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  11. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role on song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  12. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycho-linguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  13. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    International Nuclear Information System (INIS)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Devauchelle, A.D.; Dehaene, S.; Pallier, C.; Devauchelle, A.D.; Pallier, C.; Oppenheim, C.; Rizzi, L.; Dehaene, S.

    2009-01-01

    Priming effects have been well documented in behavioral psycho-linguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  14. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  15. Effect of noise and filtering on largest Lyapunov exponent of time series associated with human walking.

    Science.gov (United States)

    Mehdizadeh, Sina; Sanjari, Mohammad Ali

    2017-11-07

    This study aimed to determine the effect of added noise, filtering, and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR = 55 to 25 dB in 5 dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3 Hz to 19 Hz in 2 Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percent error for LyE in the presence of noise. These observations suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, the findings indicated that at least 50 walking strides are required when calculating LyE to account for the effect of noise. Finally, the observations support that conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE. Copyright © 2017 Elsevier Ltd. All rights reserved.
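
    The procedure the abstract describes — delay embedding, pairing each state with its nearest neighbour, and fitting the slope of the mean log divergence — can be sketched in Python. This is a minimal illustration of Rosenstein's algorithm, not the authors' implementation; the defaults (embedding dimension 3, delay 100 samples) follow the abstract, while the fit horizon `n_steps` and the neighbour-exclusion window are assumptions.

```python
import numpy as np

def rosenstein_lye(x, dim=3, delay=100, dt=1.0, n_steps=20):
    """Estimate the largest Lyapunov exponent (LyE) of a scalar time
    series via Rosenstein's algorithm: delay-embed the series, pair each
    state with its nearest neighbour, and fit the slope of the mean log
    divergence of those pairs over time."""
    n = len(x) - (dim - 1) * delay
    # Delay embedding into a dim-dimensional state space.
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    # Pairwise distances; exclude temporally close points as neighbours.
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    idx = np.arange(n)
    for i in idx:
        dist[i, max(0, i - delay):i + delay + 1] = np.inf
    nn = np.argmin(dist, axis=1)
    # Mean log separation of each pair k steps into the future.
    div = []
    for k in range(n_steps):
        ok = (idx + k < n) & (nn + k < n)
        sep = np.linalg.norm(emb[idx[ok] + k] - emb[nn[ok] + k], axis=1)
        sep = sep[sep > 0]
        div.append(np.log(sep).mean() if sep.size else np.nan)
    div = np.asarray(div)
    ok = ~np.isnan(div)
    # The LyE is the slope of the divergence curve vs. time.
    return np.polyfit(np.arange(n_steps)[ok] * dt, div[ok], 1)[0]
```

    For a chaotic signal such as the logistic map at r = 4 the estimate comes out positive, whereas for a periodic signal the neighbour separation stays roughly constant and the slope is near zero — which is why added noise (inflating the apparent divergence of close neighbours) biases the estimate, as the study reports.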

  16. Binding of [³H]imipramine to human platelet membranes with compensation for saturable binding to filters and its implication for binding studies with brain membranes

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, O.M.; Wood, K.M.; Williams, D.C.

    1984-08-01

    Apparent specific binding of [³H]imipramine to human platelet membranes at high concentrations of imipramine showed deviation from that expected of a single binding site, a result consistent with a low-affinity binding site. The deviation was due to displaceable, saturable binding to the glass fibre filters used in the assays. Imipramine, clomipramine, desipramine, and fluoxetine inhibited binding to the filters, whereas 5-hydroxytryptamine and ethanol were ineffective. Experimental conditions were developed that eliminated filter binding, allowing assay of high- and low-affinity binding to membranes. Failure to correct for filter binding may lead to overestimation of the binding parameters Bmax and KD for high-affinity binding to membranes, and may also be misinterpreted as indicating a low-affinity binding component in both platelet and brain membranes. Low-affinity binding (KD less than 2 microM) of imipramine to human platelet membranes was demonstrated and its significance discussed.
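
    The overestimation effect described above can be illustrated numerically with the standard one-site binding model, B = Bmax·L/(KD + L): a saturable filter component added to true membrane binding inflates both fitted parameters unless it is subtracted out. All parameter values below are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, Bmax, Kd):
    """Single-site saturable binding: B = Bmax * L / (Kd + L)."""
    return Bmax * L / (Kd + L)

L = np.linspace(0.1, 50.0, 25)         # free ligand (nM), illustrative
membrane = one_site(L, 100.0, 5.0)     # "true" membrane binding
filter_bind = one_site(L, 40.0, 20.0)  # saturable filter binding
observed = membrane + filter_bind      # what an uncorrected assay measures

# Fitting a single site to the composite data inflates Bmax and KD.
(bad_Bmax, bad_Kd), _ = curve_fit(one_site, L, observed, p0=[100.0, 5.0])

# Subtracting an independently measured filter blank restores them.
(Bmax, Kd), _ = curve_fit(one_site, L, observed - filter_bind, p0=[100.0, 5.0])
```

    With the filter blank subtracted, the fit recovers the membrane parameters exactly (the data here are noise-free); the uncorrected fit returns inflated values of both Bmax and KD, which is precisely the misinterpretation the authors warn against.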

  17. Detection of anomaly in human retina using Laplacian Eigenmaps and vectorized matched filtering

    Science.gov (United States)

    Yacoubou Djima, Karamatou A.; Simonelli, Lucia D.; Cunningham, Denise; Czaja, Wojciech

    2015-03-01

    We present a novel method for automated anomaly detection on autofluorescence data provided by the National Institutes of Health (NIH). This work is motivated by the need for new tools to improve the capability of diagnosing macular degeneration in its early stages, tracking its progression over time, and testing the effectiveness of new treatment methods. In previous work, macular anomalies have been detected automatically through multiscale analysis procedures, such as wavelet analysis, or through dimensionality reduction algorithms followed by a classification algorithm, e.g., a Support Vector Machine. The method that we propose is a Vectorized Matched Filtering (VMF) algorithm combined with Laplacian Eigenmaps (LE), a nonlinear dimensionality reduction algorithm with locality-preserving properties. By applying LE, we are able to represent the data in the form of eigenimages, some of which accentuate the visibility of anomalies. We pick significant eigenimages and proceed with the VMF algorithm, which classifies anomalies across all of these eigenimages simultaneously. To evaluate performance, we compare our method to two other schemes: a matched filtering algorithm based on anomaly detection in single images, and a combination of PCA and VMF. LE combined with VMF performs best, yielding a high rate of accurate anomaly detection. This shows the advantage of using a nonlinear approach to represent the data and the effectiveness of VMF, which operates on the images as a data cube rather than as individual images.
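
    The two stages of the pipeline — a Laplacian Eigenmaps embedding followed by matched filtering — can be sketched as follows. This is a generic illustration, not the authors' code: the Gaussian affinity, the value of `sigma`, and the template-matching details are all assumptions, and the matched filter below is the single-image baseline rather than the vectorized (data-cube) variant.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_components=2, sigma=1.0):
    """Embed points X (n_samples, n_features) by solving the generalized
    eigenproblem L v = lam D v of the graph Laplacian built from a
    Gaussian affinity; the smallest nontrivial eigenvectors give
    locality-preserving coordinates."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)             # generalized symmetric solver
    return vecs[:, 1:1 + n_components]  # skip the trivial constant vector

def matched_filter(img, tmpl):
    """Normalized cross-correlation of a small template at every valid
    offset; peaks mark template-like (anomalous) regions."""
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()).ravel()
    out = np.zeros((img.shape[0] - th + 1, img.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            p = img[i:i + th, j:j + tw].ravel()
            p = p - p.mean()
            denom = np.linalg.norm(p) * np.linalg.norm(t)
            out[i, j] = p @ t / denom if denom > 0 else 0.0
    return out
```

    In the VMF setting, the correlation would be taken jointly across the stack of eigenimages rather than image by image; the single-image version above corresponds to the baseline scheme the authors compare against.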

  18. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies have attempted to noninvasively visualize pathological neural activity in the living human brain and to reverse maladaptive cortical reorganization by suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs, guided by the visualization of pathological brain activities using recent neuroimaging techniques, may contribute to the establishment of new clinical applications for affected individuals.

  19. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    Full Text Available This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  20. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.

  1. Retina-Inspired Filter.

    Science.gov (United States)

    Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Gaulmin, Julien

    2018-07-01

    This paper introduces a novel filter inspired by the human retina. The human retina consists of three different layers: the outer plexiform layer (OPL), the inner plexiform layer, and the ganglionic layer. Our inspiration is the linear transform which takes place in the OPL and has been mathematically described by the neuroscientific model "virtual retina." This model is the cornerstone from which we derive the non-separable spatio-temporal OPL retina-inspired filter, hereafter simply called the retina-inspired filter, studied in this paper. This filter is connected to the dynamic behavior of the retina, which enables the retina to increase the sharpness of the visual stimulus during filtering, before its transmission to the brain. We establish that this retina-inspired transform forms a group of spatio-temporal Weighted Difference of Gaussian (WDoG) filters when it is applied to a still image visible for a given time. We analyze the spatial frequency bandwidth of the retina-inspired filter with respect to time. It is shown that the WDoG spectrum varies from a lowpass filter to a bandpass filter. Therefore, as time increases, the retina-inspired filter makes it possible to extract different kinds of information from the input image. Finally, we discuss the benefits of using the retina-inspired filter in image processing applications such as edge detection and compression.
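
    The paper's central object, the spatio-temporal Weighted Difference of Gaussians, can be illustrated with a purely spatial sketch of the lowpass-to-bandpass sweep. The kernel size, the sigmas, and the time-dependent surround weight below are invented for illustration; only the WDoG form itself comes from the abstract.

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Normalized 2-D Gaussian kernel (sums to 1)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def wdog_kernel(size, sigma_c, sigma_s, w_c, w_s):
    """Weighted Difference of Gaussians: a narrow centre minus a weighted
    broad surround. With w_s = 0 the kernel is lowpass (it passes the
    image mean); as w_s approaches w_c the DC gain goes to zero and the
    kernel becomes bandpass, emphasizing edges."""
    return w_c * gaussian_kernel(size, sigma_c) - w_s * gaussian_kernel(size, sigma_s)

# A crude stand-in for the temporal dynamics: the surround weight ramps
# up over time, sweeping the filter from lowpass to bandpass. The
# exponential ramp is an assumption, not the model's actual kinetics.
def wdog_at_time(t, tau=1.0):
    w_s = 1.0 - np.exp(-t / tau)
    return wdog_kernel(size=9, sigma_c=1.0, sigma_s=3.0, w_c=1.0, w_s=w_s)
```

    The DC gain of the kernel (its sum) is w_c − w_s, so it falls from 1 toward 0 as t grows — a simple way to see the lowpass-to-bandpass transition the paper analyzes in the frequency domain.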

  2. Auditory function in the Tc1 mouse model of down syndrome suggests a limited region of human chromosome 21 involved in otitis media.

    Directory of Open Access Journals (Sweden)

    Stephanie Kuhn

    Full Text Available Down syndrome is one of the most common congenital disorders, leading to a wide range of health problems in humans, including frequent otitis media. The Tc1 mouse carries a significant part of human chromosome 21 (Hsa21) in addition to the full set of mouse chromosomes and shares many phenotypes observed in humans affected by Down syndrome with trisomy of chromosome 21. However, it is unknown whether Tc1 mice exhibit a hearing phenotype and might thus represent a good model for understanding the hearing loss that is common in Down syndrome. In this study we carried out a structural and functional assessment of hearing in Tc1 mice. Auditory brainstem response (ABR) measurements in Tc1 mice showed normal thresholds compared to littermate controls, and ABR waveform latencies and amplitudes were equivalent to controls. The gross anatomy of the middle and inner ears was also similar between Tc1 and control mice. The physiological properties of cochlear sensory receptors (inner and outer hair cells: IHCs and OHCs) were investigated using single-cell patch clamp recordings from acutely dissected cochleae. Adult Tc1 IHCs exhibited normal resting membrane potentials and expressed all K⁺ currents characteristic of control hair cells. However, the size of the large-conductance (BK) Ca²⁺-activated K⁺ current (I_K,f), which enables rapid voltage responses essential for accurate sound encoding, was increased in Tc1 IHCs. All physiological properties investigated in OHCs were indistinguishable between the two genotypes. The normal functional hearing and gross structural anatomy of the middle and inner ears in the Tc1 mouse contrast with those observed in the Ts65Dn model of Down syndrome, which shows otitis media. Genes that are trisomic in Ts65Dn but disomic in Tc1 may predispose to otitis media when an additional copy is active.

  3. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining the strength of association between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, because the neural responses across listening conditions would simply be treated as independent measures. In contrast, the LME models allow a systematic approach that incorporates both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, as well as the necessity to apply, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
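The pitfall described above — treating repeated measures from the same subjects as independent points — can be made concrete with a small simulation. This is a hypothetical numerical sketch, not data from the study; subject-wise mean-centering stands in for the random-intercept term of an LME, and all variable names are illustrative.

```python
import numpy as np

# Hypothetical setup: 20 subjects x 4 listening conditions. Within each
# subject the neural measure predicts the behavioral score positively,
# but subjects differ widely in baseline, which reverses the pooled trend.
rng = np.random.default_rng(0)
n_subj, n_cond = 20, 4
base = rng.normal(0.0, 10.0, n_subj)                  # per-subject baselines

x_within = rng.normal(0.0, 1.0, (n_subj, n_cond))     # condition-level variation
x = x_within + base[:, None]                          # observed neural measure
y = x_within - 2.0 * base[:, None] \
    + rng.normal(0.0, 0.2, (n_subj, n_cond))          # observed behavioral score

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Pooled Pearson treats all subject-condition points as independent and
# is dominated by the between-subject baseline differences.
r_pooled = pearson(x.ravel(), y.ravel())

# Removing each subject's mean (the random-intercept idea behind an LME)
# recovers the true positive within-subject relationship.
xw = (x - x.mean(axis=1, keepdims=True)).ravel()
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
r_within = pearson(xw, yw)
```

Here the pooled correlation comes out negative while the within-subject correlation is strongly positive — the sign flip that makes naive Pearson analysis of repeated measures misleading.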

  5. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Nielsen, H M; Rassing, M R; Nielsen, Hanne Mørck

    2000-01-01

    The objective of the present study was to evaluate the TR146 cell culture model as an in vitro model of human buccal epithelium. For this purpose, the permeability of water, mannitol and testosterone across the TR146 cell culture model was compared to the permeability across human, monkey...

  6. Loud Music Exposure and Cochlear Synaptopathy in Young Adults: Isolated Auditory Brainstem Response Effects but No Perceptual Consequences.

    Science.gov (United States)

    Grose, John H; Buss, Emily; Hall, Joseph W

    2017-01-01

    The purpose of this study was to test the hypothesis that listeners with frequent exposure to loud music exhibit deficits in suprathreshold auditory performance consistent with cochlear synaptopathy. Young adults with normal audiograms were recruited who either did (n = 31) or did not (n = 30) have a history of frequent attendance at loud music venues where the typical sound levels could be expected to result in temporary threshold shifts. A test battery was administered that comprised three sets of procedures: (a) electrophysiological tests including distortion product otoacoustic emissions, auditory brainstem responses, envelope following responses, and the acoustic change complex evoked by an interaural phase inversion; (b) psychoacoustic tests including temporal modulation detection, spectral modulation detection, and sensitivity to interaural phase; and (c) speech tests including filtered phoneme recognition and speech-in-noise recognition. The results demonstrated that a history of loud music exposure can lead to a profile of peripheral auditory function that is consistent with an interpretation of cochlear synaptopathy in humans, namely, modestly abnormal auditory brainstem response Wave I/Wave V ratios in the presence of normal distortion product otoacoustic emissions and normal audiometric thresholds. However, there were no other electrophysiological, psychophysical, or speech perception effects. The absence of any behavioral effects in suprathreshold sound processing indicated that, even if cochlear synaptopathy is a valid pathophysiological condition in humans, its perceptual sequelae are either too diffuse or too inconsequential to permit a simple differential diagnosis of hidden hearing loss.

  7. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli with only few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signals' vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on underdetermined blind source separation. We compare our model with both an underdetermined blind source separation method and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
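A common way to make this kind of estimator concrete is the log-max (mixture-maximum) approximation, under which the log spectrum of a two-speaker mixture is approximated by the elementwise maximum of the source log spectra. The sketch below is a minimal illustration of maximum-likelihood pairing under that approximation with unit-variance Gaussians; the codebooks, their sizes, and the noiseless mixture are hypothetical and do not reproduce the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 64

# Hypothetical codebooks of mean log-spectral envelopes (vocal-tract-
# related filters) for two speakers, e.g. learned offline by a GMM.
codebook_a = rng.normal(0.0, 1.0, (8, n_bins))
codebook_b = rng.normal(0.0, 1.0, (8, n_bins))

# Log-max approximation: the log spectrum of the mixture is roughly
# the elementwise max of the two source log spectra.
true_a, true_b = 3, 5
mixture = np.maximum(codebook_a[true_a], codebook_b[true_b])

# ML pairing under unit-variance Gaussians: choose the (i, j) pair
# whose predicted log-max spectrum best matches the observed mixture.
best, best_err = None, np.inf
for i in range(len(codebook_a)):
    for j in range(len(codebook_b)):
        pred = np.maximum(codebook_a[i], codebook_b[j])
        err = float(np.sum((mixture - pred) ** 2))
        if err < best_err:
            best, best_err = (i, j), err
```

In this noiseless toy case the search recovers the generating pair exactly; a real system would score full Gaussian likelihoods over many frames rather than a single squared-error match.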

  9. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  10. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  11. Musical noise reduction using an adaptive filter

    Science.gov (United States)

    Hanada, Takeshi; Murakami, Takahiro; Ishida, Yoshihisa; Hoya, Tetsuya

    2003-10-01

    This paper presents a method for reducing a particular noise (musical noise). Musical noise is artificially produced by spectral subtraction (SS), one of the most common methods for speech enhancement. It is a tin-like sound that listeners find annoying; its duration is considerably short in comparison with that of speech, and its frequency components are random and isolated. In ordinary SS-based methods, the musical noise is removed by post-processing. However, the output of the ordinary post-processing is delayed, since the post-processing uses succeeding frames. To address this problem, we propose a novel method using an adaptive filter. In the proposed system, the observed noisy signal is used as the input signal to the adaptive filter and the output of SS is used as the reference signal. In this paper we exploit the normalized LMS (least mean square) algorithm for the adaptive filter. Simulation results show that the proposed method improves the intelligibility of the enhanced speech in comparison with the conventional method.
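The normalized LMS update at the heart of such a system can be sketched in a few lines. The toy system-identification check below (the channel h and the signals x and d) is purely illustrative, standing in for the noisy input and the SS reference of the paper.

```python
import numpy as np

def nlms(x, d, order=8, mu=0.5, eps=1e-8):
    """Normalized LMS: adapt an FIR filter so that filtering x tracks d."""
    w = np.zeros(order)
    y = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-order+1]]
        y[n] = w @ u
        e = d[n] - y[n]                    # error against the reference signal
        w += mu * e * u / (eps + u @ u)    # power-normalized weight update
    return y

# Toy check (not the paper's data): identify a known 3-tap channel
# from white noise; after convergence the output tracks d closely.
rng = np.random.default_rng(2)
x = rng.normal(size=4000)                  # stand-in for the noisy input
h = np.array([0.6, -0.3, 0.1])
d = np.convolve(x, h)[:len(x)]             # stand-in for the SS reference
y = nlms(x, d)
mse_tail = float(np.mean((d[-500:] - y[-500:]) ** 2))
```

The normalization by the input power `u @ u` is what makes the step size mu dimensionless and keeps the update stable for 0 < mu < 2 regardless of signal level.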

  12. The fast detection of rare auditory feature conjunctions in the human brain as revealed by cortical gamma-band electroencephalogram.

    Science.gov (United States)

    Ruusuvirta, T; Huotilainen, M

    2005-01-01

    Natural environments typically contain temporal scatters of sounds emitted from multiple sources. The sounds may often physically stand out from one another in their conjoined rather than simple features. This poses a particular challenge for the brain to detect which of these sounds are rare and, therefore, potentially important for survival. We recorded gamma-band (32-40 Hz) electroencephalographic (EEG) oscillations from the scalp of adult humans who passively listened to a repeated tone carrying frequent and rare conjunctions of its frequency and intensity. EEG oscillations that this tone induced, rather than evoked, differed in amplitude between the two conjunction types within the 56-ms analysis window from tone onset. Our finding suggests that, perhaps with the support of its non-phase-locked synchrony in the gamma band, the human brain is able to detect rare sounds as feature conjunctions very rapidly.

  13. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Mørck Nielsen, H; Rømer Rassing, M; Nielsen, Hanne Mørck

    2000-01-01

    cell culture model, and human and porcine buccal epithelium were compared. The esterase activity in the intact cell culture model and in the porcine buccal mucosa was compared. Further, the TR146 cell culture model was used to study the permeability rate and metabolism of leu-enkephalin. The activity...... of the three enzymes in the TR146 homogenate supernatants was in the same range as the activity in homogenate supernatants of human buccal epithelium. In the TR146 cell culture model, the activity of aminopeptidase (13.70+/-2.10 nmol/min per mg protein) was approx. four times the activity of carboxypeptidase...

  14. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Nielsen, Hanne Mørck; Verhoef, J C; Ponec, M

    1999-01-01

    The aim of the present study was to characterize the TR146 cell culture model as an in vitro model of human buccal epithelium with respect to the permeability of test substances with different molecular weights (M(w)). For this purpose, the apparent permeability (P(app)) values for mannitol...... and for fluorescein isothiocyanate (FITC)-labelled dextrans (FD) with various M(w) (4000-40000) were compared to the P(app) values obtained using porcine buccal mucosa as an in vitro model of the human buccal epithelium. The effect of 10 mM sodium glycocholate (GC) on the P(app) values was examined. To identify...

  15. Auditory Modeling as a Basis for Spectral Modulation Analysis with Application to Speaker Recognition

    National Research Council Canada - National Science Library

    Wang, Tianyu T; Quatieri, Thomas F

    2007-01-01

    ...) variations in analysis filter-bank size, and (3) nonlinear adaptation. Our methods are motivated both by a desire to better mimic auditory processing relative to traditional front-ends (e.g., the mel-cepstrum...

  16. Modification of computational auditory scene analysis (CASA) for noise-robust acoustic feature

    Science.gov (United States)

    Kwon, Minseok

    While there have been many attempts to mitigate interference from background noise, the performance of automatic speech recognition (ASR) can still be easily degraded by various factors. Normal-hearing listeners, however, can accurately perceive the sounds of interest to them, which is believed to be a result of auditory scene analysis (ASA). As a first attempt, a simulation of human auditory processing, called computational auditory scene analysis (CASA), was built on physiological and psychological investigations of ASA. The CASA front end comprised the Zilany-Bruce auditory model, followed by fundamental-frequency tracking for voiced segmentation and detection of onset/offset pairs at each characteristic frequency (CF) for unvoiced segmentation. The resulting time-frequency (T-F) representation of the acoustic stimulation was converted into an acoustic feature, gammachirp-tone frequency cepstral coefficients (GFCC). Eleven keywords under various environmental conditions were used, and the robustness of GFCC was evaluated by spectral distance (SD) and dynamic time warping distance (DTW). In "clean" and "noisy" conditions, the application of CASA generally improved the noise robustness of the acoustic feature compared to a conventional method with or without noise suppression using an MMSE estimator. The initial study, however, not only showed a noise-type dependency at low SNR, but also called the evaluation methods into question. Some modifications were made to capture better spectral continuity from the acoustic feature matrix, to obtain faster processing speed, and to describe the human auditory system more precisely. The proposed framework includes: 1) multi-scale integration to capture more accurate continuity in feature extraction, 2) contrast enhancement (CE) of each CF by competition with neighboring frequency bands, and 3) auditory model modifications. The model modifications include the introduction of a higher Q factor and a middle-ear filter more analogous to the human auditory system
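The DTW distance used for evaluation above follows the classic dynamic-programming recurrence. A minimal sketch, with short sinusoidal feature sequences as illustrative stand-ins for GFCC frames:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic DTW via dynamic programming over two feature sequences
    (frames x coefficients), with Euclidean frame-to-frame cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A time-stretched copy of a trajectory costs nothing under DTW,
# while a genuinely different trajectory does not.
t = np.linspace(0.0, 1.0, 50)
seq = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
stretched = np.repeat(seq, 2, axis=0)      # same trajectory at half speed
other = np.column_stack([np.sin(6 * np.pi * t), np.cos(6 * np.pi * t)])
d_same = dtw_distance(seq, stretched)
d_diff = dtw_distance(seq, other)
```

This invariance to time stretching is exactly why DTW, rather than a frame-by-frame spectral distance, is a sensible robustness measure for features extracted from utterances spoken at different rates.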

  17. Auditory ERB like admissible wavelet packet features for TIMIT phoneme recognition

    Directory of Open Access Journals (Sweden)

    P.K. Sahu

    2014-09-01

    Full Text Available In recent years the wavelet transform has been found to be an effective tool for time–frequency analysis. It has been used for feature extraction in speech recognition applications and has proved to be an effective technique for unvoiced phoneme classification. In this paper a new filter structure using admissible wavelet packets is analyzed for English phoneme recognition. These filters have the benefit of frequency-band spacing similar to the auditory Equivalent Rectangular Bandwidth (ERB) scale, whose centre frequencies are equally distributed along the frequency response of the human cochlea. A new set of features is derived using the wavelet packet transform's multi-resolution capabilities and found to be better than conventional features for unvoiced phoneme problems. Some of the noise types from the NOISEX-92 database have been used to prepare an artificial noisy database to test the robustness of the wavelet-based features.
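The ERB scale referred to above is commonly parameterized by the Glasberg and Moore (1990) formula ERB(f) = 24.7 (4.37 f/1000 + 1). A short sketch of how centre frequencies equally spaced on the ERB-rate scale — the spacing an ERB-like filter bank aims to reproduce — can be laid out:

```python
import numpy as np

def erb_bandwidth(f_hz):
    """ERB in Hz at centre frequency f (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_rate(f_hz):
    """Map frequency (Hz) to the ERB-rate (ERB-number) scale."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_centre_freqs(f_lo, f_hi, n):
    """n centre frequencies equally spaced on the ERB-rate scale,
    obtained by inverting erb_rate at equally spaced ERB numbers."""
    e = np.linspace(erb_rate(f_lo), erb_rate(f_hi), n)
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

# Example: 16 band centres spanning 100 Hz to 8 kHz.
cfs = erb_centre_freqs(100.0, 8000.0, 16)
```

The resulting centres are densely packed at low frequencies and sparse at high frequencies, mirroring the cochlea's frequency resolution; at 1 kHz the ERB is about 133 Hz.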

  18. Grizzly bears as a filter for human use management in Canadian Rocky Mountain national parks

    Science.gov (United States)

    Derek Petersen

    2000-01-01

    Canadian National Parks within the Rocky Mountains recognize that human use must be managed if the integrity and health of the ecosystems are to be preserved. Parks Canada is being challenged to ensure that these management actions are based on credible scientific principles and understanding. Grizzly bears provide one of only a few ecological tools that can be used to...

  19. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Nielsen, Hanne Mørck; Rassing, M R

    1999-01-01

    The aim of the present study was to evaluate the TR146 cell culture model as an in vitro model of human buccal epithelium with respect to the permeability enhancement by different pH values, different osmolality values or bile salts. For this purpose, the increase in the apparent permeability (P...

  20. Do event-related potentials reveal the mechanism of the auditory sensory memory in the human brain?

    Science.gov (United States)

    Näätänen, R; Paavilainen, P; Alho, K; Reinikainen, K; Sams, M

    1989-03-27

    Event-related brain potentials (ERP) to task-irrelevant tone pips presented at short intervals were recorded from the scalp of normal human subjects. Infrequent decrements in stimulus intensity elicited the mismatch negativity (MMN) which was larger in amplitude and shorter in latency the softer the deviant stimulus was. The results obtained imply memory representations which develop automatically and accurately represent the physical features of the repetitive stimulus. These memory traces appear to be those of the acoustic sensory memory, the 'echoic' memory. When an input does not match with such a trace the MMN is generated.

  1. Filter arrays

    Science.gov (United States)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.

  2. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  3. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. 
These results not only show the unique

  4. A new technique to characterize CT scanner bow-tie filter attenuation and applications in human cadaver dosimetry simulations

    Science.gov (United States)

    Li, Xinhua; Shi, Jim Q.; Zhang, Da; Singh, Sarabjeet; Padole, Atul; Otrakji, Alexi; Kalra, Mannudeep K.; Xu, X. George; Liu, Bob

    2015-01-01

    Purpose: To present a noninvasive technique for directly measuring the CT bow-tie filter attenuation with a linear array x-ray detector. Methods: A scintillator based x-ray detector of 384 pixels, 307 mm active length, and fast data acquisition (model X-Scan 0.8c4-307, Detection Technology, FI-91100 Ii, Finland) was used to simultaneously detect radiation levels across a scan field-of-view. The sampling time was as short as 0.24 ms. To measure the body bow-tie attenuation on a GE Lightspeed Pro 16 CT scanner, the x-ray tube was parked at the 12 o’clock position, and the detector was centered in the scan field at the isocenter height. Two radiation exposures were made with and without the bow-tie in the beam path. Each readout signal was corrected for the detector background offset and signal-level related nonlinear gain, and the ratio of the two exposures gave the bow-tie attenuation. The results were used in the geant4 based simulations of the point doses measured using six thimble chambers placed in a human cadaver with abdomen/pelvis CT scans at 100 or 120 kV, helical pitch at 1.375, constant or variable tube current, and distinct x-ray tube starting angles. Results: Absolute attenuation was measured with the body bow-tie scanned at 80–140 kV. For 24 doses measured in six organs of the cadaver, the median or maximum difference between the simulation results and the measurements on the CT scanner was 8.9% or 25.9%, respectively. Conclusions: The described method allows fast and accurate bow-tie filter characterization. PMID:26520720
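The core of the described measurement reduces to an offset-corrected ratio of the two exposures. A minimal sketch, with a hypothetical parabolic transmission profile standing in for a real bow-tie and the nonlinear-gain correction left as an optional callable:

```python
import numpy as np

def bowtie_attenuation(sig_bowtie, sig_open, dark_offset, linearize=None):
    """Per-pixel bow-tie attenuation from two line-detector readouts.

    Each readout is corrected for the detector's dark offset and,
    optionally, a gain-linearization step (passed as a callable);
    the attenuation is the ratio of the corrected exposures.
    """
    a = np.asarray(sig_bowtie, dtype=float) - dark_offset
    b = np.asarray(sig_open, dtype=float) - dark_offset
    if linearize is not None:
        a, b = linearize(a), linearize(b)
    return a / b

# Toy check with a hypothetical parabolic transmission profile
# across a 384-pixel line detector (unit transmission at the centre).
pix = np.linspace(-1.0, 1.0, 384)
true_att = 1.0 - 0.8 * pix ** 2
offset = 50.0
open_beam = offset + 4000.0                # flat open-field exposure
with_bt = offset + 4000.0 * true_att       # exposure through the bow-tie
est = bowtie_attenuation(with_bt, open_beam, offset)
```

Because the flat-field and bow-tie exposures share the same detector response, common gain factors cancel in the ratio, which is why the offset and nonlinearity corrections are the only calibration steps needed.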

  5. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.

  6. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  7. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  8. Speech enhancement via Mel-scale Wiener filtering with a frequency-wise voice activity detector

    International Nuclear Information System (INIS)

    Kim, Han Jun; Kim, Hwa Soo; Cho, Young Man

    2007-01-01

    This paper presents a speech enhancement system that enables comfortable communication inside an automobile. A couple of novel concepts are proposed in an effort to improve two major building blocks in existing speech enhancement systems: the voice activity detector (VAD) and the noise filtering algorithm. The proposed VAD classifies a given data frame as speech or noise at each frequency, enabling frequency-wise updates of the noise statistics and thereby improving the effectiveness of the noise filtering algorithm by providing more up-to-date noise statistics. The celebrated Wiener filter is adopted in this paper as the accompanying noise filtering algorithm, which results in significant noise suppression. Yet the musical noise present in most Wiener filter-based systems prompts the idea of applying the Wiener filter on the Mel scale, on which the human auditory system responds to external stimulation. It turns out that the Mel-scale Wiener filter creates some masking effects and thereby reduces musical noise significantly, leading to smooth transitions between data frames.
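The idea of computing Wiener gains per Mel band rather than per FFT bin can be sketched as follows; the band layout, the SNR estimate, and the gain formula are a generic illustration, not the paper's exact system.

```python
import numpy as np

def hz_to_mel(f):
    # Common Mel mapping: m = 2595 * log10(1 + f/700)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_wiener_gains(noisy_psd, noise_psd, sr, n_bands=24):
    """Wiener gains computed per Mel band rather than per FFT bin.

    Averaging the SNR estimate inside each Mel band smooths the
    isolated spectral peaks that produce musical noise; every bin in
    a band then shares that band's Wiener gain. Assumes enough FFT
    bins that no Mel band is empty.
    """
    n_bins = len(noisy_psd)
    freqs = np.linspace(0.0, sr / 2.0, n_bins)
    edges = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_bands + 1)
    band = np.digitize(hz_to_mel(freqs), edges[1:-1])   # bin -> band index
    gains = np.empty(n_bins)
    for b in range(n_bands):
        sel = band == b
        noise = noise_psd[sel].mean()
        speech = max(noisy_psd[sel].mean() - noise, 0.0)
        gains[sel] = speech / (speech + noise)          # Wiener gain
    return gains

# Flat example: noisy power twice the noise power gives gain 0.5 everywhere.
g = mel_wiener_gains(np.full(257, 2.0), np.ones(257), sr=16000)
```

Broadcasting one gain across each perceptually motivated band is what suppresses the random, isolated spectral peaks that a bin-wise Wiener filter would leave behind as musical noise.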

  9. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
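
    The "systems-theoretic methods of stimulus reconstruction" referred to here are commonly linear backward models: a ridge-regression decoder maps time-lagged neural responses back to the stimulus envelope. The sketch below illustrates that idea on synthetic data; the lag range, ridge penalty, and two-channel "response" are arbitrary assumptions for the demonstration, not the authors' parameters.

```python
import numpy as np

def lag_matrix(resp, lags):
    """Stack time-lagged copies of a (T x channels) response into a design matrix."""
    T, C = resp.shape
    X = np.zeros((T, C * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(resp, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0        # zero the samples wrapped in by np.roll
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * C:(i + 1) * C] = shifted
    return X

def train_decoder(resp, envelope, lags, ridge=1.0):
    """Ridge regression: minimize ||X w - envelope||^2 + ridge * ||w||^2."""
    X = lag_matrix(resp, lags)
    XtX = X.T @ X + ridge * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

# Synthetic demo: two "neural" channels that follow the envelope at fixed delays.
rng = np.random.default_rng(1)
T = 2000
env = rng.standard_normal(T)
resp = np.stack([np.roll(env, 3), np.roll(env, 5)], axis=1)
resp += 0.1 * rng.standard_normal((T, 2))
lags = range(-8, 1)                   # decoder looks at the response up to 8 samples after the stimulus
w = train_decoder(resp, env, lags, ridge=1e-2)
recon = lag_matrix(resp, lags) @ w    # reconstructed envelope
r = np.corrcoef(recon, env)[0, 1]     # reconstruction fidelity
```

    Reconstruction fidelity (the correlation r) is the quantity such studies compare between attended and ignored streams.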

  10. Filtering the reality: functional dissociation of lateral and medial pain systems during sleep in humans.

    Science.gov (United States)

    Bastuji, Hélène; Mazza, Stéphanie; Perchet, Caroline; Frot, Maud; Mauguière, François; Magnin, Michel; Garcia-Larrea, Luis

    2012-11-01

    Behavioral reactions to sensory stimuli during sleep are scarce despite preservation of sizeable cortical responses. To further understand such dissociation, we recorded intracortical field potentials to painful laser pulses in humans during waking and all-night sleep. Recordings were obtained from the three cortical structures receiving 95% of the spinothalamic cortical input in primates, namely the parietal operculum, posterior insula, and mid-anterior cingulate cortex. The dynamics of responses during sleep differed among cortical sites. In sleep Stage 2, evoked potential amplitudes were similarly attenuated relative to waking in all three cortical regions. During paradoxical, or rapid eye movement (REM), sleep, opercular and insular potentials remained stable in comparison with Stage 2, whereas the responses from the mid-anterior cingulate abated drastically, decreasing below background noise in half of the subjects. Thus, while the lateral operculo-insular system subserving sensory analysis of somatic stimuli remained active during paradoxical-REM sleep, mid-anterior cingulate processes related to orienting and avoidance behavior were suppressed. Dissociation between sensory and orienting-motor networks might explain why nociceptive stimuli can be either neglected or incorporated into dreams without awakening the subject. Copyright © 2011 Wiley Periodicals, Inc.

  11. Brian hears: online auditory processing using vectorization over channels.

    Science.gov (United States)

    Fontaine, Bertrand; Goodman, Dan F M; Benichoux, Victor; Brette, Romain

    2011-01-01

    The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
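
    Vectorization over channels, as in Brian Hears, can be illustrated with a toy filterbank: each channel is a second-order resonator with its own coefficients, and the per-sample recurrence is evaluated as an array operation across all channels, so the only Python-level loop runs over time. The resonator design below is a simplified stand-in for the gammatone-style cochlear filters the library actually provides; the Q value and gain normalization are assumptions for the sketch.

```python
import numpy as np

def resonator_bank_coeffs(center_freqs, sr, q=8.0):
    """Per-channel second-order resonator coefficients (constant-Q bandpass bank)."""
    w = 2.0 * np.pi * center_freqs / sr
    r = np.exp(-w / (2.0 * q))        # pole radius sets the bandwidth (~f/Q)
    a1 = -2.0 * r * np.cos(w)
    a2 = r ** 2
    b0 = 1.0 - r                      # rough gain normalization
    return b0, a1, a2

def filterbank(x, center_freqs, sr):
    """Run all channels in parallel: loop over samples, vectorize over channels."""
    b0, a1, a2 = resonator_bank_coeffs(np.asarray(center_freqs, float), sr)
    n_ch = len(center_freqs)
    y = np.zeros((len(x), n_ch))
    y1 = np.zeros(n_ch)               # y[n-1] for every channel at once
    y2 = np.zeros(n_ch)               # y[n-2] for every channel at once
    for n, xn in enumerate(x):        # the only interpreted loop is over time
        yn = b0 * xn - a1 * y1 - a2 * y2   # one vectorized update for all channels
        y[n] = yn
        y2, y1 = y1, yn
    return y
```

    With thousands of channels, the per-sample cost is dominated by the vectorized array arithmetic rather than by Python interpretation, which is the point the paper makes.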

  12. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  13. Rectifier Filters

    Directory of Open Access Journals (Sweden)

    Y. A. Bladyko

    2010-01-01

    Full Text Available The paper contains definition of a smoothing factor which is suitable for any rectifier filter. The formulae of complex smoothing factors have been developed for simple and complex passive filters. The paper shows conditions for application of calculation formulae and filters
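
    For reference, the conventional smoothing factor of a single LC section (ripple amplitude into the filter divided by ripple amplitude out) is approximately ω²LC − 1 at the ripple angular frequency ω, and a complex filter built from cascaded sections smooths by the product of the section factors. The paper's exact formulae are not reproduced here; the sketch below assumes this textbook approximation, valid when the load impedance is large compared with the capacitor's reactance.

```python
import math

def lc_smoothing_factor(L, C, f_ripple):
    """Smoothing factor s = ripple_in / ripple_out of one LC section.

    Textbook approximation: s ~ (2*pi*f)^2 * L * C - 1.
    """
    w = 2.0 * math.pi * f_ripple
    return w * w * L * C - 1.0

def cascade_smoothing(sections, f_ripple):
    """A complex (multi-section) filter smooths by the product of its sections."""
    s = 1.0
    for L, C in sections:
        s *= lc_smoothing_factor(L, C, f_ripple)
    return s

# 100 Hz ripple (full-wave rectified 50 Hz mains), L = 1 H, C = 100 uF
s1 = lc_smoothing_factor(1.0, 100e-6, 100.0)
s2 = cascade_smoothing([(1.0, 100e-6)] * 2, 100.0)
```

    Doubling up identical sections squares the smoothing factor, which is why multi-section filters achieve large ripple attenuation with moderate component values.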

  14. The combined effects of forward masking by noise and high click rate on monaural and binaural human auditory nerve and brainstem potentials.

    Science.gov (United States)

    Pratt, Hillel; Polyakov, Andrey; Bleich, Naomi; Mittelman, Naomi

    2004-07-01

    To study effects of forward masking and rapid stimulation on human monaurally- and binaurally-evoked brainstem potentials and suggest their relation to synaptic fatigue and recovery and to neuronal action potential refractoriness. Auditory brainstem evoked potentials (ABEPs) were recorded from 12 normally- and symmetrically hearing adults, in response to each click (50 dB nHL, condensation and rarefaction) in a train of nine, with an inter-click interval of 11 ms, that followed a white noise burst of 100 ms duration (50 dB nHL). Sequences of white noise and click train were repeated at a rate of 2.89/s. The interval between noise and first click in the train was 2, 11, 22, 44, 66 or 88 ms in different runs. ABEPs were averaged (8000 repetitions) using a dwell time of 25 μs/address/channel. The binaural interaction components (BICs) of ABEPs were derived and the single, centrally located equivalent dipoles of ABEP waves I and V and of the BIC major wave were estimated. The latencies of dipoles I and V of ABEP, their inter-dipole interval and the dipole magnitude of component V were significantly affected by the interval between noise and clicks and by the serial position of the click in the train. The latency and dipole magnitude of the major BIC component were significantly affected by the interval between noise and clicks. Interval from noise and the click's serial position in the train interacted to affect dipole V latency, dipole V magnitude, BIC latencies and the V-I inter-dipole latency difference. Most of the effects were fully apparent by the first few clicks in the train, and the trend (increase or decrease) was affected by the interval between noise and clicks. The changes in latency and magnitude of ABEP and BIC components with advancing position in the click train and the interactions of click position in the train with the intervals from noise indicate an interaction of fatigue and recovery, compatible with synaptic depletion and replenishing.

  15. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  16. Noise-invariant Neurons in the Avian Auditory Cortex: Hearing the Song in Noise

    Science.gov (United States)

    Moore, R. Channing; Lee, Tyler; Theunissen, Frédéric E.

    2013-01-01

    Given the extraordinary ability of humans and animals to recognize communication signals over a background of noise, describing noise invariant neural responses is critical not only to pinpoint the brain regions that are mediating our robust perceptions but also to understand the neural computations that are performing these tasks and the underlying circuitry. Although invariant neural responses, such as rotation-invariant face cells, are well described in the visual system, high-level auditory neurons that can represent the same behaviorally relevant signal in a range of listening conditions have yet to be discovered. Here we found neurons in a secondary area of the avian auditory cortex that exhibit noise-invariant responses in the sense that they responded with similar spike patterns to song stimuli presented in silence and over a background of naturalistic noise. By characterizing the neurons' tuning in terms of their responses to modulations in the temporal and spectral envelope of the sound, we then show that noise invariance is partly achieved by selectively responding to long sounds with sharp spectral structure. Finally, to demonstrate that such computations could explain noise invariance, we designed a biologically inspired noise-filtering algorithm that can be used to separate song or speech from noise. This novel noise-filtering method performs as well as other state-of-the-art de-noising algorithms and could be used in clinical or consumer oriented applications. Our biologically inspired model also shows how high-level noise-invariant responses could be created from neural responses typically found in primary auditory cortex. PMID:23505354
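
    A minimal sketch of the kind of noise filtering this finding suggests: "long sounds with sharp spectral structure" correspond to low temporal-modulation, high spectral-modulation energy, which can be selected in the 2D modulation spectrum of a log-spectrogram. This is not the authors' algorithm; the cutoff values and attenuation factor below are arbitrary assumptions for illustration.

```python
import numpy as np

def modulation_mask(log_spec, t_cut=0.25, f_cut=0.05):
    """Filter a log-spectrogram in the 2D modulation domain.

    Keeps slow temporal modulations (long sounds) and attenuates spectrally
    smooth energy (broadband noise) -- a crude proxy for selecting
    'long sounds with sharp spectral structure'.

    log_spec : (freq_bins, time_frames) log-magnitude spectrogram
    t_cut    : temporal-modulation low-pass cutoff (cycles/frame)
    f_cut    : spectral-modulation cutoff below which energy is attenuated
    """
    F, T = log_spec.shape
    M = np.fft.fft2(log_spec)               # 2D modulation spectrum
    keep = np.ones((F, T))
    tm = np.abs(np.fft.fftfreq(T))          # temporal-modulation frequencies
    keep[:, tm > t_cut] = 0.0               # drop fast temporal modulations
    fm = np.abs(np.fft.fftfreq(F))          # spectral-modulation frequencies
    keep[fm < f_cut, :] *= 0.2              # damp spectrally smooth (noise-like) energy
    return np.real(np.fft.ifft2(M * keep))
```

    Exponentiating the filtered log-spectrogram and inverting it (e.g. with Griffin-Lim) would give a de-noised waveform; that step is omitted here.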

  19. [A comparison of time resolution among auditory, tactile and promontory electrical stimulation--superiority of cochlear implants as human communication aids].

    Science.gov (United States)

    Matsushima, J; Kumagai, M; Harada, C; Takahashi, K; Inuyama, Y; Ifukube, T

    1992-09-01

    Our previous reports showed that second formant information, using a speech coding method, could be transmitted through an electrode on the promontory. However, second formant information can also be transmitted by tactile stimulation. Therefore, to find out whether electrical stimulation of the auditory nerve would be superior to tactile stimulation for our speech coding method, the time resolutions of the two modes of stimulation were compared. The results showed that the time resolution of electrical promontory stimulation was three times better than the time resolution of tactile stimulation of the finger. This indicates that electrical stimulation of the auditory nerve is much better for our speech coding method than tactile stimulation of the finger.

  20. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

    We investigated the existence of a cross-modal sensory gating reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), compared with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  1. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  2. Auditory cortex involvement in emotional learning and memory.

    Science.gov (United States)

    Grosso, A; Cambiaghi, M; Concina, G; Sacco, T; Sacchetti, B

    2015-07-23

    Emotional memories represent the core of human and animal life and drive future choices and behaviors. Early research involving brain lesion studies in animals led to the idea that the auditory cortex participates in emotional learning by processing the sensory features of auditory stimuli paired with emotional consequences and by transmitting this information to the amygdala. Nevertheless, electrophysiological and imaging studies revealed that, following emotional experiences, the auditory cortex undergoes learning-induced changes that are highly specific, associative and long lasting. These studies suggested that the role played by the auditory cortex goes beyond stimulus elaboration and transmission. Here, we discuss three major perspectives created by these data. In particular, we analyze the possible roles of the auditory cortex in emotional learning, we examine the recruitment of the auditory cortex during early and late memory trace encoding, and finally we consider the functional interplay between the auditory cortex and subcortical nuclei, such as the amygdala, that process affective information. We conclude that, starting from the early phase of memory encoding, the auditory cortex has a more prominent role in emotional learning, through its connections with subcortical nuclei, than is typically acknowledged. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  3. Visual-induced expectations modulate auditory cortical responses

    Directory of Open Access Journals (Sweden)

    Virginie van Wassenhove

    2015-02-01

    Full Text Available Active sensing has important consequences for multisensory processing (Schroeder et al., 2010). Here, we asked whether, in the absence of saccades, the position of the eyes and the timing of transient colour changes of visual stimuli could selectively affect the excitability of auditory cortex by predicting the where and the when of a sound, respectively. Human participants were recorded with magnetoencephalography (MEG) while maintaining the position of their eyes on the left, right, or centre of the screen. Participants counted colour changes of the fixation cross while neglecting sounds, which could be presented to the left, right or both ears. First, clear alpha power increases were observed in auditory cortices, consistent with participants' attention being directed to visual inputs. Second, colour changes elicited robust modulations of auditory cortex responses (a "when" prediction), seen as ramping activity, early alpha phase-locked responses, and enhanced high-gamma band responses in the side contralateral to sound presentation. Third, no modulations of auditory evoked or oscillatory activity were found to be specific to eye position. Altogether, our results suggest that visual transience can automatically elicit a prediction of when a sound will occur by changing the excitability of auditory cortices irrespective of the attended modality, eye position or spatial congruency of auditory and visual events. By contrast, auditory cortical responses were not significantly affected by eye position, suggesting that "where" predictions may require active sensing or saccadic reset to modulate auditory cortex responses, notably in the absence of spatial orientation to sounds.

  4. Plasticity in the Primary Auditory Cortex, Not What You Think it is: Implications for Basic and Clinical Auditory Neuroscience

    Science.gov (United States)

    Weinberger, Norman M.

    2013-01-01

    Standard beliefs that the function of the primary auditory cortex (A1) is the analysis of sound have proven to be incorrect. Its involvement in learning, memory and other complex processes in both animals and humans is now well-established, although often not appreciated. Auditory coding is strongly modified by associative learning, evident as associative representational plasticity (ARP), in which the representation of an acoustic dimension, like frequency, is re-organized to emphasize a sound that has become behaviorally important. For example, the frequency tuning of a cortical neuron can be shifted to match that of a significant sound, and the representational area of sounds that acquire behavioral importance can be increased. ARP depends on the learning strategy used to solve an auditory problem, and the increased cortical area confers greater strength of auditory memory. Thus, primary auditory cortex is involved in cognitive processes, transcending its assumed function of auditory stimulus analysis. The implications for basic neuroscience and clinical auditory neuroscience are presented and suggestions for remediation of auditory processing disorders are introduced. PMID:25356375

  5. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  6. Application of Savitzky-Golay differentiation filters and Fourier functions to simultaneous determination of cefepime and the co-administered drug, levofloxacin, in spiked human plasma

    Science.gov (United States)

    Abdel-Aziz, Omar; Abdel-Ghany, Maha F.; Nagi, Reham; Abdel-Fattah, Laila

    2015-03-01

    The present work is concerned with simultaneous determination of cefepime (CEF) and the co-administered drug, levofloxacin (LEV), in spiked human plasma by applying a new approach, Savitzky-Golay differentiation filters, and combined trigonometric Fourier functions to their ratio spectra. The different parameters associated with the calculation of Savitzky-Golay and Fourier coefficients were optimized. The proposed methods were validated and applied for determination of the two drugs in laboratory prepared mixtures and spiked human plasma. The results were statistically compared with reported HPLC methods and were found accurate and precise.
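
    The ratio-spectra idea behind such methods: dividing the mixture spectrum by the spectrum of the interfering component turns that component's contribution into a constant, which a Savitzky-Golay differentiation filter then removes. The sketch below uses synthetic Gaussian bands as stand-ins for the CEF and LEV spectra; the wavelength grid, band shapes, window length, and polynomial order are illustrative assumptions, not the paper's optimized values.

```python
import numpy as np
from scipy.signal import savgol_filter

wl = np.linspace(240.0, 320.0, 401)   # wavelength grid in nm (synthetic)

def band(center, width, amp=1.0):
    """Gaussian absorption band as a stand-in for a real UV spectrum."""
    return amp * np.exp(-0.5 * ((wl - center) / width) ** 2)

eps_cef = band(258.0, 12.0)   # hypothetical unit spectrum of cefepime
eps_lev = band(288.0, 10.0)   # hypothetical unit spectrum of levofloxacin

def ratio_derivative(mixture, divisor, window=15, poly=3):
    """Divide by the interfering component's spectrum, then SG-differentiate."""
    ratio = mixture / np.maximum(divisor, 1e-12)
    return savgol_filter(ratio, window_length=window, polyorder=poly,
                         deriv=1, delta=wl[1] - wl[0])

# Mixture of 2 parts CEF + 3 parts LEV: after division by eps_lev the LEV
# term becomes the constant 3, so its derivative vanishes and the result
# depends only on the CEF contribution.
mix = 2.0 * eps_cef + 3.0 * eps_lev
d_mix = ratio_derivative(mix, eps_lev)
d_cef = ratio_derivative(2.0 * eps_cef, eps_lev)
```

    Calibration then reads the analyte concentration from the amplitude of the ratio-derivative at a chosen wavelength.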

  8. Assay of hybrid ribonuclease using a membrane filter-immobilized synthetic hybrid: application to the human leukemic cell

    International Nuclear Information System (INIS)

    Papaphilis, A.D.; Kamper, E.F.

    1985-01-01

    A method for assaying hybrid ribonuclease has been devised which utilizes as substrate the synthetic hybrid [³H]polyriboadenylic acid [poly(rA)]:polydeoxythymidylic acid [poly(dT)] immobilized on the solid matrix of nitrocellulose filters. The hybridization on filter of [³H]poly(rA) to poly(dT) has been explored in terms of efficacy of the process and the response of the product to RNase H. A pulse of UV irradiation of poly(dT) while in the dry state on the filter increased its firm binding to the filter in a concentration-dependent manner, resulting in a concomitant increase in the yield of hybrid formation. The filter-immobilized hybrid was 95% resistant to RNase A but sensitive to RNase H. When stored in toluene in the cold, the hybrid maintained its stability for over 6 months, as judged by its resistance to RNase A. The method offers a number of advantages over assays that use solution hybrids as substrates and was readily applicable in the screening of leukemic patients, in the leukocytes of whom it has demonstrated increased RNase H levels.

  9. Filter apparatus

    International Nuclear Information System (INIS)

    Butterworth, D.J.

    1980-01-01

    This invention relates to liquid filters, precoated with replaceable powders, which are used in the production of the ultra-pure water required for steam generation of electricity. The filter elements can be installed and removed by remote control so that they can be used in nuclear power reactors. (UK)

  10. Subcortical pathways: Towards a better understanding of auditory disorders.

    Science.gov (United States)

    Felix, Richard A; Gourévitch, Boris; Portfors, Christine V

    2018-05-01

    Hearing loss is a significant problem that affects at least 15% of the population. This percentage, however, is likely significantly higher because of a variety of auditory disorders that are not identifiable through traditional tests of peripheral hearing ability. In these disorders, individuals have difficulty understanding speech, particularly in noisy environments, even though the sounds are loud enough to hear. The underlying mechanisms leading to such deficits are not well understood. To enable the development of suitable treatments to alleviate or prevent such disorders, the affected processing pathways must be identified. Historically, mechanisms underlying speech processing have been thought to be a property of the auditory cortex and thus the study of auditory disorders has largely focused on cortical impairments and/or cognitive processes. As we review here, however, there is strong evidence to suggest that, in fact, deficits in subcortical pathways play a significant role in auditory disorders. In this review, we highlight the role of the auditory brainstem and midbrain in processing complex sounds and discuss how deficits in these regions may contribute to auditory dysfunction. We discuss current research with animal models of human hearing and then consider human studies that implicate impairments in subcortical processing that may contribute to auditory disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Multiple time scales of adaptation in auditory cortex neurons.

    Science.gov (United States)

    Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel

    2004-11-17

    Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
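
    The model with "linear dependence on both short-term and long-term stimulus history" can be sketched as a response suppressed by exponentially decaying history traces, one per time constant. The time constants and weights below are arbitrary assumptions for illustration, not values fitted to the A1 data.

```python
import numpy as np

def adaptation_model(stim, taus, weights, gain=1.0):
    """Response with linear dependence on multi-timescale stimulus history.

    stim    : array of 0/1, 1 where the neuron's preferred tone is presented
    taus    : adaptation time constants, in trials (e.g. short- and long-term)
    weights : suppression weight of each history trace
    """
    taus = np.asarray(taus, float)
    weights = np.asarray(weights, float)
    decay = np.exp(-1.0 / taus)
    traces = np.zeros(len(taus))        # one exponential history trace per timescale
    resp = np.zeros(len(stim))
    for t, s in enumerate(stim):
        if s:
            # response is suppressed linearly by the accumulated stimulus history
            resp[t] = gain * max(1.0 - float(weights @ traces), 0.0)
        traces = traces * decay + s     # update the history after responding
    return resp

# common tone on every trial vs. rare tone on every 10th trial
common = np.ones(200)
rare = np.zeros(200)
rare[::10] = 1.0
r_common = adaptation_model(common, taus=[2.0, 40.0], weights=[0.2, 0.005])
r_rare = adaptation_model(rare, taus=[2.0, 40.0], weights=[0.2, 0.005])
```

    As in the recordings, responses to the common stimulus adapt much more strongly than responses to the rare one, because the short and long traces both stay charged between frequent presentations.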

  12. Novel Hypothesis to Explain Why SGLT2 Inhibitors Inhibit Only 30–50% of Filtered Glucose Load in Humans

    Science.gov (United States)

    Abdul-Ghani, Muhammad A.; DeFronzo, Ralph A.; Norton, Luke

    2013-01-01

    Inhibitors of sodium-glucose cotransporter 2 (SGLT2) are a novel class of antidiabetes drugs, and members of this class are under various stages of clinical development for the management of type 2 diabetes mellitus (T2DM). It is widely accepted that SGLT2 is responsible for >80% of the reabsorption of the renal filtered glucose load. However, maximal doses of SGLT2 inhibitors fail to inhibit >50% of the filtered glucose load. Because the clinical efficacy of this group of drugs is entirely dependent on the amount of glucosuria produced, it is important to understand why SGLT2 inhibitors inhibit <50% of the filtered glucose load. In this Perspective, we provide a novel hypothesis that explains this apparent puzzle and discuss some of the clinical implications inherent in this hypothesis. PMID:24065789

  14. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  15. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  16. Different DNA damage response of cis and trans isomers of commonly used UV filter after the exposure on adult human liver stem cells and human lymphoblastoid cells.

    Science.gov (United States)

    Sharma, Anežka; Bányiová, Katarína; Babica, Pavel; El Yamani, Naouale; Collins, Andrew Richard; Čupr, Pavel

    2017-09-01

    2-ethylhexyl 4-methoxycinnamate (EHMC), used in many categories of personal care products (PCPs), is one of the most discussed ultraviolet filters because of its endocrine-disrupting effects. EHMC is unstable in sunlight and can be transformed from trans-EHMC to the emergent cis-EHMC. Toxicological studies have focused only on trans-EHMC; thus, toxicological data for cis-EHMC are missing. In this study, the in vitro genotoxic effects of trans- and cis-EHMC on adult human liver stem cells (HL1-hT1) and human-derived lymphoblastoid cells (TK-6) were studied using a high-throughput comet assay. TK-6 cells treated with cis-EHMC showed a high level of DNA damage compared to untreated cells at concentrations of 1.56 to 25 μg/mL. trans-EHMC showed genotoxicity after exposure to the two highest concentrations, 12.5 and 25 μg/mL. The increase in DNA damage in HL1-hT1 cells induced by cis-EHMC and trans-EHMC was detected at a concentration of 25 μg/mL. The no observed adverse effect level (NOAEL, mg/kg-bw/day) was determined using a quantitative in vitro to in vivo extrapolation (QIVIVE) approach: NOAEL(trans-EHMC) = 3.07 and NOAEL(cis-EHMC) = 0.30 for TK-6, and NOAEL(trans-EHMC) = 26.46 and NOAEL(cis-EHMC) = 20.36 for HL1-hT1. The hazard index (HI) was evaluated by comparing the reference dose (RfD, mg/kg-bw/day) obtained from our experimental data with the chronic daily intake (CDI) of the female population. Using comet assay experimental data from the more sensitive TK-6 cells, HI(cis-EHMC) was 7 times higher than HI(trans-EHMC). In terms of CDI, the relative contributions were: dermal exposure route > oral > inhalation. According to our results we recommend RfD(trans-EHMC) = 0.20 and RfD(cis-EHMC) = 0.02 for use in human health risk assessment. The significant difference between the trans-EHMC and cis-EHMC responses points to the need for toxicological reevaluation and application reassessment of both isomers in PCPs. Copyright © 2017 Elsevier B.V. All rights reserved.
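    The hazard-index comparison above is just a ratio of intake to reference dose. A minimal sketch, using the RfD values recommended in the abstract but a purely hypothetical intake value (the study's actual CDI estimates for the female population are not reproduced here):

```python
def hazard_index(cdi, rfd):
    """Hazard index: ratio of chronic daily intake (CDI) to the reference
    dose (RfD), both in mg/kg-bw/day. HI > 1 flags a potential risk."""
    return cdi / rfd

# RfD values recommended in the abstract (mg/kg-bw/day):
RFD = {"trans-EHMC": 0.20, "cis-EHMC": 0.02}

# Hypothetical CDI for illustration only (not a value from the study):
cdi = 0.01
hi = {isomer: hazard_index(cdi, rfd) for isomer, rfd in RFD.items()}

# At equal intake, the tenfold lower cis RfD gives a tenfold higher HI:
print(round(hi["cis-EHMC"] / hi["trans-EHMC"], 6))  # → 10.0
```

    The study's 7-fold (rather than 10-fold) ratio reflects that the estimated CDI also differs between the two isomers.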

  17. Evolutionary conservation and neuronal mechanisms of auditory perceptual restoration.

    Science.gov (United States)

    Petkov, Christopher I; Sutter, Mitchell L

    2011-01-01

    Auditory perceptual 'restoration' occurs when the auditory system restores an occluded or masked sound of interest. Behavioral work on auditory restoration in humans began over 50 years ago using it to model a noisy environmental scene with competing sounds. It has become clear that not only humans experience auditory restoration: restoration has been broadly conserved in many species. Behavioral studies in humans and animals provide a necessary foundation to link the insights being obtained from human EEG and fMRI to those from animal neurophysiology. The aggregate of data resulting from multiple approaches across species has begun to clarify the neuronal bases of auditory restoration. Different types of neural responses supporting restoration have been found, supportive of multiple mechanisms working within a species. Yet a general principle has emerged that responses correlated with restoration mimic the response that would have been given to the uninterrupted sound of interest. Using the same technology to study different species will help us to better harness animal models of 'auditory scene analysis' to clarify the conserved neural mechanisms shaping the perceptual organization of sound and to advance strategies to improve hearing in natural environmental settings. © 2010 Elsevier B.V. All rights reserved.

  18. Skin absorption and human exposure estimation of three widely discussed UV filters in sunscreens--In vitro study mimicking real-life consumer habits.

    Science.gov (United States)

    Klimová, Z; Hojerová, J; Beránková, M

    2015-09-01

    Due to health concerns about safety, three UV-filters (Benzophenone-3, BP3, 10%; Ethylhexyl Methoxycinnamate, EHMC, 10%; Butyl Methoxydibenzoylmethane, BMDBM, 5%) were examined in vitro for absorption on full-thickness pig-ear skin, mimicking human in-use conditions. Kinetic profiles confirmed the rapid permeation of BP3; after the first hour of skin (frozen-stored) exposure to 2 mg/cm² (W/O sunscreen; recommended but unrealistic amount), about 0.5% of the applied dose passed into the receptor fluid. The absorption rate of filters was higher from W/O than from O/W emulsions. The fresh/frozen-stored skin permeability coefficient (0.83-0.54) for each UV filter was taken into account. Systemic Exposure Dosages of BP3, EHMC and BMDBM for humans as a consequence of (i) whole-body and (ii) face treatment with 0.5 mg/cm² of W/O sunscreen for 6-h skin exposure followed by washing and subsequent 18-h permeation (a realistic scenario) were estimated to be (i) 4744, 1032 and 1036 μg/kg-bw/day, and (ii) 153, 33 and 34 μg/kg-bw/day, respectively. Of the Margins of Safety for BP3, EHMC and BMDBM, (i) 42, 485 and 192 and (ii) 1307, 15,151 and 5882, respectively, only the value of 42 (<100) for BP3 indicated a possible health risk. Escalation of a phobia towards all organic UV filters is undesirable. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1 include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  20. Primate Auditory Recognition Memory Performance Varies With Sound Type

    OpenAIRE

    Chi-Wing, Ng; Bethany, Plakke; Amy, Poremba

    2009-01-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g. social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition, and/or memory. The present study employs a de...

  1. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors 0270-6474/14/3415135-04$15.00/0.

  2. Auditory memory can be object based.

    Science.gov (United States)

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  3. External auditory canal leech: a rare case report of paediatric ...

    African Journals Online (AJOL)

    Leeches are blood-sucking organisms that feed on human blood. While bites on humans are common, leeches rarely cause internal infestation. We describe a rare case of a parasitic leech infestation of the External Auditory Canal (EAC). A two-month-old child presented to the Emergency department with a seven-day history of ...

  4. Filter systems

    International Nuclear Information System (INIS)

    Vanin, V.R.

    1990-01-01

    The multidetector systems for high resolution gamma spectroscopy are presented. The observable parameters for identifying nuclides produced simultaneously in the reaction are analysed, and the efficiency of filter systems is discussed. (M.C.K.)

  5. A large scale hearing loss screen reveals an extensive unexplored genetic landscape for auditory dysfunction

    DEFF Research Database (Denmark)

    Bowl, Michael R.; Simon, Michelle M.; Ingham, Neil J.

    2017-01-01

    The developmental and physiological complexity of the auditory system is likely reflected in the underlying set of genes involved in auditory function. In humans, over 150 non-syndromic loci have been identified, and there are more than 400 human genetic syndromes with a hearing loss component. O...

  6. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  7. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    Science.gov (United States)

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex. © 2010 The Authors. European Journal of Neuroscience © 2010 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  8. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF, intensity discrimination, spectrum discrimination (DLS, and time discrimination (DLT. Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which miniscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels, and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels, were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  9. Characteristics of spectro-temporal modulation frequency selectivity in humans.

    Science.gov (United States)

    Oetjen, Arne; Verhey, Jesko L

    2017-03-01

    There is increasing evidence that the auditory system shows frequency selectivity for spectro-temporal modulations. A recent study by the authors showed spectro-temporal modulation masking patterns that were in agreement with the hypothesis of spectro-temporal modulation filters in the human auditory system [Oetjen and Verhey (2015). J. Acoust. Soc. Am. 137(2), 714-723]. In the present study, those experimental data, together with additional data, were used to model this spectro-temporal frequency selectivity. The additional data were collected to investigate to what extent the spectro-temporal modulation-frequency selectivity results from a combination of a purely temporal amplitude-modulation filter and a purely spectral amplitude-modulation filter. In contrast to the previous study, thresholds were measured for masker and target modulations with opposite directions, i.e., an upward pointing target modulation and a downward pointing masker modulation. The comparison of this data set with previous corresponding data, in which target and masker modulations had the same direction, indicates that a specific spectro-temporal modulation filter is required to simulate all aspects of spectro-temporal modulation frequency selectivity. A model using a modified Gabor filter together with a purely temporal and a purely spectral filter predicts the spectro-temporal modulation masking data.
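    The directional ("upward" vs. "downward") modulation selectivity discussed above can be illustrated with a two-dimensional Gabor function over time and log-frequency. This is a generic textbook form under assumed parameter values, not the authors' fitted "modified Gabor" model:

```python
import numpy as np

def st_gabor(temporal_mod_hz, spectral_mod_cyc_oct, direction=+1,
             sigma_t=0.1, sigma_f=0.5, n_t=64, n_f=64):
    """Spectro-temporal Gabor filter: a Gaussian envelope times a moving
    spectro-temporal ripple. direction=+1 gives an upward-moving ripple,
    -1 a downward-moving one; axes are seconds and octaves (illustrative
    grids and bandwidths)."""
    t = np.linspace(-0.25, 0.25, n_t)   # time, s
    f = np.linspace(-2.0, 2.0, n_f)     # log-frequency, octaves
    T, F = np.meshgrid(t, f)            # shape (n_f, n_t)
    envelope = np.exp(-T**2 / (2 * sigma_t**2) - F**2 / (2 * sigma_f**2))
    carrier = np.cos(2 * np.pi * (temporal_mod_hz * T
                                  + direction * spectral_mod_cyc_oct * F))
    return envelope * carrier

up = st_gabor(8.0, 1.0, direction=+1)
down = st_gabor(8.0, 1.0, direction=-1)
# The two directions are mirror images of each other along the time axis:
print(np.allclose(up, down[:, ::-1]))  # → True
```

    A purely temporal filter crossed with a purely spectral one is symmetric in direction; only a genuinely spectro-temporal (oriented) filter like this distinguishes upward from downward modulations, which is the contrast the masking experiment tests.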

  10. Subthalamic nucleus deep brain stimulation affects distractor interference in auditory working memory.

    Science.gov (United States)

    Camalier, Corrie R; Wang, Alice Y; McIntosh, Lindsey G; Park, Sohee; Neimat, Joseph S

    2017-03-01

    Computational and theoretical accounts hypothesize the basal ganglia play a supramodal "gating" role in the maintenance of working memory representations, especially in preservation from distractor interference. There are currently two major limitations to this account. The first is that supporting experiments have focused exclusively on the visuospatial domain, leaving questions as to whether such "gating" is domain-specific. The second is that current evidence relies on correlational measures, as it is extremely difficult to causally and reversibly manipulate subcortical structures in humans. To address these shortcomings, we examined non-spatial, auditory working memory performance during reversible modulation of the basal ganglia, an approach afforded by deep brain stimulation of the subthalamic nucleus. We found that subthalamic nucleus stimulation impaired auditory working memory performance, specifically in the group tested in the presence of distractors, even though the distractors were predictable and completely irrelevant to the encoding of the task stimuli. This study provides key causal evidence that the basal ganglia act as a supramodal filter in working memory processes, further adding to our growing understanding of their role in cognition. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  12. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  13. Auditory changes in acromegaly.

    Science.gov (United States)

    Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E

    2017-06-01

    The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group underwent only otological examination and PTA. Additionally, previously performed paranasal sinus computed tomography scans of all cases with acromegaly and control subjects were obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher in the acromegaly group, and the IAC was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had a positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear; 7 (54%) had sensorineural type and 6 (46%) had conductive type hearing loss. Acromegaly may cause certain changes in the auditory system. These changes may be multifactorial, causing both conductive and sensorineural defects.

  14. Generalised Filtering

    Directory of Open Access Journals (Sweden)

    Karl Friston

    2010-01-01

    Full Text Available We describe a Bayesian filtering scheme for nonlinear state-space models in continuous time. This scheme is called Generalised Filtering and furnishes posterior (conditional) densities on hidden states and unknown parameters generating observed data. Crucially, the scheme operates online, assimilating data to optimize the conditional density on time-varying states and time-invariant parameters. In contrast to Kalman and Particle smoothing, Generalised Filtering does not require a backwards pass. In contrast to variational schemes, it does not assume conditional independence between the states and parameters. Generalised Filtering optimises the conditional density with respect to a free-energy bound on the model's log-evidence. This optimisation uses the generalised motion of hidden states and parameters, under the prior assumption that the motion of the parameters is small. We describe the scheme, present comparative evaluations with a fixed-form variational version, and conclude with an illustrative application to a nonlinear state-space model of brain imaging time-series.

  15. Filter This

    Directory of Open Access Journals (Sweden)

    Audrey Barbakoff

    2011-03-01

    Full Text Available In the Library with the Lead Pipe welcomes Audrey Barbakoff, a librarian at the Milwaukee Public Library, and Ahniwa Ferrari, Virtual Experience Manager at the Pierce County Library System in Washington, for a point-counterpoint piece on filtering in libraries. The opinions expressed here are those of the authors, and are not endorsed by their employers. [...

  16. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  17. The importance of individual frequencies of endogenous brain oscillations for auditory cognition - A short review.

    Science.gov (United States)

    Baltus, Alina; Herrmann, Christoph Siegfried

    2016-06-01

    Oscillatory EEG activity in the human brain with frequencies in the gamma range (approx. 30-80Hz) is known to be relevant for a large number of cognitive processes. Interestingly, each subject reveals an individual frequency of the auditory gamma-band response (GBR) that coincides with the peak in the auditory steady state response (ASSR). A common resonance frequency of auditory cortex seems to underlie both the individual frequency of the GBR and the peak of the ASSR. This review sheds light on the functional role of oscillatory gamma activity for auditory processing. For successful processing, the auditory system has to track changes in auditory input over time and store information about past events in memory which allows the construction of auditory objects. Recent findings support the idea of gamma oscillations being involved in the partitioning of auditory input into discrete samples to facilitate higher order processing. We review experiments that seem to suggest that inter-individual differences in the resonance frequency are behaviorally relevant for gap detection and speech processing. A possible application of these resonance frequencies for brain computer interfaces is illustrated with regard to optimized individual presentation rates for auditory input to correspond with endogenous oscillatory activity. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  19. Multichannel Spatial Auditory Display for Speech Communications

    Science.gov (United States)

    Begault, Durand R.; Erbe, Tom

    1994-01-01

    A spatial auditory display for multiple speech communications was developed at NASA/Ames Research Center. Input is spatialized by the use of simplified head-related transfer functions, adapted for FIR filtering on Motorola 56001 digital signal processors. Hardware and firmware design implementations are overviewed for the initial prototype developed for NASA-Kennedy Space Center. An adaptive staircase method was used to determine intelligibility levels of four-letter call signs used by launch personnel at NASA against diotic speech babble. Spatial positions at 30 degree azimuth increments were evaluated. The results from eight subjects showed a maximum intelligibility improvement of about 6-7 dB when the signal was spatialized to 60 or 90 degree azimuth positions.
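The core signal path described above (FIR filtering of a monaural input with a left/right pair of head-related impulse responses) can be sketched as follows. This is a minimal illustration, not the Motorola 56001 firmware; the HRIR values and function names are invented:

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with a left/right head-related impulse
    response (HRIR) pair, yielding a binaural (2 x N) signal whose
    apparent direction is set by the HRIR measurement azimuth."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: a unit click filtered by two short, made-up HRIRs.
click = np.zeros(8)
click[0] = 1.0
hrir_l = np.array([1.0, 0.5, 0.25])   # hypothetical left-ear response
hrir_r = np.array([0.6, 0.3, 0.15])   # hypothetical right-ear response
binaural = spatialize(click, hrir_l, hrir_r)
```

A real display of the kind described would use HRIRs measured (or simplified) at each 30-degree azimuth increment and run the convolutions in real time on the DSP.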

  20. Reconstruction and analysis of transcription factor-miRNA co-regulatory feed-forward loops in human cancers using filter-wrapper feature selection.

    Directory of Open Access Journals (Sweden)

    Chen Peng

    Full Text Available BACKGROUND: As one of the most common types of co-regulatory motifs, feed-forward loops (FFLs) control many cell functions and play an important role in human cancers. Therefore, it is crucial to reconstruct and analyze cancer-related FFLs that are controlled by transcription factor (TF) and microRNA (miRNA) simultaneously, in order to find out how miRNAs and TFs cooperate with each other in cancer cells and how they contribute to carcinogenesis. Current FFL studies rely on predicted regulation information and therefore suffer from the false-positive issue in prediction results. More critically, FFLs generated by existing approaches cannot represent the dynamic and conditional regulation relationship under different experimental conditions. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we proposed a novel filter-wrapper feature selection method to accurately identify co-regulatory mechanism by incorporating prior information from predicted regulatory interactions with parallel miRNA/mRNA expression datasets. By applying this method, we reconstructed 208 and 110 TF-miRNA co-regulatory FFLs from human pan-cancer and prostate datasets, respectively. Further analysis of these cancer-related FFLs showed that the top-ranking TF STAT3 and miRNA hsa-let-7e are key regulators implicated in human cancers, which have regulated targets significantly enriched in cellular process regulations and signaling pathways that are involved in carcinogenesis. CONCLUSIONS/SIGNIFICANCE: In this study, we introduced an efficient computational approach to reconstruct co-regulatory FFLs by accurately identifying gene co-regulatory interactions. The strength of the proposed feature selection method lies in the fact that it can precisely filter out false positives in predicted regulatory interactions by quantitatively modeling the complex co-regulation of target genes mediated by TFs and miRNAs simultaneously.
Moreover, the proposed feature selection method can be generally applied to

  1. Temporal Sequence of Visuo-Auditory Interaction in Multiple Areas of the Guinea Pig Visual Cortex

    Science.gov (United States)

    Nishimura, Masataka; Song, Wen-Jie

    2012-01-01

    Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1, either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activities were recorded using optical imaging at a high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation, and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulations resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction. PMID:23029483

  2. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Abstract Background In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by masker and amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsi-lateral and contra-lateral masking by means of magnetoencephalography (MEG). Results We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsi- and contra-lateral masking. Conclusion The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by masker and test stimulus in peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsi- and contra-lateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  3. Primate auditory recognition memory performance varies with sound type.

    Science.gov (United States)

    Ng, Chi-Wing; Plakke, Bethany; Poremba, Amy

    2009-10-01

    Neural correlates of auditory processing, including for species-specific vocalizations that convey biological and ethological significance (e.g., social status, kinship, environment), have been identified in a wide variety of areas including the temporal and frontal cortices. However, few studies elucidate how non-human primates interact with these vocalization signals when they are challenged by tasks requiring auditory discrimination, recognition and/or memory. The present study employs a delayed matching-to-sample task with auditory stimuli to examine auditory memory performance of rhesus macaques (Macaca mulatta), wherein two sounds are determined to be the same or different. Rhesus macaques seem to have relatively poor short-term memory with auditory stimuli, and we examine if particular sound types are more favorable for memory performance. Experiment 1 suggests memory performance with vocalization sound types (particularly monkey vocalizations) is significantly better than with non-vocalization sound types, and male monkeys outperform female monkeys overall. Experiment 2, controlling for number of sound exemplars and presentation pairings across types, replicates Experiment 1, demonstrating better performance or decreased response latencies, depending on trial type, to species-specific monkey vocalizations. The findings cannot be explained by acoustic differences between monkey vocalizations and the other sound types, suggesting that the biological and/or ethological meaning of these sounds is more effective for auditory memory. 2009 Elsevier B.V.

  4. Auditory white noise reduces age-related fluctuations in balance.

    Science.gov (United States)

    Ross, J M; Will, O J; McGann, Z; Balasubramaniam, R

    2016-09-06

    Fall prevention technologies have the potential to improve the lives of older adults. Because of the multisensory nature of human balance control, sensory therapies, including some involving tactile and auditory noise, are being explored that might reduce increased balance variability due to typical age-related sensory declines. Auditory white noise has previously been shown to reduce postural sway variability in healthy young adults. In the present experiment, we examined this treatment in young adults and typically aging older adults. We measured postural sway of healthy young adults and adults over the age of 65 years during silence and auditory white noise, with and without vision. Our results show reduced postural sway variability in young and older adults with auditory noise, even in the absence of vision. We show that vision and noise can reduce sway variability for both feedback-based and exploratory balance processes. In addition, we show changes with auditory noise in nonlinear patterns of sway in older adults that reflect what is more typical of young adults, and these changes did not interfere with the typical random walk behavior of sway. Our results suggest that auditory noise might be valuable for therapeutic and rehabilitative purposes in older adults with typical age-related balance variability. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Authors.

  6. Ultraviolet light induces double-strand breaks in DNA of cultured human P3 cells as measured by neutral filter elution

    International Nuclear Information System (INIS)

    Peak, J.G.; Peak, M.J.

    1990-01-01

    Neutral filter elution at pH 7.2 and 9.6 was used to measure the induction of DNA lesions in human P3 teratocarcinoma cells by monochromatic 254-, 270-, 313-, 334-, 365-, and 405-nm radiation and by 60Co gamma rays. In this assay DNA double-strand breaks (dsb) increase the rate of elution of DNA from cell lysates on a filter. Yields of dsb as measured by this procedure were determined by using a calibration of the assay that correlates elution parameters with number of dsb caused by disintegration of 125I incorporated into the DNA. Analysis of fluence responses obtained by using the calibrated assay indicated that the number of dsb induced per dalton of DNA as measured by this assay is proportional to the square of the fluence at all the energies of radiation studied, implying that the induction of these lesions may be a two-hit event. Analysis of the relative efficiencies for the induction of dsb by ultraviolet radiation, corrected for quantum efficiency, revealed a spectrum that coincided closely with that for the induction of single-strand breaks (ssb) in the same cells, having a close fit with the spectrum of nucleic acid in the UVC and UVB region below 313 nm, and a shoulder in the UVA region. It was calculated, however, that there may be too few ssb for dsb to result from randomly distributed closely opposed ssb. (author)

  7. Diagnostic Accuracy and Feasibility of Serological Tests on Filter Paper Samples for Outbreak Detection of T.b. gambiense Human African Trypanosomiasis

    Science.gov (United States)

    Hasker, Epco; Lutumba, Pascal; Mumba, Dieudonné; Lejon, Veerle; Büscher, Philippe; Kande, Victor; Muyembe, Jean Jacques; Menten, Joris; Robays, Jo; Boelaert, Marleen

    2010-01-01

    Control of human African trypanosomiasis (HAT) in the Democratic Republic of Congo is based on mass population screening by mobile teams; a costly and labor-intensive approach. We hypothesized that blood samples collected on filter paper by village health workers and processed in a central laboratory might be a cost-effective alternative. We estimated sensitivity and specificity of micro-card agglutination test for trypanosomiasis (micro-CATT) and enzyme-linked immunosorbent assay (ELISA)/T.b. gambiense on filter paper samples compared with parasitology-based case classification and used the results in a Monte Carlo simulation of a lot quality assurance sampling (LQAS) approach. Micro-CATT and ELISA/T.b. gambiense showed acceptable sensitivity (92.7% [95% CI 87.4–98.0%] and 82.2% [95% CI 75.3–90.4%]) and very high specificity (99.4% [95% CI 99.0–99.9%] and 99.8% [95% CI 99.5–100%]), respectively. Conditional on high sample size per lot (≥ 60%), both tests could reliably distinguish a 2% from a zero prevalence at village level. Alternatively, these tests could be used to identify individual HAT suspects for subsequent confirmation. PMID:20682885
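The lot-classification logic evaluated in this study can be simulated with a short Monte Carlo sketch. The sensitivity/specificity values follow the micro-CATT figures quoted above, but the lot size, decision threshold, and function name are illustrative assumptions, not taken from the paper:

```python
import random

def prob_lot_flagged(prevalence, n, threshold, sens, spec,
                     trials=20000, seed=1):
    """Monte Carlo estimate of the probability that a village 'lot' of
    n tested individuals yields at least `threshold` positive results,
    given the true disease prevalence and the test's sensitivity and
    specificity. Parameters other than sens/spec are hypothetical."""
    rng = random.Random(seed)
    flagged = 0
    for _ in range(trials):
        positives = 0
        for _ in range(n):
            diseased = rng.random() < prevalence
            p_positive = sens if diseased else 1.0 - spec
            positives += rng.random() < p_positive
        flagged += positives >= threshold
    return flagged / trials

# Can a 2% prevalence village be told apart from a disease-free one?
p_hit = prob_lot_flagged(0.02, n=60, threshold=1, sens=0.927, spec=0.994)
p_fa = prob_lot_flagged(0.00, n=60, threshold=1, sens=0.927, spec=0.994)
```

With a threshold of a single positive test, even a 99.4%-specific test flags many disease-free lots, which is consistent with the abstract's point that reliable discrimination requires a high sample fraction per lot.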

  8. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In humans, it is difficult to track neuronal connectivity in vivo. We investigated the interarea connection in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEP recordings from insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidal modulated white noises in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one whatever the modulation frequency, 2) unidirectional functional connection from the primary to secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be associated to a unidirectional traveling wave but to a constant interaction between these areas that could reflect the large adaptive and plastic capacities of auditory cortex. The role of the IG is discussed.

  9. Central auditory masking by an illusory tone.

    Directory of Open Access Journals (Sweden)

    Christopher J Plack

    Full Text Available Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.

  10. Abnormalities in auditory efferent activities in children with selective mutism.

    Science.gov (United States)

    Muchnik, Chava; Ari-Even Roth, Daphne; Hildesheimer, Minka; Arie, Miri; Bar-Haim, Yair; Henkin, Yael

    2013-01-01

    Two efferent feedback pathways to the auditory periphery may play a role in monitoring self-vocalization: the middle-ear acoustic reflex (MEAR) and the medial olivocochlear bundle (MOCB) reflex. Since most studies regarding the role of auditory efferent activity during self-vocalization were conducted in animals, human data are scarce. The working premise of the current study was that selective mutism (SM), a rare psychiatric disorder characterized by consistent failure to speak in specific social situations despite the ability to speak normally in other situations, may serve as a human model for studying the potential involvement of auditory efferent activity during self-vocalization. For this purpose, auditory efferent function was assessed in a group of 31 children with SM and compared to that of a group of 31 normally developing control children (mean age 8.9 and 8.8 years, respectively). All children exhibited normal hearing thresholds and type A tympanograms. MEAR and MOCB functions were evaluated by means of acoustic reflex thresholds and decay functions and the suppression of transient-evoked otoacoustic emissions, respectively. Auditory afferent function was tested by means of auditory brainstem responses (ABR). Results indicated a significantly higher proportion of children with abnormal MEAR and MOCB function in the SM group (58.6 and 38%, respectively) compared to controls (9.7 and 8%, respectively). The prevalence of abnormal MEAR and/or MOCB function was significantly higher in the SM group (71%) compared to controls (16%). Intact afferent function manifested in normal absolute and interpeak latencies of ABR components in all children. The finding of aberrant efferent auditory function in a large proportion of children with SM provides further support for the notion that MEAR and MOCB may play a significant role in the process of self-vocalization. © 2013 S. Karger AG, Basel.

  11. Bag filters

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, M; Komeda, I; Takizaki, K

    1982-01-01

    Bag filters are widely used throughout the cement industry for recovering raw materials and products and for improving the environment. Their general mechanism, performance and advantages are shown in a classification table, and there are comparisons and explanations. The outer and inner sectional construction of the Shinto ultra-jet collector for pulverized coal is illustrated and there are detailed descriptions of dust cloud prevention, of measures used against possible sources of ignition, of oxygen supply and of other topics. Finally, explanations are given of matters that require careful and comprehensive study when selecting equipment.

  12. Digital filters

    CERN Document Server

    Hamming, Richard W

    1997-01-01

    Digital signals occur in an increasing number of applications: in telephone communications; in radio, television, and stereo sound systems; and in spacecraft transmissions, to name just a few. This introductory text examines digital filtering, the processes of smoothing, predicting, differentiating, integrating, and separating signals, as well as the removal of noise from a signal. The processes bear particular relevance to computer applications, one of the focuses of this book.Readers will find Hamming's analysis accessible and engaging, in recognition of the fact that many people with the s
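The smoothing process the book opens with can be illustrated by the simplest FIR smoother, a moving average; this sketch is illustrative and not taken from Hamming's text:

```python
import numpy as np

def moving_average(x, n=3):
    """Length-n moving-average FIR filter: each output sample is the
    mean of n adjacent inputs, which attenuates high-frequency noise
    while passing slow trends."""
    kernel = np.ones(n) / n
    return np.convolve(x, kernel, mode="valid")

noisy = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.0, 1.3, 0.7])
smooth = moving_average(noisy, n=3)   # shorter by n-1, much flatter
```

The other operations listed in the description (predicting, differentiating, integrating, separating signals) are likewise realized by choosing different FIR or IIR kernels in place of the uniform one used here.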

  13. Age effects and normative data on a Dutch test battery for auditory processing disorders.

    NARCIS (Netherlands)

    Neijenhuis, C.A.M.; Snik, A.F.M.; Priester, G.; Kordenoordt, S. van; Broek, P. van den

    2002-01-01

    A test battery compiled to diagnose auditory processing disorders (APDs) in an adult population was used on a population of 9-16-year-old children. The battery consisted of eight tests (words-in-noise, filtered speech, binaural fusion, dichotic digits, frequency and duration patterns, backward

  14. Relation between derived-band auditory brainstem response latencies and behavioral frequency selectivity

    DEFF Research Database (Denmark)

    Strelcyk, Olaf; Christoforidis, Dimitrios; Dau, Torsten

    2009-01-01

    response times. For the same listeners, auditory-filter bandwidths at 2 kHz were estimated using a behavioral notched-noise masking paradigm. Generally, shorter derived-band latencies were observed for the HI than for the NH listeners. Only at low click sensation levels, prolonged latencies were obtained...

  15. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. The Influence of Presentation Method on Auditory Length Perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...

  17. The influence of presentation method on auditory length perception

    DEFF Research Database (Denmark)

    Kirkwood, Brent Christopher

    2005-01-01

    Humans are capable of hearing the lengths of wooden rods dropped onto hard floors. In an attempt to understand the influence of the stimulus presentation method for testing this kind of everyday listening task, listener performance was compared for three presentation methods in an auditory length...

  18. Auditory, Tactile, and Audiotactile Information Processing Following Visual Deprivation

    Science.gov (United States)

    Occelli, Valeria; Spence, Charles; Zampini, Massimiliano

    2013-01-01

    We highlight the results of those studies that have investigated the plastic reorganization processes that occur within the human brain as a consequence of visual deprivation, as well as how these processes give rise to behaviorally observable changes in the perceptual processing of auditory and tactile information. We review the evidence showing…

  19. Deviance-Related Responses along the Auditory Hierarchy: Combined FFR, MLR and MMN Evidence

    Science.gov (United States)

    Shiga, Tetsuya; Althen, Heike; Cornella, Miriam; Zarnowiec, Katarzyna; Yabe, Hirooki; Escera, Carles

    2015-01-01

    The mismatch negativity (MMN) provides a correlate of automatic auditory discrimination in human auditory cortex that is elicited in response to violation of any acoustic regularity. Recently, deviance-related responses were found at much earlier cortical processing stages as reflected by the middle latency response (MLR) of the auditory evoked potential, and even at the level of the auditory brainstem as reflected by the frequency following response (FFR). However, no study has reported deviance-related responses in the FFR, MLR and long latency response (LLR) concurrently in a single recording protocol. Amplitude-modulated (AM) sounds were presented to healthy human participants in a frequency oddball paradigm to investigate deviance-related responses along the auditory hierarchy in the ranges of FFR, MLR and LLR. AM frequency deviants modulated the FFR, the Na and Nb components of the MLR, and the LLR eliciting the MMN. These findings demonstrate that it is possible to elicit deviance-related responses at three different levels (FFR, MLR and LLR) in one single recording protocol, highlight the involvement of the whole auditory hierarchy in deviance detection and have implications for cognitive and clinical auditory neuroscience. Moreover, the present protocol provides a new research tool into clinical neuroscience so that the functional integrity of the auditory novelty system can now be tested as a whole in a range of clinical populations where the MMN was previously shown to be defective. PMID:26348628

  20. Anatomy, Physiology and Function of the Auditory System

    Science.gov (United States)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (cochlea which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth brain nerve, i.e. the vestibular cochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview about the anatomy of the auditory system is provided by Figure 1.

  1. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  2. Association between language development and auditory processing disorders

    Directory of Open Access Journals (Sweden)

    Caroline Nunes Rocha-Muniz

    2014-06-01

    Full Text Available INTRODUCTION: It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. AIM: To investigate the performance and lateralization effects on auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). MATERIAL AND METHODS: Prospective study. Seventy-five children, aged 6-12 years, were separated in three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All went through the following tests: speech-in-noise test, Dichotic Digit test and Pitch Pattern Sequencing test. RESULTS: The effects of lateralization were observed only in the SLI group, with the left ear presenting much lower scores than the right ear. The inter-group analysis has shown that in all tests children from the APD and SLI groups had significantly poorer performance compared to the TD group. Moreover, the SLI group presented worse results than the APD group. CONCLUSION: This study has shown, in children with SLI, an inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that neural processes (required for auditory processing are different between auditory processing and speech disorders.

  3. Selective Attention to Auditory Memory Neurally Enhances Perceptual Precision.

    Science.gov (United States)

    Lim, Sung-Joo; Wöstmann, Malte; Obleser, Jonas

    2015-12-09

    Selective attention to a task-relevant stimulus facilitates encoding of that stimulus into a working memory representation. It is less clear whether selective attention also improves the precision of a stimulus already represented in memory. Here, we investigate the behavioral and neural dynamics of selective attention to representations in auditory working memory (i.e., auditory objects) using psychophysical modeling and model-based analysis of electroencephalographic signals. Human listeners performed a syllable pitch discrimination task where two syllables served as to-be-encoded auditory objects. Valid (vs neutral) retroactive cues were presented during retention to allow listeners to selectively attend to the to-be-probed auditory object in memory. Behaviorally, listeners represented auditory objects in memory more precisely (expressed by steeper slopes of a psychometric curve) and made faster perceptual decisions when valid compared to neutral retrocues were presented. Neurally, valid compared to neutral retrocues elicited a larger frontocentral sustained negativity in the evoked potential as well as enhanced parietal alpha/low-beta oscillatory power (9-18 Hz) during memory retention. Critically, individual magnitudes of alpha oscillatory power (7-11 Hz) modulation predicted the degree to which valid retrocues benefitted individuals' behavior. Our results indicate that selective attention to a specific object in auditory memory does benefit human performance not by simply reducing memory load, but by actively engaging complementary neural resources to sharpen the precision of the task-relevant object in memory. Can selective attention improve the representational precision with which objects are held in memory? And if so, what are the neural mechanisms that support such improvement? These issues have been rarely examined within the auditory modality, in which acoustic signals change and vanish on a milliseconds time scale. Introducing a new auditory memory

  4. Convergent Filter Bases

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2015-09-01

    We are inspired by the work of Henri Cartan [16], Bourbaki [10] (TG. I, Filtres), and Claude Wagschal [34]. We define the base of a filter, the image filter, convergent filter bases, the limit filter, and the filter base of tails (fr: filtre des sections).
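For readers new to the terminology, the classical Bourbaki-style definitions behind these formalizations can be sketched as follows (notation is ours, not Mizar syntax):

```latex
% Filter base: a nonempty family of nonempty sets that is downward directed.
\mathcal{B} \subseteq \mathcal{P}(X) \text{ is a filter base} \iff
\mathcal{B} \neq \varnothing,\quad \varnothing \notin \mathcal{B},\quad
\forall B_1, B_2 \in \mathcal{B}\ \exists B_3 \in \mathcal{B}:\ B_3 \subseteq B_1 \cap B_2.

% Convergence in a topological space: the filter base converges to x
% iff every neighbourhood V of x contains a member of the base.
\mathcal{B} \to x \iff \forall V \in \mathcal{V}(x)\ \exists B \in \mathcal{B}:\ B \subseteq V.
```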

  5. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    … selectivity. Three groups of listeners were considered: (a) normal-hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters was derived for each hearing-impaired listener. The simulations showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from …

  6. Event-related potentials in auditory backward recognition masking: a new way to study the neurophysiological basis of sensory memory in humans.

    Science.gov (United States)

    Winkler, I; Näätänen, R

    1992-06-22

    Task-irrelevant pairs of short tones were presented to healthy human subjects while electric potentials were recorded from their scalp ('event-related brain potential', ERP). Infrequent increments in the frequency of the first tone of the repetitive tone-pair elicited an extra ERP component termed 'mismatch negativity' (MMN) when the silent interval between the first and second tone of the pair ('inter-tone interval') was long (150, 300, or 400 ms) but not when this interval was short (20 or 50 ms). This effect did not depend on whether the two tones of the tone-pair were presented to the same or to different ears. The present inter-tone interval effect is consistent with the effects of backward-masking on recognition performance in audition, suggesting that the MMN reflects the neurophysiological basis of echoic memory.

  7. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Auditory hallucinations are uncommon phenomena that can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed between 1996 and 2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All cases occurred after an ischemic lesion of the right temporal lobe. Within no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  8. Detection of human papillomavirus among women in Laos: feasibility of using filter paper card and prevalence of high-risk types.

    Science.gov (United States)

    Phongsavan, Keokedthong; Gustavsson, Inger; Marions, Lena; Phengsavanh, Alongkone; Wahlström, Rolf; Gyllensten, Ulf

    2012-10-01

    Persistent infection with high-risk (HR) human papillomavirus (HPV) is a well-recognized cause of cervical cancer, but little is known about the situation in Laos. The aims of the study were to determine the prevalence of HR-HPV among Lao women and to evaluate the use of a filter paper card (FTA Elute Micro Card) for collection of cervical cells in the humid tropical climate. This is a cross-sectional study including 1922 women from 3 provinces in Laos. During a gynecological examination, cervical cells were collected and applied to the FTA card, followed by HPV typing using a real-time polymerase chain reaction (PCR)-based assay. Overall, 213 of the 1922 women were positive for HR-HPV (11%). The most common type was the group HPV33/52/58 (3%), followed by the single type 16 (2%) and the group 18/45 (1%), respectively. Only 11 cards (0.6%) did not contain a sufficient amount of genomic DNA for PCR-based analysis. The prevalence of HR-HPV infection in Laos is similar to that in other Asian countries, and 40% of the women with an HR-HPV infection would be targeted by the present HPV vaccines. The FTA card is suitable for collection of cervical cells for HR-HPV typing in tropical conditions. This information is important for planning and establishing primary and secondary prevention of cervical cancer in Laos.

  9. Usage of drip drops as stimuli in an auditory P300 BCI paradigm.

    Science.gov (United States)

    Huang, Minqiang; Jin, Jing; Zhang, Yu; Hu, Dewen; Wang, Xingyu

    2018-02-01

    Recently, many auditory BCIs have used beeps as auditory stimuli, although beeps sound unnatural and unpleasant to some people. Natural sounds have been shown to make people feel comfortable, decrease fatigue, and improve the performance of auditory BCI systems. The sound of dripping water is a natural sound that makes people feel relaxed and comfortable. In this work, three kinds of drip-drop sounds were used as stimuli in an auditory BCI system to improve its user-friendliness. This study explored whether drip drops could be used as stimuli in an auditory BCI system. The auditory BCI paradigm with drip-drop stimuli, called the drip-drop paradigm (DP), was compared with the auditory paradigm with beep stimuli, the beep paradigm (BP), in terms of event-related potential amplitudes, online accuracies, and ratings of likability and difficulty, to demonstrate the advantages of DP. DP obtained significantly higher online accuracy and information transfer rate than BP (p < 0.05, Wilcoxon signed-rank test). In addition, DP obtained higher likability scores, with no significant difference in difficulty (p < 0.05, Wilcoxon signed-rank test). The results showed that drip-drop sounds are reliable acoustic materials for stimuli in an auditory BCI system.
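The information transfer rate mentioned above is conventionally computed with the Wolpaw formula; the following is a minimal sketch (the function name, class count, and trial duration are illustrative assumptions, not values from the study):

```python
import math

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute.

    bits per trial = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)),
    scaled by the number of selections (trials) per minute.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance, no information transferred
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_seconds)

# Two-class selection at perfect accuracy, one trial per minute:
print(wolpaw_itr(2, 1.0, 60.0))  # → 1.0 (bit/min)
```

Because the formula is monotonic in accuracy for a fixed trial length, a paradigm with higher online accuracy at the same pace necessarily yields a higher ITR.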

  10. The Effect of Early Visual Deprivation on the Neural Bases of Auditory Processing.

    Science.gov (United States)

    Guerreiro, Maria J S; Putzar, Lisa; Röder, Brigitte

    2016-02-03

    Transient congenital visual deprivation affects visual and multisensory processing. In contrast, the extent to which it affects auditory processing has not been investigated systematically. Research in permanently blind individuals has revealed brain reorganization during auditory processing, involving both intramodal and crossmodal plasticity. The present study investigated the effect of transient congenital visual deprivation on the neural bases of auditory processing in humans. Cataract-reversal individuals and normally sighted controls performed a speech-in-noise task while undergoing functional magnetic resonance imaging. Although there were no behavioral group differences, groups differed in auditory cortical responses: in the normally sighted group, auditory cortex activation increased with increasing noise level, whereas in the cataract-reversal group, no activation difference was observed across noise levels. An auditory activation of visual cortex was not observed at the group level in cataract-reversal individuals. The present data suggest prevailing auditory processing advantages after transient congenital visual deprivation, even many years after sight restoration. The present study demonstrates that people whose sight was restored after a transient period of congenital blindness show more efficient cortical processing of auditory stimuli (here speech), similarly to what has been observed in congenitally permanently blind individuals. These results underscore the importance of early sensory experience in permanently shaping brain function. Copyright © 2016 the authors.

  11. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    Science.gov (United States)

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Auditory Attention and Comprehension During a Simulated Night Shift: Effects of Task Characteristics.

    Science.gov (United States)

    Pilcher, June J; Jennings, Kristen S; Phillips, Ginger E; McCubbin, James A

    2016-11-01

    The current study investigated performance on a dual auditory task during a simulated night shift. Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about their effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation. Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions. The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied with the content of the auditory material: when the material was interesting and easy, the participants performed better, whereas night work had a greater negative effect when the material was less interesting and more difficult. These findings are consistent with research showing that vigilance decreases during the night. The results suggest that auditory comprehension suffers when individuals are required to work at night. Maintaining attention and controlling effort, especially on passages that are less interesting or more difficult, could improve performance during night shifts. The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork. © 2016, Human Factors and Ergonomics Society.

  13. Miniaturized dielectric waveguide filters

    OpenAIRE

    Sandhu, MY; Hunter, IC

    2016-01-01

    Design techniques for a new class of integrated monolithic high-permittivity ceramic waveguide filters are presented. These filters enable a size reduction of 50% compared to air-filled transverse electromagnetic filters with the same unloaded Q-factor. Designs for Chebyshev and asymmetric generalised Chebyshev filters and a diplexer are presented, with experimental results for an 1800 MHz Chebyshev filter and a 1700 MHz generalised Chebyshev filter showing excellent agreement with theory.

  14. Exploration of auditory P50 gating in schizophrenia by way of difference waves

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2006-01-01

    Electroencephalographic measures of information processing encompass both mid-latency evoked potentials, like the pre-attentive auditory P50 potential, and a host of later, more cognitive components, like P300 and N400. Difference waves have mostly been employed in studies of later event-related potentials, but here this method, along with low-frequency filtering, is applied exploratorily to auditory P50 gating data previously analyzed in the standard format (reported in Am J Psychiatry 2003, 160:2236-8). The exploration was motivated by the observation during visual peak detection that the AEP waveform …

  15. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  16. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  17. Determination of human-use pharmaceuticals in filtered water by direct aqueous injection: high-performance liquid chromatography/tandem mass spectrometry

    Science.gov (United States)

    Furlong, Edward T.; Noriega, Mary C.; Kanagy, Christopher J.; Kanagy, Leslie K.; Coffey, Laura J.; Burkhardt, Mark R.

    2014-01-01

    This report describes a method for the determination of 110 human-use pharmaceuticals using a 100-microliter aliquot of a filtered water sample directly injected into a high-performance liquid chromatograph coupled to a triple-quadrupole tandem mass spectrometer using an electrospray ionization source operated in the positive ion mode. The pharmaceuticals were separated by using a reversed-phase gradient of formic acid/ammonium formate-modified water and methanol. Multiple reaction monitoring of two fragmentations of the protonated molecular ion of each pharmaceutical to two unique product ions was used to identify each pharmaceutical qualitatively. The primary multiple reaction monitoring precursor-product ion transition was quantified for each pharmaceutical relative to the primary multiple reaction monitoring precursor-product transition of one of 19 isotope-dilution standard pharmaceuticals or the pesticide atrazine, using an exact stable isotope analogue where possible. Each isotope-dilution standard was selected, when possible, for its chemical similarity to the unlabeled pharmaceutical of interest, and added to the sample after filtration but prior to analysis. Method performance for each pharmaceutical was determined for reagent water, groundwater, treated drinking water, surface water, treated wastewater effluent, and wastewater influent sample matrixes that this method will likely be applied to. Each matrix was evaluated in order of increasing complexity to demonstrate (1) the sensitivity of the method in different water matrixes and (2) the effect of sample matrix, particularly matrix enhancement or suppression of the precursor ion signal, on the quantitative determination of pharmaceutical concentrations. Recovery of water samples spiked (fortified) with the suite of pharmaceuticals determined by this method typically was greater than 90 percent in reagent water, groundwater, drinking water, and surface water. Correction for ambient environmental
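Quantification relative to an isotope-dilution standard, as described above, conventionally takes the following form (the symbols are generic illustrations, not notation from the report):

```latex
% Analyte concentration from the ratio of analyte to isotope-dilution
% standard (IDS) peak areas, corrected by a relative response factor
% (RRF) determined from calibration standards; matrix effects largely
% cancel because analyte and labeled standard co-elute.
C_{\text{analyte}} = \frac{A_{\text{analyte}}}{A_{\text{IDS}}} \cdot \frac{C_{\text{IDS}}}{\mathrm{RRF}}
```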

  18. How do auditory cortex neurons represent communication sounds?

    Science.gov (United States)

    Gaucher, Quentin; Huetz, Chloé; Gourévitch, Boris; Laudanski, Jonathan; Occelli, Florian; Edeline, Jean-Marc

    2013-11-01

    A major goal in auditory neuroscience is to characterize how communication sounds are represented at the cortical level. The present review aims at investigating the role of auditory cortex in the processing of speech, bird songs and other vocalizations, all of which are spectrally and temporally highly structured sounds. Whereas earlier studies have simply looked for neurons exhibiting higher firing rates to particular conspecific vocalizations over their modified, artificially synthesized versions, more recent studies determined the coding capacity of temporal spike patterns, which are prominent in primary and non-primary areas (and also in non-auditory cortical areas). In several cases, this information seems to be correlated with the behavioral performance of human or animal subjects, suggesting that spike-timing based coding strategies might set the foundations of our perceptive abilities. Also, it is now clear that the responses of auditory cortex neurons are highly nonlinear and that their responses to natural stimuli cannot be predicted from their responses to artificial stimuli such as moving ripples and broadband noises. Since auditory cortex neurons cannot follow rapid fluctuations of the vocalization envelope, they only respond at specific time points during communication sounds, which can serve as temporal markers for integrating the temporal and spectral processing taking place at subcortical relays. Thus, the temporal sparse code of auditory cortex neurons can be considered as a first step for generating high-level representations of communication sounds independent of the acoustic characteristics of these sounds. This article is part of a Special Issue entitled "Communication Sounds and the Brain: New Directions and Perspectives". Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially the posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimulus onset. Correspondingly, differences between responses to auditory stimuli alone and to combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields from approximately 110 ms after stimulus onset. These data indicate that visual influences are most salient in fields P and DCB, where they manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Fronto-parietal and fronto-temporal theta phase synchronization for visual and auditory-verbal working memory.

    Science.gov (United States)

    Kawasaki, Masahiro; Kitajo, Keiichi; Yamaguchi, Yoko

    2014-01-01

    In humans, theta phase (4-8 Hz) synchronization observed on electroencephalography (EEG) plays an important role in the manipulation of mental representations during working memory (WM) tasks; fronto-temporal synchronization is involved in auditory-verbal WM tasks and fronto-parietal synchronization is involved in visual WM tasks. However, it is uncertain whether theta phase synchronization is able to select the to-be-manipulated modality. To address the issue, we recorded EEG data from subjects who were performing auditory-verbal and visual WM tasks; we compared the theta synchronizations when subjects performed either auditory-verbal or visual manipulations in separate WM tasks, or performed both manipulations in the same WM task. The auditory-verbal WM task required subjects to calculate numbers presented by an auditory-verbal stimulus, whereas the visual WM task required subjects to move a spatial location in a mental representation in response to a visual stimulus. The dual WM task required subjects to manipulate auditory-verbal, visual, or both auditory-verbal and visual representations while maintaining auditory-verbal and visual representations. Our time-frequency EEG analyses revealed significant fronto-temporal theta phase synchronization during auditory-verbal manipulation in both auditory-verbal and auditory-verbal/visual WM tasks, but not during visual manipulation tasks. Similarly, we observed significant fronto-parietal theta phase synchronization during visual manipulation tasks, but not during auditory-verbal manipulation tasks. Moreover, we observed significant synchronization in both the fronto-temporal and fronto-parietal theta signals during simultaneous auditory-verbal/visual manipulations. These findings suggest that theta synchronization flexibly connects the brain areas that manipulate WM.
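Phase synchronization of the kind analyzed here is commonly quantified with the phase-locking value (PLV); the following is a minimal sketch (the helper name and the synthetic 6 Hz signals are illustrative assumptions, not the study's analysis pipeline):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """Phase-locking value between two phase time series (in radians).

    PLV = |mean(exp(i * (phase_a - phase_b)))|, ranging from 0
    (no consistent phase relation) to 1 (perfect phase locking).
    """
    phase_a = np.asarray(phase_a, dtype=float)
    phase_b = np.asarray(phase_b, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

# Two channels locked at a constant lag give PLV = 1.
t = np.linspace(0.0, 1.0, 500)
theta_frontal = 2 * np.pi * 6 * t            # 6 Hz theta phase
theta_parietal = theta_frontal + np.pi / 4   # constant phase lag
print(round(phase_locking_value(theta_frontal, theta_parietal), 3))  # → 1.0
```

Note that the PLV is insensitive to the size of the lag, only to its consistency, which is why it indexes synchronization rather than simultaneity.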

  2. Subthalamic deep brain stimulation improves auditory sensory gating deficit in Parkinson's disease.

    Science.gov (United States)

    Gulberti, A; Hamel, W; Buhmann, C; Boelmans, K; Zittel, S; Gerloff, C; Westphal, M; Engel, A K; Schneider, T R; Moll, C K E

    2015-03-01

    While motor effects of dopaminergic medication and subthalamic nucleus deep brain stimulation (STN-DBS) in Parkinson's disease (PD) patients are well explored, their effects on sensory processing are less well understood. Here, we studied the impact of levodopa and STN-DBS on auditory processing. Rhythmic auditory stimulation (RAS) was presented at frequencies between 1 and 6 Hz in a passive listening paradigm. High-density EEG recordings were obtained before (levodopa ON/OFF) and 5 months following STN surgery (ON/OFF STN-DBS). We compared auditory evoked potentials (AEPs) elicited by RAS in 12 PD patients to those in age-matched controls. Tempo-dependent amplitude suppression of the auditory P1/N1-complex was used as an indicator of auditory gating. Parkinsonian patients showed significantly larger AEP amplitudes (P1, N1) and longer AEP latencies (N1) compared to controls. Neither interruption of dopaminergic medication nor of STN-DBS had an immediate effect on these AEPs. However, chronic STN-DBS had a significant effect on the abnormal auditory gating characteristics of parkinsonian patients and restored a physiological P1/N1-amplitude attenuation profile in response to RAS with increasing stimulus rates. This differential treatment effect suggests a divergent mode of action of levodopa and STN-DBS on auditory processing. STN-DBS may improve early attentive filtering of redundant auditory stimuli, possibly at the level of the frontal cortex. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Selection vector filter framework

    Science.gov (United States)

    Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.

    2003-10-01

    We provide a unified framework of nonlinear vector techniques outputting the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of the weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed techniques such as the vector median, basic vector directional, directional distance, weighted vector median, and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure, which has two independent weight vectors for the angular and distance domains of the vector space. To adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It will be shown that the proposed method has the required properties: the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure, and, finally, simplicity of filter representation, analysis, design, and implementation. Simulation studies also indicate that the new filters are computationally attractive and perform excellently in environments corrupted by bit errors and impulsive noise.
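A minimal sketch of the core "lowest-ranked vector" selection underlying this filter class, using a plain (unweighted) vector median rather than the paper's full weighted angular/distance framework (the helper name is ours):

```python
import numpy as np

def vector_median(vectors, p=2):
    """Return the vector median of a set of vectors.

    The vector median is the input vector that minimizes the aggregate
    L_p distance to all other vectors in the set, i.e. the lowest-ranked
    vector under an aggregate-distance ordering. Because the output is
    always one of the inputs, impulsive outliers are rejected.
    """
    vectors = np.asarray(vectors, dtype=float)
    # Pairwise distance matrix between all input vectors.
    diffs = vectors[:, None, :] - vectors[None, :, :]
    dists = np.linalg.norm(diffs, ord=p, axis=-1)
    # Aggregate distance of each vector to all the others.
    agg = dists.sum(axis=1)
    return vectors[np.argmin(agg)]

# Example: a 3x3 window of RGB pixels with one impulsive outlier
# (255, 0, 0); the outlier has the largest aggregate distance and
# therefore cannot be selected as the output.
window = [(10, 12, 11), (11, 10, 12), (255, 0, 0),
          (9, 11, 10), (10, 10, 10), (12, 12, 13),
          (11, 11, 9), (10, 13, 12), (9, 10, 11)]
print(vector_median(window))
```

The weighted variants in the paper generalize this by weighting each pairwise term (separately in magnitude and direction) before ranking, which is what allows one structure to emulate the vector median, directional, and distance filters.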

  4. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  5. The Auditory Enhancement Effect is Not Reflected in the 80-Hz Auditory Steady-State Response

    OpenAIRE

    Carcagno, Samuele; Plack, Christopher J.; Portron, Arthur; Semal, Catherine; Demany, Laurent

    2014-01-01

    The perceptual salience of a target tone presented in a multitone background is increased by the presentation of a precursor sound consisting of the multitone background alone. It has been proposed that this “enhancement” phenomenon results from an effective amplification of the neural response to the target tone. In this study, we tested this hypothesis in humans, by comparing the auditory steady-state response (ASSR) to a target tone that was enhanced by a precursor sound with the ASSR to a...

  6. Delayed Auditory Feedback and Movement

    Science.gov (United States)

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  7. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the mutations in the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases and nine (56%) had the wild-type allele (CC). Of these mutation carriers, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  8. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  9. Recirculating electric air filter

    Science.gov (United States)

    Bergman, W.

    1985-01-09

    An electric air filter cartridge has a cylindrical inner high voltage electrode, a layer of filter material, and an outer ground electrode formed of a plurality of segments moveably connected together. The outer electrode can be easily opened to remove or insert filter material. Air flows through the two electrodes and the filter material and is exhausted from the center of the inner electrode.

  10. Passive Power Filters

    CERN Document Server

    Künzi, R.

    2015-06-15

Power converters require passive low-pass filters which are capable of reducing voltage ripples effectively. In contrast to signal filters, the components of power filters must carry large currents or withstand large voltages, respectively. In this paper, three different suitable filter structures for d.c./d.c. power converters with inductive load are introduced. The formulas needed to calculate the filter components are derived step by step and practical examples are given. The behaviour of the three discussed filters is compared by means of the examples. Practical aspects for the realization of power filters are also discussed.
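The component formulas themselves are derived in the paper and not reproduced in this abstract. As a rough, hedged sketch of the kind of calculation involved, a single second-order LC low-pass stage has corner frequency f_c = 1/(2π√(LC)) and attenuates ripple at roughly 40 dB/decade above it; the component and ripple values below are illustrative assumptions, not values from the paper:

```python
import math

def lc_lowpass_cutoff(inductance_h, capacitance_f):
    """Corner (resonance) frequency in Hz of one LC low-pass stage."""
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

def ripple_attenuation_db(f_ripple_hz, inductance_h, capacitance_f):
    """Asymptotic attenuation (dB) of a voltage ripple well above the
    corner frequency, using the -40 dB/decade roll-off of a second-order stage."""
    f_c = lc_lowpass_cutoff(inductance_h, capacitance_f)
    return 40.0 * math.log10(f_ripple_hz / f_c)

# Illustrative: a 1 mH / 100 uF stage against a 20 kHz switching ripple
f_c = lc_lowpass_cutoff(1e-3, 100e-6)                    # ~503 Hz
attenuation = ripple_attenuation_db(20e3, 1e-3, 100e-6)  # ~64 dB
```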

  11. Selective Attention to Visual Stimuli Using Auditory Distractors Is Altered in Alpha-9 Nicotinic Receptor Subunit Knock-Out Mice.

    Science.gov (United States)

    Terreros, Gonzalo; Jorratt, Pascal; Aedo, Cristian; Elgoyhen, Ana Belén; Delano, Paul H

    2016-07-06

    During selective attention, subjects voluntarily focus their cognitive resources on a specific stimulus while ignoring others. Top-down filtering of peripheral sensory responses by higher structures of the brain has been proposed as one of the mechanisms responsible for selective attention. A prerequisite to accomplish top-down modulation of the activity of peripheral structures is the presence of corticofugal pathways. The mammalian auditory efferent system is a unique neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear bundle, and it has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear neurons in selective attention paradigms. Here, we trained wild-type and α-9 nicotinic receptor subunit knock-out (KO) mice, which lack cholinergic transmission between medial olivocochlear neurons and outer hair cells, in a two-choice visual discrimination task and studied the behavioral consequences of adding different types of auditory distractors. In addition, we evaluated the effects of contralateral noise on auditory nerve responses as a measure of the individual strength of the olivocochlear reflex. We demonstrate that KO mice have a reduced olivocochlear reflex strength and perform poorly in a visual selective attention paradigm. These results confirm that an intact medial olivocochlear transmission aids in ignoring auditory distraction during selective attention to visual stimuli. The auditory efferent system is a neural network that originates in the auditory cortex and projects to the cochlear receptor through the olivocochlear system. It has been proposed to function as a top-down filter of peripheral auditory responses during attention to cross-modal stimuli. However, to date, there is no conclusive evidence of the involvement of olivocochlear

  12. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing... in a diagnostic rhyme test. The framework was constructed such that discrimination errors originating from the front-end and the back-end were separated. The front-end was fitted to individual listeners with cochlear hearing loss according to non-speech data, and speech data were obtained in the same listeners...

  13. Filter replacement lifetime prediction

    Science.gov (United States)

    Hamann, Hendrik F.; Klein, Levente I.; Manzer, Dennis G.; Marianno, Fernando J.

    2017-10-25

    Methods and systems for predicting a filter lifetime include building a filter effectiveness history based on contaminant sensor information associated with a filter; determining a rate of filter consumption with a processor based on the filter effectiveness history; and determining a remaining filter lifetime based on the determined rate of filter consumption. Methods and systems for increasing filter economy include measuring contaminants in an internal and an external environment; determining a cost of a corrosion rate increase if unfiltered external air intake is increased for cooling; determining a cost of increased air pressure to filter external air; and if the cost of filtering external air exceeds the cost of the corrosion rate increase, increasing an intake of unfiltered external air.
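The three steps named in the first claim (build an effectiveness history, determine a consumption rate, derive the remaining lifetime) can be sketched minimally as below; the effectiveness scale, the end-of-life threshold of 0.2, and the weekly sampling interval are assumptions for illustration, not values from the patent:

```python
def filter_consumption_rate(effectiveness_history):
    """Average drop in filter effectiveness per time step, estimated from a
    chronological list of sensor-derived effectiveness readings (1.0 = new)."""
    if len(effectiveness_history) < 2:
        return 0.0
    drops = [a - b for a, b in zip(effectiveness_history, effectiveness_history[1:])]
    return max(sum(drops) / len(drops), 0.0)

def remaining_lifetime(effectiveness_history, end_of_life=0.2):
    """Time steps until effectiveness falls to the end-of-life threshold."""
    rate = filter_consumption_rate(effectiveness_history)
    if rate == 0.0:
        return float("inf")
    return (effectiveness_history[-1] - end_of_life) / rate

# Illustrative: weekly effectiveness estimates from a contaminant sensor
history = [1.0, 0.95, 0.91, 0.86, 0.82]
weeks_left = remaining_lifetime(history)  # ~13.8 weeks until the 0.2 threshold
```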

  14. Integrated trimodal SSEP experimental setup for visual, auditory and tactile stimulation

    Science.gov (United States)

    Kuś, Rafał; Spustek, Tomasz; Zieleniewska, Magdalena; Duszyk, Anna; Rogowski, Piotr; Suffczyński, Piotr

    2017-12-01

Objective. Steady-state evoked potentials (SSEPs), the brain responses to repetitive stimulation, are commonly used in both clinical practice and scientific research. Particular brain mechanisms underlying SSEPs in different modalities (i.e. visual, auditory and tactile) are very complex and still not completely understood. Each response has distinct resonant frequencies and exhibits a particular brain topography. Moreover, the topography can be frequency-dependent, as in the case of auditory potentials. However, to study each modality separately and also to investigate multisensory interactions through multimodal experiments, a proper experimental setup appears to be of critical importance. The aim of this study was to design and evaluate a novel SSEP experimental setup providing a repetitive stimulation in three different modalities (visual, tactile and auditory) with a precise control of stimuli parameters. Results from a pilot study with stimulation in a particular modality and in two modalities simultaneously prove the feasibility of the device for studying the SSEP phenomenon. Approach. We developed a setup of three separate stimulators that allows for a precise generation of repetitive stimuli. Besides sequential stimulation in a particular modality, parallel stimulation in up to three different modalities can be delivered. The stimulus in each modality is characterized by a stimulation frequency and a waveform (sine or square wave). We also present a novel methodology for the analysis of SSEPs. Main results. Apart from constructing the experimental setup, we conducted a pilot study with both sequential and simultaneous stimulation paradigms. EEG signals recorded during this study were analyzed with advanced methodology based on spatial filtering and adaptive approximation, followed by statistical evaluation. Significance. We developed a novel experimental setup for performing SSEP experiments. In this sense our study continues the ongoing research in this field.
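Generating the repetitive stimulus waveforms described above (a sine or square wave at a chosen stimulation frequency) might look as follows; the specific frequencies and the 44.1 kHz sampling rate are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def ssep_stimulus(freq_hz, duration_s, fs=44100, waveform="sine"):
    """Modulation envelope for one SSEP stimulation channel: a sine or
    square wave at the stimulation frequency, scaled to the range 0..1."""
    t = np.arange(int(duration_s * fs)) / fs
    if waveform == "sine":
        return 0.5 * (1.0 + np.sin(2 * np.pi * freq_hz * t))
    elif waveform == "square":
        return (np.sin(2 * np.pi * freq_hz * t) >= 0).astype(float)
    raise ValueError("waveform must be 'sine' or 'square'")

# Illustrative: a 7 Hz visual flicker and a 23 Hz auditory envelope, 2 s each
visual = ssep_stimulus(7.0, 2.0, waveform="square")
auditory = ssep_stimulus(23.0, 2.0, waveform="sine")
```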

  15. Non-Euclidean phasor analysis for quantification of oxidative stress in ex vivo human skin exposed to sun filters using fluorescence lifetime imaging microscopy

    Science.gov (United States)

    Osseiran, Sam; Roider, Elisabeth M.; Wang, Hequn; Suita, Yusuke; Murphy, Michael; Fisher, David E.; Evans, Conor L.

    2017-12-01

    Chemical sun filters are commonly used as active ingredients in sunscreens due to their efficient absorption of ultraviolet (UV) radiation. Yet, it is known that these compounds can photochemically react with UV light and generate reactive oxygen species and oxidative stress in vitro, though this has yet to be validated in vivo. One label-free approach to probe oxidative stress is to measure and compare the relative endogenous fluorescence generated by cellular coenzymes nicotinamide adenine dinucleotides and flavin adenine dinucleotides. However, chemical sun filters are fluorescent, with emissive properties that contaminate endogenous fluorescent signals. To accurately distinguish the source of fluorescence in ex vivo skin samples treated with chemical sun filters, fluorescence lifetime imaging microscopy data were processed on a pixel-by-pixel basis using a non-Euclidean separation algorithm based on Mahalanobis distance and validated on simulated data. Applying this method, ex vivo samples exhibited a small oxidative shift when exposed to sun filters alone, though this shift was much smaller than that imparted by UV irradiation. Given the need for investigative tools to further study the clinical impact of chemical sun filters in patients, the reported methodology may be applied to visualize chemical sun filters and measure oxidative stress in patients' skin.
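As a much-simplified illustration of the separation step, pixels can be assigned to the closer of two reference distributions by Mahalanobis distance. The 2-D coordinates, reference means, and covariances below are invented for illustration; the authors' actual pixel-by-pixel algorithm operates on phasor-transformed FLIM data and was validated on simulated data:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of point x from a reference distribution."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

def classify_pixel(x, refs):
    """Assign a pixel (here a 2-D point) to the nearest reference class,
    e.g. 'endogenous' fluorescence vs. 'sun_filter' fluorescence."""
    return min(refs, key=lambda name: mahalanobis(x, *refs[name]))

# Hypothetical reference distributions: (mean, covariance) per class
refs = {
    "endogenous": (np.array([0.6, 0.3]), np.array([[0.01, 0.0], [0.0, 0.01]])),
    "sun_filter": (np.array([0.2, 0.1]), np.array([[0.02, 0.0], [0.0, 0.02]])),
}
label = classify_pixel(np.array([0.55, 0.28]), refs)  # -> 'endogenous'
```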

  16. Investigating the Role of Auditory Feedback in a Multimodal Biking Experience

    DEFF Research Database (Denmark)

    Bruun-Pedersen, Jon Ram; Grani, Francesco; Serafin, Stefania

    2017-01-01

    In this paper, we investigate the role of auditory feedback in affecting perception of effort while biking in a virtual environment. Subjects were biking on a stationary chair bike, while exposed to 3D renditions of a recumbent bike inside a virtual environment (VE). The VE simulated a park...... and was created in the Unity5 engine. While biking, subjects were exposed to 9 kinds of auditory feedback (3 amplitude levels with three different filters) which were continuously triggered corresponding to pedal speed, representing the sound of the wheels and bike/chain mechanics. Subjects were asked to rate...... the perception of exertion using the Borg RPE scale. Results of the experiment showed that most subjects perceived a difference in mechanical resistance from the bike between conditions, but did not consciously notice the variations of the auditory feedback, although these were significantly varied. This points...

  17. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

A computational spectrotemporal auditory model based on neurophysiological findings in early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. Shown by these examples, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway from the rapid phase-locking (several kHz in auditory nerve), to moderate rates of synchrony (several hundred Hz in midbrain), to much lower rates of modulations in the cortex (around 30 Hz). To complete this model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from the representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectro-temporal Modulation Transfer Functions (MTF) of this model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass function with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. Therefore, the model is used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noise and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and carry the crucial information about the spectral profile. This claim is justified by perceptually tolerable resynthesized sounds from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional

  18. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound’s motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts, and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  19. Optimization of filter loading

    International Nuclear Information System (INIS)

Turney, J.H.; Gardiner, D.E. (Sacramento Municipal Utility District, Herald, CA)

    1985-01-01

The introduction of 10 CFR Part 61 has created potential difficulties in the disposal of spent cartridge filters. When this report was prepared, Rancho Seco had no method of packaging and disposing of class B or C filters. This work examined methods to minimize the total operating cost of cartridge filters while maintaining them below the class A limit. It was found that by encapsulating filters in cement, the filter operating costs could be minimized.

  20. Auditory bones obtained by synchrotron radiation computed tomography at SPring-8

    International Nuclear Information System (INIS)

    Hashimoto, E.; Sugiyama, H.; Maksimenko, A.

    2005-01-01

A series of tomograms and 3D reconstructions of the inner structure of the human auditory bone were obtained for the first time by absorption X-ray computed tomography using synchrotron radiation. The experiment was performed at the very long transport channel beam line BL29XUL, where X-rays were available at 1000 m from the source point. This method is of great worth for making anatomical observations of auditory structures without destroying the specimens. (author)

  1. Extensive Tonotopic Mapping across Auditory Cortex Is Recapitulated by Spectrally Directed Attention and Systematically Related to Cortical Myeloarchitecture.

    Science.gov (United States)

    Dick, Frederic K; Lehet, Matt I; Callaghan, Martina F; Keller, Tim A; Sereno, Martin I; Holt, Lori L

    2017-12-13

Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation (acoustic frequency) might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization. SIGNIFICANCE STATEMENT Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe. Although

  2. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans showed equivocal results regarding the lateralization of processing. Whereas experiments with a moderate background found evidence for right lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left lateralized processing. With the present fMRI study, we compared the task dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with a loud background scanner noise and a fast low-angle shot (FLASH) sequence with a soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurement, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones. This was shown by a strong effect of the additional contralateral noise on the activity in the left auditory cortex. This means a massive reduction in background scanner noise still leads to a significant left lateralized effect. This suggests that the reversed lateralization in fMRI studies with loud background noise in contrast to studies with softer background cannot be fully explained by the MRI background noise. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Changes in otoacoustic emissions during selective auditory and visual attention.

    Science.gov (United States)

    Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis

    2015-05-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing-the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.

  4. Changes in otoacoustic emissions during selective auditory and visual attention

    Science.gov (United States)

    Walsh, Kyle P.; Pasanen, Edward G.; McFadden, Dennis

    2015-01-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing—the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2–3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater. PMID:25994703

  5. Auditory analysis for speech recognition based on physiological models

    Science.gov (United States)

    Jeon, Woojay; Juang, Biing-Hwang

    2004-05-01

    To address the limitations of traditional cepstrum or LPC based front-end processing methods for automatic speech recognition, more elaborate methods based on physiological models of the human auditory system may be used to achieve more robust speech recognition in adverse environments. For this purpose, a modified version of a model of the primary auditory cortex featuring a three dimensional mapping of auditory spectra [Wang and Shamma, IEEE Trans. Speech Audio Process. 3, 382-395 (1995)] is adopted and investigated for its use as an improved front-end processing method. The study is conducted in two ways: first, by relating the model's redundant representation to traditional spectral representations and showing that the former not only encompasses information provided by the latter, but also reveals more relevant information that makes it superior in describing the identifying features of speech signals; and second, by observing the statistical features of the representation for various classes of sound to show how different identifying features manifest themselves as specific patterns on the cortical map, thereby becoming a place-coded data set on which detection theory could be applied to simulate auditory perception and cognition.

  6. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  7. Binaural auditory beats affect vigilance performance and mood.

    Science.gov (United States)

    Lane, J D; Kasian, S J; Owens, J E; Marsh, G R

    1998-01-01

    When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
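The stimulus described in the opening sentence is straightforward to construct: one pure tone per ear, differing by the desired beat frequency. The 220 Hz carrier below is an illustrative assumption; the study specifies only the beat frequencies (16 and 24 Hz for beta, 1.5 and 4 Hz for theta/delta):

```python
import numpy as np

def binaural_beat(carrier_hz, beat_hz, duration_s, fs=44100):
    """Stereo signal whose left/right carriers differ by beat_hz;
    the listener perceives a single tone beating at beat_hz."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * carrier_hz * t)
    right = np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape (samples, 2)

# Illustrative: a 16 Hz (EEG beta range) beat on an assumed 220 Hz carrier
stim = binaural_beat(220.0, 16.0, duration_s=5.0)
```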

  8. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

Various studies have highlighted plasticity of the auditory system from visual stimuli, limiting the trained field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues or Head-Related Transfer Function (HRTF) through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field while also offering a natural perception-action coupling through the constant awareness of one's hand position. The adaptation process to non-individualized HRTFs was realized through a spatial search game application. A total of 25 subjects participated, consisting of subjects presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.

  9. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can profoundly affect the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  10. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  11. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter, with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), and assessment of stuttering severity and central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of duration-related stuttering-like disfluencies. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on speech fluency thus differed between the two groups: fluency improved only in the individuals without auditory processing disorder.
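
The 100-millisecond delay itself reduces to shifting the signal by delay times sample-rate samples. A minimal sketch of that operation (not the Phono Tools implementation, which is not described in the record):

```python
import numpy as np

def delayed_feedback(x, fs, delay_ms=100.0):
    """Delay a mono signal by delay_ms milliseconds, as in delayed auditory
    feedback (DAF): the speaker hears their own voice ~100 ms late."""
    d = int(round(fs * delay_ms / 1000.0))          # delay in samples
    return np.concatenate([np.zeros(d), x])[:len(x)]  # pad front, trim tail

fs = 16000
x = np.ones(fs)              # 1 s placeholder "speech" signal
y = delayed_feedback(x, fs)  # silent for the first 100 ms, then the input
```

In a real-time system the same shift would be implemented with a ring buffer of d samples rather than by offline concatenation.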

  12. Caution and Warning Alarm Design and Evaluation for NASA CEV Auditory Displays: SHFE Information Presentation Directed Research Project (DRPP) report 12.07

    Science.gov (United States)

    Begault, Durand R.; Godfroy, Martine; Sandor, Aniko; Holden, Kritina

    2008-01-01

    The design of caution-warning signals for NASA's Crew Exploration Vehicle (CEV) and other future spacecraft will be based both on best practices drawn from current research and on evaluation of current alarms. A design approach is presented based upon cross-disciplinary examination of psychoacoustic research, human factors experience, aerospace practices, and acoustical engineering requirements. A listening test with thirteen participants was performed involving ranking and grading of current and newly developed caution-warning stimuli under three conditions: (1) alarm levels adjusted for compliance with ISO 7731, "Danger signals for work places - Auditory Danger Signals"; (2) alarm levels adjusted to an overall 15 dBA signal-to-noise ratio; and (3) simulated codec low-pass filtering. Questionnaire data yielded useful insights regarding cognitive associations with the sounds.
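
Condition (2) amounts to scaling the alarm so its RMS level sits 15 dB above the noise RMS. A hedged sketch in plain (unweighted) dB, ignoring the A-weighting implied by dBA; the function and signal names are illustrative assumptions:

```python
import numpy as np

def set_snr(signal, noise, snr_db):
    """Scale `signal` so its RMS is snr_db decibels above the RMS of
    `noise` (plain dB; A-weighting would require a weighting filter)."""
    rms = lambda v: np.sqrt(np.mean(v ** 2))
    gain = 10 ** (snr_db / 20.0) * rms(noise) / (rms(signal) + 1e-12)
    return gain * signal

fs = 48000
rng = np.random.default_rng(1)
noise = rng.standard_normal(fs)                        # background noise
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)   # placeholder alarm
alarm = set_snr(tone, noise, 15.0)                     # alarm at +15 dB SNR
```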

  13. A Brief Period of Postnatal Visual Deprivation Alters the Balance between Auditory and Visual Attention.

    Science.gov (United States)

    de Heering, Adélaïde; Dormal, Giulia; Pelland, Maxime; Lewis, Terri; Maurer, Daphne; Collignon, Olivier

    2016-11-21

    Is a short and transient period of visual deprivation early in life sufficient to induce lifelong changes in how we attend to, and integrate, simple visual and auditory information [1, 2]? This question is of crucial importance given the recent demonstration in both animals and humans that a period of blindness early in life permanently affects the brain networks dedicated to visual, auditory, and multisensory processing [1-16]. To address this issue, we compared a group of adults who had been treated for congenital bilateral cataracts during early infancy with a group of normally sighted controls on a task requiring simple detection of lateralized visual and auditory targets, presented alone or in combination. Redundancy gains obtained from the audiovisual conditions were similar between groups and surpassed the reaction time distribution predicted by Miller's race model. However, in comparison to controls, cataract-reversal patients were faster at processing simple auditory targets and showed differences in how they shifted attention across modalities. Specifically, they were faster at switching attention from visual to auditory inputs than in the reverse situation, while an opposite pattern was observed for controls. Overall, these results reveal that the absence of visual input during the first months of life does not prevent the development of audiovisual integration but enhances the salience of simple auditory inputs, leading to a different crossmodal distribution of attentional resources between auditory and visual stimuli. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  15. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the hallucinations felt to the subjects depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  16. Auditory Pattern Memory

    Science.gov (United States)

    1990-10-31

    [Abstract garbled in the source record. Recoverable fragments concern discrimination of tonal sequences and of single marked intervals, a jitter-detection paradigm, and citations to Abel (1972a,b), Creelman (1962), Getty (1975), Divenyi and Danner (1977), Divenyi and Sachs (1978), Allen, and Espinoza-Varas and Jamieson.]

  17. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.

  18. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    DEFF Research Database (Denmark)

    Gil Carvajal, Juan Camilo; Cubick, Jens; Santurette, Sébastien

    2016-01-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener's eardrums. This requires the acoustical […] the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.

  19. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  20. Laboratory for filter testing

    Energy Technology Data Exchange (ETDEWEB)

    Paluch, W.

    1987-07-01

    Filters used for mine draining in brown coal surface mines are tested by the Mine Draining Department of Poltegor. Laboratory tests of new types of filters developed by Poltegor are analyzed. Two types of tests are used: tests of scale filter models and tests of experimental units of new filters. Design and operation of the test stands used for testing mechanical properties and hydraulic properties of filters for coal mines are described: dimensions, pressure fluctuations, hydraulic equipment. Examples of testing large-diameter filters for brown coal mines are discussed.

  1. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    Full Text Available The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  2. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect, an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  3. Auditory-Cortex Short-Term Plasticity Induced by Selective Attention

    Science.gov (United States)

    Jääskeläinen, Iiro P.; Ahveninen, Jyrki

    2014-01-01

    The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, “short-term plasticity”, might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even earlier in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance. PMID:24551458

  4. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Full Text Available Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
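
The core idea, that each afferent acts as a stochastic sampler whose "spike" probability grows with instantaneous intensity and that aggregating more samplers yields a more faithful waveform representation, can be caricatured in a few lines. This is a loose illustration of the principle only, not the authors' vocoder, which operated on ten band-filtered signals:

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_sampler(x):
    """One model 'afferent': keep each sample with probability proportional
    to its instantaneous amplitude (spikes are more likely for intense
    waveform features), zeroing the rest."""
    p = np.abs(x) / (np.max(np.abs(x)) + 1e-12)
    return np.where(rng.random(x.shape) < p, x, 0.0)

def aggregate(x, n_afferents):
    """Average several independent stochastic copies of the waveform;
    'deafferentation' corresponds to lowering n_afferents."""
    return np.mean([stochastic_sampler(x) for _ in range(n_afferents)], axis=0)

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 100 * t)  # a 100 Hz test tone

mse = lambda a, b: np.mean((a - b) ** 2)
err_few = mse(aggregate(x, 2), x)    # few surviving afferents
err_many = mse(aggregate(x, 50), x)  # healthy afferent count
```

With more afferents the averaged spike trains track the waveform more closely (err_many < err_few), mirroring the hypothesis that losing afferents degrades the encoding of low-intensity waveform features most.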

  5. Perception of stochastically undersampled sound waveforms: a model of auditory deafferentation

    Science.gov (United States)

    Lopez-Poveda, Enrique A.; Barrios, Pablo

    2013-01-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could deteriorate perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects. PMID:23882176

  6. Sensory Pollution from Bag Filters, Carbon Filters and Combinations

    DEFF Research Database (Denmark)

    Bekö, Gabriel; Clausen, Geo; Weschler, Charles J.

    2008-01-01

    […] by an upstream pre-filter (changed monthly), an EU7 filter protected by an upstream activated carbon (AC) filter, and EU7 filters with an AC filter either downstream or both upstream and downstream. In addition, two types of stand-alone combination filters were evaluated: a bag-type fiberglass filter that contained AC and a synthetic fiber cartridge filter that contained AC. Air that had passed through used filters was most acceptable for those sets in which an AC filter was used downstream of the particle filter. Comparable air quality was achieved with the stand-alone bag filter that contained AC […]

  7. HEPA Filter Vulnerability Assessment

    International Nuclear Information System (INIS)

    GUSTAVSON, R.D.

    2000-01-01

    This assessment of High Efficiency Particulate Air (HEPA) filter vulnerability was requested by the USDOE Office of River Protection (ORP) to satisfy a DOE-HQ directive to evaluate the effect of filter degradation on the facility authorization basis assumptions. Within the scope of this assessment are ventilation system HEPA filters that are classified as Safety-Class (SC) or Safety-Significant (SS) components that perform an accident mitigation function. The objective of the assessment is to verify whether HEPA filters that perform a safety function during an accident are likely to perform as intended to limit release of hazardous or radioactive materials, considering factors that could degrade the filters. Filter degradation factors considered include aging, wetting of filters, exposure to high temperature, exposure to corrosive or reactive chemicals, and exposure to radiation. Screening and evaluation criteria were developed by a site-wide group of HVAC engineers and HEPA filter experts from published empirical data. For River Protection Project (RPP) filters, the only degradation factor that exceeded the screening threshold was filter aging. A subsequent evaluation of the effect of aging on filter strength was conducted, and the results were compared with the performance required to meet the conditions assumed in the RPP Authorization Basis (AB). It was found that the reduction in filter strength due to aging does not affect the filter performance requirements as specified in the AB. A portion of the HEPA filter vulnerability assessment is being conducted by the ORP and is not part of the scope of this study. The ORP is conducting an assessment of the existing policies and programs relating to maintenance, testing, and change-out of HEPA filters used for SC/SS service. This document presents the results of a HEPA filter vulnerability assessment conducted for the River Protection Project as requested by the DOE Office of River Protection.

  8. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorders. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  9. From ear to body: the auditory-motor loop in spatial cognition

    Directory of Open Access Journals (Sweden)

    Isabelle eViaud-Delmon

    2014-09-01

    Full Text Available Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorised on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e. a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorise the localisation of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths allowed us to observe how auditory information was coded to memorise the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.

  10. From ear to body: the auditory-motor loop in spatial cognition.

    Science.gov (United States)

    Viaud-Delmon, Isabelle; Warusfel, Olivier

    2014-01-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was only triggered when walking on a precise location of the area. The position of this target could be coded in relationship to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorize the localization of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths allowed us to observe how auditory information was coded to memorize the position of the target, and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
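
The hidden-target logic, a sound triggered only while the tracked subject stands on the target location, reduces to a radius check on the head coordinates. A minimal sketch; the class name, field names, and trigger radius are illustrative assumptions, not details from the paper:

```python
from dataclasses import dataclass
import math

@dataclass
class AuditoryMaze:
    """Hidden-target check for an auditory 'water maze': the target sound
    plays only while the subject is within `radius` of a hidden location."""
    target_x: float
    target_y: float
    radius: float = 0.3  # metres; assumed trigger radius

    def target_audible(self, x, y):
        # Euclidean distance from the tracked head position to the target
        return math.hypot(x - self.target_x, y - self.target_y) <= self.radius

maze = AuditoryMaze(target_x=1.0, target_y=2.0)
```

In the real set-up this check would run inside the rendering loop, fed by the tracking system's head coordinates on every update.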

  11. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    […] dynamics of human perception in auditory streaming. PMID:23516340

  12. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Directory of Open Access Journals (Sweden)

    Jason A Miranda

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response (ABR) to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  13. Adult plasticity in the subcortical auditory pathway of the maternal mouse.

    Science.gov (United States)

    Miranda, Jason A; Shepard, Kathryn N; McClintock, Shannon K; Liu, Robert C

    2014-01-01

    Subcortical auditory nuclei were traditionally viewed as non-plastic in adulthood so that acoustic information could be stably conveyed to higher auditory areas. Studies in a variety of species, including humans, now suggest that prolonged acoustic training can drive long-lasting brainstem plasticity. The neurobiological mechanisms for such changes are not well understood in natural behavioral contexts due to a relative dearth of in vivo animal models in which to study this. Here, we demonstrate in a mouse model that a natural life experience with increased demands on the auditory system - motherhood - is associated with improved temporal processing in the subcortical auditory pathway. We measured the auditory brainstem response (ABR) to test whether mothers and pup-naïve virgin mice differed in temporal responses to both broadband and tone stimuli, including ultrasonic frequencies found in mouse pup vocalizations. Mothers had shorter latencies for early ABR peaks, indicating plasticity in the auditory nerve and the cochlear nucleus. Shorter interpeak latency between waves IV and V also suggests plasticity in the inferior colliculus. Hormone manipulations revealed that these changes cannot be explained solely by estrogen levels experienced during pregnancy and parturition in mothers. In contrast, we found that pup-care experience, independent of pregnancy and parturition, contributes to shortening auditory brainstem response latencies. These results suggest that acoustic experience in the maternal context imparts plasticity on early auditory processing that lasts beyond pup weaning. In addition to establishing an animal model for exploring adult auditory brainstem plasticity in a neuroethological context, our results have broader implications for models of perceptual, behavioral and neural changes that arise during maternity, where subcortical sensorineural plasticity has not previously been considered.

  14. Ceramic water filters impregnated with silver nanoparticles as a point-of-use water-treatment intervention for HIV-positive individuals in Limpopo Province, South Africa: a pilot study of technological performance and human health benefits.

    Science.gov (United States)

    Abebe, Lydia Shawel; Smith, James A; Narkiewicz, Sophia; Oyanedel-Craver, Vinka; Conaway, Mark; Singo, Alukhethi; Amidou, Samie; Mojapelo, Paul; Brant, Julia; Dillingham, Rebecca

    2014-06-01

    Waterborne pathogens present a significant threat to people living with the human immunodeficiency virus (PLWH). This study presents a randomized, controlled trial that evaluates whether a household-level ceramic water filter (CWF) intervention can improve drinking water quality and decrease days of diarrhea in PLWH in rural South Africa. Seventy-four participants were randomized into an intervention group with CWFs and a control group without filters. Participants in the CWF arm received CWFs impregnated with silver nanoparticles and associated safe-storage containers. Water and stool samples were collected at baseline and 12 months. Diarrhea incidence was self-reported weekly for 12 months. The average diarrhea rate in the control group was 0.064 days/week compared to 0.015 days/week in the intervention group, suggesting that CWFs can improve drinking water quality and decrease days of diarrhea for PLWH in rural South Africa.

  15. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  16. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not during active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Bias aware Kalman filters

    DEFF Research Database (Denmark)

    Drecourt, J.-P.; Madsen, H.; Rosbjerg, Dan

    2006-01-01

    This paper reviews two different approaches that have been proposed to tackle the problems of model bias with the Kalman filter: the use of a colored noise model and the implementation of a separate bias filter. Both filters are implemented with and without feedback of the bias into the model state. The colored noise filter formulation is extended to correct both time-correlated and uncorrelated model error components. A more stable version of the separate filter without feedback is presented. The filters are implemented in an ensemble framework using Latin hypercube sampling. The techniques are illustrated on a simple one-dimensional groundwater problem. The results show that the presented filters outperform the standard Kalman filter and that the implementations with bias feedback work in more general conditions than the implementations without feedback. © 2005 Elsevier Ltd. All rights reserved.
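    The separate-bias idea described above can be illustrated with a minimal augmented-state sketch. This is not the paper's formulation (which uses colored noise models and an ensemble framework): here a scalar state and a persistent bias are estimated jointly by a standard Kalman filter, with the bias fed back into the state prediction. All model parameters and noise levels are illustrative.

    ```python
    import numpy as np

    def bias_aware_kf(ys, a=0.9, q=0.01, r=0.25):
        """Augmented-state Kalman filter: estimate state x and model bias b jointly."""
        F = np.array([[a, 1.0],     # bias feeds back into the state each step
                      [0.0, 1.0]])  # bias modelled as a (near-)constant
        H = np.array([[1.0, 0.0]])  # only x is observed
        Q = np.diag([q, 1e-6])      # tiny bias noise keeps the estimate adaptive
        z, P = np.zeros(2), np.eye(2)
        estimates = []
        for y in ys:
            z = F @ z                    # predict
            P = F @ P @ F.T + Q
            S = float(H @ P @ H.T) + r   # innovation variance
            K = (P @ H.T) / S            # Kalman gain, shape (2, 1)
            z = z + K[:, 0] * (y - z[0])
            P = (np.eye(2) - K @ H) @ P
            estimates.append(z.copy())
        return np.array(estimates)

    # Synthetic data: x_k = 0.9 x_{k-1} + 0.5 + w_k, observed with noise
    rng = np.random.default_rng(0)
    x, ys = 0.0, []
    for _ in range(500):
        x = 0.9 * x + 0.5 + rng.normal(0.0, 0.1)
        ys.append(x + rng.normal(0.0, 0.5))

    est = bias_aware_kf(ys)
    print(est[-1, 1])   # bias estimate, close to the true value 0.5
    ```

    The design choice mirrors the review's "with feedback" variant: because the bias enters the state transition, the corrected bias immediately improves the state prediction, rather than being tracked in a detached filter.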

  19. Simon-nitinol filter

    International Nuclear Information System (INIS)

    Simon, M.; Kim, D.; Porter, D.H.; Kleshinski, S.

    1989-01-01

    This paper discusses a filter that exploits the thermal shape-memory properties of the nitinol alloy to achieve an optimized filter shape and a fine-bore introducer. Experimental methods and materials are given and results are analyzed

  20. MST Filterability Tests

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Burket, P. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Duignan, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2–4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  1. Multivoxel Patterns Reveal Functionally Differentiated Networks Underlying Auditory Feedback Processing of Speech

    DEFF Research Database (Denmark)

    Zheng, Zane Z.; Vicente-Grabovetsky, Alejandro; MacDonald, Ewen N.

    2013-01-01

    The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during...

  2. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf

    Science.gov (United States)

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z.; Zhang, Fan; Gonçalves, Óscar F.; Fang, Fang; Bi, Yanchao

    2016-01-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents visual field location of a stimulus—a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex. PMID:26423461
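    The decoding analyses described in the records above can be illustrated with a minimal correlation-based multivoxel pattern classifier. This is a toy sketch, not either study's pipeline: the "voxel patterns" below are synthetic, the class names ("left", "right") are hypothetical stand-ins for stimulus locations, and real analyses use cross-validated fMRI response patterns.

    ```python
    import numpy as np

    def train_templates(patterns, labels):
        """Average the training patterns per class to form template patterns."""
        classes = sorted(set(labels))
        return {c: np.mean([p for p, l in zip(patterns, labels) if l == c], axis=0)
                for c in classes}

    def decode(pattern, templates):
        """Assign the class whose template correlates most with the test pattern."""
        scores = {c: np.corrcoef(pattern, t)[0, 1] for c, t in templates.items()}
        return max(scores, key=scores.get)

    rng = np.random.default_rng(1)
    n_vox = 50
    # Two hypothetical stimulus locations, each with its own mean voxel pattern
    proto = {"left": rng.normal(0, 1, n_vox), "right": rng.normal(0, 1, n_vox)}
    train = [(proto[c] + rng.normal(0, 0.5, n_vox), c)
             for c in ("left", "right") for _ in range(20)]
    templates = train_templates([p for p, _ in train], [c for _, c in train])

    # Decode a new noisy "left" pattern
    test_pattern = proto["left"] + rng.normal(0, 0.5, n_vox)
    predicted = decode(test_pattern, templates)
    print(predicted)   # prints "left"
    ```

    The point of the sketch is the logic of the claim in the abstract: if class identity can be recovered from activity patterns above chance, the region carries information about that stimulus dimension.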

  3. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  4. 21 CFR 868.5260 - Breathing circuit bacterial filter.

    Science.gov (United States)

    2010-04-01

    21 CFR 868.5260, Food and Drugs, Food and Drug Administration, Department of Health and Human Services: Breathing circuit bacterial filter. (a) Identification. A breathing circuit bacterial filter is a device that is intended to remove...

  5. Motion processing after sight restoration: No competition between visual recovery and auditory compensation.

    Science.gov (United States)

    Bottari, Davide; Kekunnaya, Ramesh; Hense, Marlene; Troje, Nikolaus F; Sourav, Suddha; Röder, Brigitte

    2018-02-15

    The present study tested whether or not functional adaptations following congenital blindness are maintained in humans after sight restoration and whether they interfere with visual recovery. In permanently blind individuals with congenital blindness, both intramodal plasticity (e.g. changes in auditory cortex) and crossmodal plasticity (e.g. an activation of visual cortex by auditory stimuli) have been observed. Both phenomena were hypothesized to contribute to improved auditory functions. For example, it has been shown that early permanently blind individuals outperform sighted controls in auditory motion processing and that auditory motion stimuli elicit activity in typical visual motion areas. Yet it is unknown what happens to these behavioral adaptations and cortical reorganizations when sight is restored, that is, whether compensatory auditory changes are lost and to what degree visual motion processing is reinstated. Here we employed a combined behavioral-electrophysiological approach in a group of sight-recovery individuals with a history of a transient phase of congenital blindness lasting from several months to several years. They, as well as two control groups, one with visual impairments and one normally sighted, were tested in a visual and an auditory motion discrimination experiment. Task difficulty was manipulated by varying the visual motion coherence and the signal-to-noise ratio, respectively. The congenital cataract-reversal individuals showed lower performance in the visual global motion task than both control groups. At the same time, they outperformed both control groups in auditory motion processing, suggesting that at least some compensatory behavioral adaptation as a consequence of a complete blindness from birth was maintained. Alpha oscillatory activity during the visual task was significantly lower in congenital cataract-reversal individuals, and they did not show ERPs modulated by visual motion coherence as observed in both control groups. In

  6. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  7. Influence of different envelope maskers on signal recognition and neuronal representation in the auditory system of a grasshopper.

    Directory of Open Access Journals (Sweden)

    Daniela Neuhofer

    BACKGROUND: Animals that communicate by sound face the problem that the signals arriving at the receiver often are degraded and masked by noise. Frequency filters in the receiver's auditory system may improve the signal-to-noise ratio (SNR) by excluding parts of the spectrum which are not occupied by the species-specific signals. This solution, however, is hardly amenable to species that produce broad band signals or have ears with broad frequency tuning. In mammals auditory filters exist that work in the temporal domain of amplitude modulations (AM). Do insects also use this type of filtering? PRINCIPAL FINDINGS: Combining behavioural and neurophysiological experiments we investigated whether AM filters may improve the recognition of masked communication signals in grasshoppers. The AM pattern of the sound, its envelope, is crucial for signal recognition in these animals. We degraded the species-specific song by adding random fluctuations to its envelope. Six noise bands were used that differed in their overlap with the spectral content of the song envelope. If AM filters contribute to reduced masking, signal recognition should depend on the degree of overlap between the song envelope spectrum and the noise spectra. Contrary to this prediction, the resistance against signal degradation was the same for five of six masker bands. Most remarkably, the band with the strongest frequency overlap to the natural song envelope (0–100 Hz) impaired acceptance of degraded signals the least. To assess the noise filter capacities of single auditory neurons, the changes of spike trains as a function of the masking level were assessed. Increasing levels of signal degradation in different frequency bands led to similar changes in the spike trains in most neurones. CONCLUSIONS: There is no indication that auditory neurones of grasshoppers are specialized to improve the SNR with respect to the pattern of amplitude modulations.
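    The envelope analysis at the heart of this abstract can be sketched numerically: extract the amplitude envelope of an amplitude-modulated signal via the analytic signal (an FFT-based Hilbert transform), then inspect the envelope's own spectrum, the modulation spectrum, whose bands the maskers were designed to overlap. The signal below is synthetic (a 5 kHz carrier modulated at 30 Hz), not a grasshopper song.

    ```python
    import numpy as np

    def envelope(x):
        """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
        n = len(x)
        X = np.fft.fft(x)
        h = np.zeros(n)          # one-sided weighting: zero out negative frequencies
        h[0] = 1.0
        if n % 2 == 0:
            h[n // 2] = 1.0
            h[1:n // 2] = 2.0
        else:
            h[1:(n + 1) // 2] = 2.0
        return np.abs(np.fft.ifft(X * h))

    fs = 20000                                    # sample rate, Hz
    t = np.arange(20000) / fs                     # 1 s of signal
    am = 1.0 + 0.8 * np.sin(2 * np.pi * 30 * t)   # 30 Hz amplitude modulation
    song = am * np.sin(2 * np.pi * 5000 * t)      # 5 kHz carrier, toy AM "song"

    env = envelope(song)
    spec = np.abs(np.fft.rfft(env - env.mean()))  # modulation (envelope) spectrum
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    peak_hz = freqs[np.argmax(spec)]
    print(peak_hz)   # dominant envelope frequency, 30 Hz
    ```

    Band-limited noise added to `env` before remodulating the carrier would correspond to the study's envelope-degradation maskers.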

  8. Rotationally invariant correlation filtering

    International Nuclear Information System (INIS)

    Schils, G.F.; Sweeney, D.W.

    1985-01-01

    A method is presented for analyzing and designing optical correlation filters that have tailored rotational invariance properties. The concept of a correlation of an image with a rotation of itself is introduced. A unified theory of rotation-invariant filtering is then formulated. The unified approach describes matched filters (with no rotation invariance) and circular-harmonic filters (with full rotation invariance) as special cases. The continuum of intermediate cases is described in terms of a cyclic convolution operation over angle. The angular filtering approach allows an exact choice for the continuous trade-off between loss of the correlation energy (or specificity regarding the image) and the amount of rotational invariance desired
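    The invariance trade-off described above can be seen in a minimal one-ring example. This is a numerical sketch, not the paper's optical formulation: a pattern sampled over angle on a single ring is rotated (a cyclic shift), the magnitudes of its angular Fourier coefficients, the circular harmonics, are unchanged, while the full matched correlation, which keeps all harmonics with phase, is not.

    ```python
    import numpy as np

    n = 360                                   # 1-degree angular sampling of one ring
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    pattern = np.cos(3 * theta) + 0.5 * np.cos(7 * theta + 0.4)  # toy angular profile

    rotated = np.roll(pattern, 45)            # rotate the pattern by 45 degrees

    harm = np.fft.fft(pattern)                # circular-harmonic coefficients
    harm_rot = np.fft.fft(rotated)

    # Rotation only multiplies harmonic m by a phase exp(-i*m*delta),
    # so the coefficient magnitudes are fully rotation invariant:
    invariant = np.allclose(np.abs(harm), np.abs(harm_rot))
    print(invariant)                          # True

    # The matched-filter correlation at zero shift, by contrast, drops:
    matched_same = float(pattern @ pattern)
    matched_rot = float(pattern @ rotated)
    print(matched_rot < matched_same)         # True
    ```

    Keeping only selected harmonics (rather than all, or just one) corresponds to the paper's continuum between a fully specific matched filter and a fully rotation-invariant circular-harmonic filter.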

  9. Determination of Human-Health Pharmaceuticals in Filtered Water by Chemically Modified Styrene-Divinylbenzene Resin-Based Solid-Phase Extraction and High-Performance Liquid Chromatography/Mass Spectrometry

    Science.gov (United States)

    Furlong, Edward T.; Werner, Stephen L.; Anderson, Bruce D.; Cahill, Jeffery D.

    2008-01-01

    In 1999, the Methods Research and Development Program of the U.S. Geological Survey National Water Quality Laboratory began the process of developing a method designed to identify and quantify human-health pharmaceuticals in four filtered water-sample types: reagent water, ground water, surface water minimally affected by human contributions, and surface water that contains a substantial fraction of treated wastewater. Compounds derived from human pharmaceutical and personal-care product use, which enter the environment through wastewater discharge, are a newly emerging area of concern; this method was intended to fulfill the need for a highly sensitive and highly selective means to identify and quantify 14 commonly used human pharmaceuticals in filtered-water samples. The concentrations of 12 pharmaceuticals are reported without qualification; the concentrations of two pharmaceuticals are reported as estimates because long-term reagent-spike sample recoveries fall below acceptance criteria for reporting concentrations without qualification. The method uses a chemically modified styrene-divinylbenzene resin-based solid-phase extraction (SPE) cartridge for analyte isolation and concentration. For analyte detection and quantitation, an instrumental method was developed that used a high-performance liquid chromatography/mass spectrometry (HPLC/MS) system to separate the pharmaceuticals of interest from each other and coextracted material. Immediately following separation, the pharmaceuticals are ionized by electrospray ionization operated in the positive mode, and the positive ions produced are detected, identified, and quantified using a quadrupole mass spectrometer. In this method, 1-liter water samples are first filtered, either in the field or in the laboratory, using a 0.7-micrometer (um) nominal pore size glass-fiber filter to remove suspended solids. The filtered samples then are passed through cleaned and conditioned SPE cartridges at a rate of about 15

  10. Occupational Styrene Exposure on Auditory Function Among Adults: A Systematic Review of Selected Workers

    Directory of Open Access Journals (Sweden)

    Francis T. Pleban

    2017-12-01

    A review study was conducted to examine the adverse effects of styrene, styrene mixtures, and styrene and/or styrene mixtures combined with noise on the auditory system of humans employed in occupational settings. The search included peer-reviewed articles published in English involving human volunteers and spanning a 25-year period (1990–2015). Studies included peer-reviewed journal articles, case–control studies, and case reports. Animal studies were excluded. An initial search identified 40 studies. After screening for inclusion, 13 studies were retrieved for full-text examination and review. As a whole, the results range from no association to mild associations between styrene exposure and auditory dysfunction, although sample sizes were relatively small. However, four studies investigating styrene with other organic solvent mixtures and noise suggested that combined exposure to styrene and organic solvent mixtures may be more ototoxic than exposure to noise alone. There is little literature examining the effect of styrene on auditory functioning in humans. Nonetheless, the findings suggest that public health professionals and policy makers should be made aware of future research needs pertaining to hearing impairment and ototoxicity from styrene. It is recommended that chronically styrene-exposed individuals be routinely evaluated with a comprehensive audiological test battery to detect early signs of auditory dysfunction. Keywords: auditory system, human exposure, ototoxicity, styrene

  11. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  12. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  13. Study of different filters

    International Nuclear Information System (INIS)

    Cochinal, R.; Rouby, R.

    1959-01-01

    This note first defines terminology related to filters and their operation, and then gives an overview of general filter characteristics such as pressure drop as a function of gas flow rate, efficiency, and clogging as a function of filter loading. It also describes the standard aerosols generally used, how they are dosed, and how efficiency is determined with a standard aerosol. Then, after presenting the filtration principle, this note reports the study of several filters: glass wool, filter papers provided by different companies, Teflon foam, English filters, Teflon wool, sintered Teflonite, quartz wool, polyvinyl chloride foam, synthetic filter, and sintered bronze. The third part reports the study of some aerosol and dust separators.
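
    The efficiency determination mentioned in the note follows the standard definition, not spelled out in the abstract: efficiency is the fraction of the challenge aerosol retained by the filter, and the decontamination factor is the reciprocal of penetration. A minimal sketch (function names and concentration values are illustrative assumptions):

```python
def filter_efficiency(c_upstream, c_downstream):
    """Fractional collection efficiency from aerosol concentrations
    measured upstream and downstream of the filter."""
    return 1.0 - c_downstream / c_upstream

def decontamination_factor(c_upstream, c_downstream):
    """DF, the reciprocal of penetration, often quoted for nuclear-grade filters."""
    return c_upstream / c_downstream

# Hypothetical concentrations of a standard test aerosol (arbitrary units).
eff = filter_efficiency(1000.0, 0.5)
df = decontamination_factor(1000.0, 0.5)
print(f"efficiency = {eff:.4%}, DF = {df:.0f}")
```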

  14. Changing ventilation filters

    International Nuclear Information System (INIS)

    Hackney, S.

    1980-01-01

    A filter changing unit has a door which interlocks with the door of a filter chamber so as to prevent contamination of the outer surfaces of the doors by radioactive material collected on the filter element, and a movable support which enables a filter element stored thereon to be moved within the unit in such a way that the doors of the unit and the filter chamber can be replaced. The door pivots and interlocks with another door by means of a bolt, and a seal around the peripheral lip of the first door engages the periphery of the second door to seal the gap. A support pivots into a lower filter element storage position. Inspection windows and glove ports are provided. The unit is releasably connected to the filter chamber by bolts engaging in a flange provided around an opening. (author)

  15. Balanced microwave filters

    CERN Document Server

    Hong, Jiasheng; Medina, Francisco; Martín, Ferran

    2018-01-01

    This book presents and discusses strategies for the design and implementation of common-mode suppressed balanced microwave filters, including narrowband, wideband, and ultra-wideband filters. This book examines differential-mode, or balanced, microwave filters by discussing several implementations of practical realizations of these passive components. Topics covered include selective mode suppression, designs based on distributed and semi-lumped approaches, multilayer technologies, defected ground structures, coupled resonators, metamaterials, interference techniques, and substrate integrated waveguides, among others. Divided into five parts, Balanced Microwave Filters begins with an introduction that presents the fundamentals of balanced lines, circuits, and networks. Part 2 covers balanced transmission lines with common-mode noise suppression, including several types of common-mode filters and the application of such filters to enhance common-mode suppression in balanced bandpass filters. Next, Part 3 exa...

  16. Functionally Specific Oscillatory Activity Correlates between Visual and Auditory Cortex in the Blind

    Science.gov (United States)

    Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Röder, Brigitte; Engel, Andreas K.

    2012-01-01

    Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…

  17. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M_age = 53.19 years)…

  18. Auditory and visual evoked potentials during hyperoxia

    Science.gov (United States)

    Smith, D. B. D.; Strawbridge, P. J.

    1974-01-01

    Experimental study of the auditory and visual averaged evoked potentials (AEPs) recorded during hyperoxia, and investigation of the effect of hyperoxia on the so-called contingent negative variation (CNV). No effect of hyperoxia was found on the auditory AEP, the visual AEP, or the CNV. Comparisons with previous studies are discussed.

  19. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  20. Bilateral duplication of the internal auditory canal

    International Nuclear Information System (INIS)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu; Koo, Ja-Won

    2007-01-01

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  1. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive context. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  3. Auditory and visual connectivity gradients in frontoparietal cortex.

    Science.gov (United States)

    Braga, Rodrigo M; Hellyer, Peter J; Wise, Richard J S; Leech, Robert

    2017-01-01

    A frontoparietal network of brain regions is often implicated in both auditory and visual information processing. Although it is possible that the same set of multimodal regions subserves both modalities, there is increasing evidence that there is a differentiation of sensory function within frontoparietal cortex. Magnetic resonance imaging (MRI) in humans was used to investigate whether different frontoparietal regions showed intrinsic biases in connectivity with visual or auditory modalities. Structural connectivity was assessed with diffusion tractography and functional connectivity was tested using functional MRI. A dorsal-ventral gradient of function was observed, where connectivity with visual cortex dominates dorsal frontal and parietal connections, while connectivity with auditory cortex dominates ventral frontal and parietal regions. A gradient was also observed along the posterior-anterior axis, although in opposite directions in prefrontal and parietal cortices. The results suggest that the location of neural activity within frontoparietal cortex may be influenced by these intrinsic biases toward visual and auditory processing. Thus, the location of activity in frontoparietal cortex may be influenced as much by stimulus modality as the cognitive demands of a task. It was concluded that stimulus modality was spatially encoded throughout frontal and parietal cortices, and it was speculated that such an arrangement allows for top-down modulation of modality-specific information to occur within higher-order cortex. This could provide a potentially faster and more efficient pathway by which top-down selection between sensory modalities could occur, by constraining modulations to within frontal and parietal regions, rather than long-range connections to sensory cortices. Hum Brain Mapp 38:255-270, 2017. © 2016 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.

  4. Dissection of the Auditory Bulla in Postnatal Mice: Isolation of the Middle Ear Bones and Histological Analysis.

    Science.gov (United States)

    Sakamoto, Ayako; Kuroda, Yukiko; Kanzaki, Sho; Matsuo, Koichi

    2017-01-04

    In most mammals, auditory ossicles in the middle ear, including the malleus, incus and stapes, are the smallest bones. In mice, a bony structure called the auditory bulla houses the ossicles, whereas the auditory capsule encloses the inner ear, namely the cochlea and semicircular canals. Murine ossicles are essential for hearing and thus of great interest to researchers in the field of otolaryngology, but their metabolism, development, and evolution are highly relevant to other fields. Altered bone metabolism can affect hearing function in adult mice, and various gene-deficient mice show changes in morphogenesis of auditory ossicles in utero. Although murine auditory ossicles are tiny, their manipulation is feasible if one understands their anatomical orientation and 3D structure. Here, we describe how to dissect the auditory bulla and capsule of postnatal mice and then isolate individual ossicles by removing part of the bulla. We also discuss how to embed the bulla and capsule in different orientations to generate paraffin or frozen sections suitable for preparation of longitudinal, horizontal, or frontal sections of the malleus. Finally, we enumerate anatomical differences between mouse and human auditory ossicles. These methods would be useful in analyzing pathological, developmental and evolutionary aspects of auditory ossicles and the middle ear in mice.

  5. Efficiency test of filtering methods for the removal of transcranial magnetic stimulation artifacts on human electroencephalography with artificially transcranial magnetic stimulation-corrupted signals

    Science.gov (United States)

    Zilber, Nicolas A.; Katayama, Yoshinori; Iramina, Keiji; Erich, Wintermantel

    2010-05-01

    A new approach is proposed to test the efficiency of methods such as the Kalman filter and independent component analysis (ICA) when applied to remove the artifacts induced by transcranial magnetic stimulation (TMS) from electroencephalography (EEG). Using EEG recordings corrupted by TMS induction, the shape of the artifacts is approximately described with a model based on an equivalent-circuit simulation. These modeled artifacts are subsequently added to other EEG signals, this time not influenced by TMS. The resulting signals are of particular interest because their artifact-free form is also known. They therefore enable a fit test that compares the signals obtained after artifact removal with the original signals. This efficiency test proved very useful for comparing the methods with one another, as well as for determining the filtering parameters that give satisfactory results with automatic ICA.
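
    The test procedure described in this abstract can be sketched end to end with stand-in components: a synthetic "clean" EEG trace, a hypothetical exponentially decaying transient in place of the equivalent-circuit artifact model, and a simple moving-average filter in place of the Kalman filter or ICA. All signal shapes and parameters below are illustrative assumptions, not values from the paper:

```python
import math

def make_clean_eeg(n, fs=1000.0):
    """Synthetic 'clean' EEG: a 10 Hz alpha-like oscillation (illustrative only)."""
    return [math.sin(2 * math.pi * 10.0 * t / fs) for t in range(n)]

def make_tms_artifact(n, fs=1000.0, amp=50.0, tau=0.002):
    """Hypothetical modeled TMS artifact: an exponentially decaying transient,
    standing in for the equivalent-circuit model described in the abstract."""
    return [amp * math.exp(-(t / fs) / tau) for t in range(n)]

def moving_average(x, k=9):
    """Deliberately simple stand-in for the tested removal methods (Kalman, ICA)."""
    out = []
    for i in range(len(x)):
        lo = max(0, i - k // 2)
        hi = min(len(x), i + k // 2 + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def rmse(a, b):
    """Root-mean-square error: the 'fit test' against the known clean signal."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

n = 200
clean = make_clean_eeg(n)
corrupted = [c + a for c, a in zip(clean, make_tms_artifact(n))]
filtered = moving_average(corrupted)

# Because the artifact-free signal is known, removal quality can be scored directly.
print(f"RMSE before filtering: {rmse(corrupted, clean):.3f}")
print(f"RMSE after filtering:  {rmse(filtered, clean):.3f}")
```

    The key property the abstract relies on is visible here: since the pseudo-artifact is added to a signal whose uncorrupted form is known, any candidate removal method can be scored with a direct fit test rather than by visual inspection.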

  6. Filter forensics: microbiota recovery from residential HVAC filters.

    Science.gov (United States)

    Maestre, Juan P; Jennings, Wiley; Wylie, Dennis; Horner, Sharon D; Siegel, Jeffrey; Kinney, Kerry A

    2018-01-30

    Establishing reliable methods for assessing the microbiome within the built environment is critical for understanding the impact of biological exposures on human health. High-throughput DNA sequencing of dust samples provides valuable insights into the microbiome present in human-occupied spaces. However, the effect that different sampling methods have on the microbial community recovered from dust samples is not well understood across sample types. Heating, ventilation, and air conditioning (HVAC) filters hold promise as long-term, spatially integrated, high volume samplers to characterize the airborne microbiome in homes and other climate-controlled spaces. In this study, the effect that dust recovery method (i.e., cut and elution, swabbing, or vacuuming) has on the microbial community structure, membership, and repeatability inferred by Illumina sequencing was evaluated. The results indicate that vacuum samples captured higher quantities of total, bacterial, and fungal DNA than swab or cut samples. Repeated swab and vacuum samples collected from the same filter were less variable than cut samples with respect to both quantitative DNA recovery and bacterial community structure. Vacuum samples captured substantially greater bacterial diversity than the other methods, whereas fungal diversity was similar across all three methods. Vacuum and swab samples of HVAC filter dust were repeatable and generally superior to cut samples. Nevertheless, the contribution of environmental and human sources to the bacterial and fungal communities recovered via each sampling method was generally consistent across the methods investigated. Dust recovery methodologies have been shown to affect the recovery, repeatability, structure, and membership of microbial communities recovered from dust samples in the built environment. The results of this study are directly applicable to indoor microbiota studies utilizing the filter forensics approach. More broadly, this study provides a

  7. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Full Text Available Introduction: During aging, all structures of the organism change, affecting the quality of hearing and comprehension. The hearing loss that occurs as a consequence of this process reduces communicative function and also leads to withdrawal from social relationships. Objective: To compare the performance of temporal auditory processing between elderly individuals with and without hearing loss. Method: The present study is a prospective, cross-sectional, diagnostic field study. Twenty-one elders (16 women and 5 men, aged 60 to 81 years) were analyzed, divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss ranging in degree from mild to moderately severe. Both groups performed the frequency (PPS) and duration (DPS) pattern tests, to evaluate the ability of temporal sequencing, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated by the PPS test in the "humming" condition, and this difference increased significantly with age group. Conclusion: There was no difference in temporal auditory processing in the comparison between the groups.

  8. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Full Text Available Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject’s forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
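
    The spatial-bisection procedure lends itself to a simple simulated-observer sketch: each perceived sound position is corrupted by Gaussian noise, and recalibration is modeled as a reduction of that noise, which lowers the offset needed for a given accuracy. All positions, noise levels, and trial counts below are illustrative assumptions, not parameters from the study:

```python
import random

def bisection_trial(offset_deg, sigma_deg, rng):
    """One spatial-bisection trial: sounds at -30, offset, and +30 degrees.
    The simulated observer encodes each position with Gaussian noise and
    reports whether the middle sound seemed closer to the first or third."""
    s1, s2, s3 = -30.0, offset_deg, 30.0
    p1 = s1 + rng.gauss(0.0, sigma_deg)
    p2 = s2 + rng.gauss(0.0, sigma_deg)
    p3 = s3 + rng.gauss(0.0, sigma_deg)
    response = "first" if abs(p2 - p1) < abs(p3 - p2) else "third"
    correct = "first" if offset_deg < 0 else "third"
    return response == correct

def percent_correct(offset_deg, sigma_deg, trials=2000, seed=1):
    rng = random.Random(seed)  # fixed seed for repeatability
    hits = sum(bisection_trial(offset_deg, sigma_deg, rng) for _ in range(trials))
    return hits / trials

# A 'recalibrated' observer (smaller sigma) is more accurate at the same
# offset -- equivalently, it has a lower bisection threshold.
for sigma in (10.0, 4.0):
    print(f"sigma = {sigma:4.1f} deg : P(correct) at +6 deg offset = "
          f"{percent_correct(6.0, sigma):.2f}")
```

    The sketch makes the recalibration claim concrete: any intervention that shrinks the positional noise (here, sigma) shifts the whole psychometric function, which is what a lowered bisection threshold after audio-tactile feedback would look like.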

  9. Relating binaural pitch perception to the individual listener's auditory profile.

    Science.gov (United States)

    Santurette, Sébastien; Dau, Torsten

    2012-04-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception was linked to a specific deficit, the auditory profiles of the individual listeners were characterized using measures of loudness perception, cognitive ability, binaural processing, temporal fine structure processing, and frequency selectivity, in addition to common audiometric measures. Two of the listeners were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural pitch perception showed deficits in temporal fine structure processing. Whether the observed deficits stemmed from peripheral or central mechanisms could not be resolved here, but the present findings may be useful for hearing loss characterization.

  10. Optimal filter bandwidth for pulse oximetry

    Science.gov (United States)

    Stuban, Norbert; Niwayama, Masatsugu

    2012-10-01

    Pulse oximeters contain one or more signal filtering stages between the photodiode and microcontroller. These filters are responsible for removing the noise while retaining the useful frequency components of the signal, thus improving the signal-to-noise ratio. The corner frequencies of these filters affect not only the noise level, but also the shape of the pulse signal. Narrow filter bandwidth effectively suppresses the noise; however, at the same time, it distorts the useful signal components by decreasing the harmonic content. In this paper, we investigated the influence of the filter bandwidth on the accuracy of pulse oximeters. We used a pulse oximeter tester device to produce stable, repetitive pulse waves with digitally adjustable R ratio and heart rate. We built a pulse oximeter and attached it to the tester device. The pulse oximeter digitized the current of its photodiode directly, without any analog signal conditioning. We varied the corner frequency of the low-pass filter in the pulse oximeter in the range of 0.66-15 Hz by software. For the tester device, the R ratio was set to R = 1.00, and the R ratio deviation measured by the pulse oximeter was monitored as a function of the corner frequency of the low-pass filter. The results revealed that lowering the corner frequency of the low-pass filter did not decrease the accuracy of the oxygen level measurements. The lowest possible value of the corner frequency of the low-pass filter is the fundamental frequency of the pulse signal. We concluded that the harmonics of the pulse signal do not contribute to the accuracy of pulse oximetry. The results achieved by the pulse oximeter tester were verified by human experiments, performed on five healthy subjects. The results of the human measurements confirmed that filtering out the harmonics of the pulse signal does not degrade the accuracy of pulse oximetry.
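
    The abstract's central finding, that narrowing the filter does not change the measured R ratio because both optical channels pass through the same linear filter, can be illustrated with a toy model: two synthetic PPG channels with identical pulse shape but different AC amplitudes, run through a first-order low-pass at a wide and a narrow corner frequency. All waveforms, frequencies, and amplitudes below are illustrative assumptions:

```python
import math

FS = 100.0      # sampling rate (Hz), assumed
F_PULSE = 1.2   # pulse fundamental, roughly 72 bpm

def pulse_wave(n, scale):
    """Synthetic pulsatile (AC) PPG component: fundamental plus two harmonics.
    'scale' mimics the different AC amplitudes of the red and IR channels."""
    out = []
    for t in range(n):
        s = t / FS
        out.append(scale * (math.sin(2 * math.pi * F_PULSE * s)
                            + 0.4 * math.sin(2 * math.pi * 2 * F_PULSE * s)
                            + 0.2 * math.sin(2 * math.pi * 3 * F_PULSE * s)))
    return out

def one_pole_lowpass(x, fc):
    """First-order IIR low-pass with corner frequency fc (Hz)."""
    w = 2 * math.pi * fc / FS
    alpha = w / (w + 1.0)
    y, acc = [], 0.0
    for v in x:
        acc += alpha * (v - acc)
        y.append(acc)
    return y

def ac_amplitude(x):
    """Peak-to-peak amplitude / 2 of the pulsatile component."""
    return (max(x) - min(x)) / 2.0

n = 2000
red, ir = pulse_wave(n, scale=1.0), pulse_wave(n, scale=2.0)
for fc in (15.0, 1.5):  # wide vs. narrow filter bandwidth
    r = ac_amplitude(one_pole_lowpass(red, fc)[500:])  # skip settling transient
    i = ac_amplitude(one_pole_lowpass(ir, fc)[500:])
    print(f"fc = {fc:4.1f} Hz  ->  AC ratio red/IR = {r / i:.3f}")
```

    Because the same linear filter is applied to both channels, the AC ratio is unchanged (0.500 at both corner frequencies) even though the narrow filter strips the harmonics, which mirrors the abstract's conclusion that the harmonics do not contribute to the accuracy of the R ratio.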

  11. Filter material charging apparatus for filter assembly for radioactive contaminants

    International Nuclear Information System (INIS)

    Goldsmith, J.M.; O'Nan, A. Jr.

    1977-01-01

    A filter charging apparatus for a filter assembly is described. The filter assembly includes a housing with at least one filter bed therein, and the filter charging apparatus for adding filter material to the filter assembly includes a tank with an opening therein, the tank opening being disposed in flow communication with opposed first and second conduit means, the first conduit means being in flow communication with the filter assembly housing and the second conduit means being in flow communication with a blower means. Upon activation of the blower means, the blower means pneumatically conveys the filter material from the tank to the filter housing.

  12. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Full Text Available Pegah Afra, Michael Funke, Fumisuke Matsuo, Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross modal

  13. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Full Text Available Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  14. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Science.gov (United States)

    Wilf, Meytal; Ramot, Michal; Furman-Haran, Edna; Arzi, Anat; Levkovitz, Yechiel; Malach, Rafael

    2016-01-01

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures validated independently on 20 participants using concurrent EEG and heart rate measurements and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states, primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep, and the high order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness to linguistic auditory stimuli during NREM sleep is linked to diminished activity in high order processing stations.

  15. Assessment of auditory cortical function in cochlear implant patients using 15O PET

    International Nuclear Information System (INIS)

    Young, J.P.; O'Sullivan, B.T.; Gibson, W.P.; Sefton, A.E.; Mitchell, T.E.; Sanli, H.; Cervantes, R.; Withall, A.; Royal Prince Alfred Hospital, Sydney,

    1998-01-01

    Full text: Cochlear implantation has been an extraordinarily successful method of restoring hearing and the potential for full language development in pre-lingually and post-lingually deaf individuals (Gibson 1996). Post-lingually deaf patients, who develop their hearing loss later in life, respond best to cochlear implantation within the first few years of their deafness, but are less responsive to implantation after several years of deafness (Gibson 1996). In pre-lingually deaf children, cochlear implantation is most effective in allowing the full development of language skills when performed within a critical period, in the first 8 years of life. These clinical observations suggest considerable neural plasticity of the human auditory cortex in acquiring and retaining language skills (Gibson 1996, Buchwald 1990). Currently, electrocochleography is used to determine the integrity of the auditory pathways to the auditory cortex. However, the functional integrity of the auditory cortex itself cannot be determined by this method. We have defined the extent of activation of the auditory cortex and auditory association cortex in 6 normal controls and 6 cochlear implant patients using 15O PET functional brain imaging methods. Preliminary results have indicated the potential clinical utility of 15O PET cortical mapping in the pre-surgical assessment and post-surgical follow-up of cochlear implant patients. Copyright (1998) Australian Neuroscience Society

  16. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  17. Auditory cortical function during verbal episodic memory encoding in Alzheimer's disease.

    Science.gov (United States)

    Dhanjal, Novraj S; Warren, Jane E; Patel, Maneesh C; Wise, Richard J S

    2013-02-01

    Episodic memory encoding of a verbal message depends upon initial registration, which requires sustained auditory attention followed by deep semantic processing of the message. Motivated by previous data demonstrating modulation of auditory cortical activity during sustained attention to auditory stimuli, we investigated the response of the human auditory cortex during encoding of sentences to episodic memory. Subsequently, we investigated this response in patients with mild cognitive impairment (MCI) and probable Alzheimer's disease (pAD). Using functional magnetic resonance imaging, 31 healthy participants were studied. The response in 18 MCI and 18 pAD patients was then determined, and compared to 18 matched healthy controls. Subjects heard factual sentences, and subsequent retrieval performance indicated successful registration and episodic encoding. The healthy subjects demonstrated that suppression of auditory cortical responses was related to greater success in encoding heard sentences; and that this was also associated with greater activity in the semantic system. In contrast, there was reduced auditory cortical suppression in patients with MCI, and absence of suppression in pAD. Administration of a central cholinesterase inhibitor (ChI) partially restored the suppression in patients with pAD, and this was associated with an improvement in verbal memory. Verbal episodic memory impairment in AD is associated with altered auditory cortical function, reversible with a ChI. Although these results may indicate the direct influence of pathology in auditory cortex, they are also likely to indicate a partially reversible impairment of feedback from neocortical systems responsible for sustained attention and semantic processing. Copyright © 2012 American Neurological Association.

  18. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    Science.gov (United States)

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, whereas the MMN amplitude increased as a function of deviance magnitude. These findings indicated that acoustic regularity can be encoded at the level of the middle-latency response but that it takes a higher step in the auditory hierarchy for deviance magnitude to be encoded, thus providing a functional dissociation between regularity encoding and deviance detection along the auditory hierarchy. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  19. Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.

    Science.gov (United States)

    Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W

    2011-03-08

    How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
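
    The masker construction described above (white noise with a ±1/6-octave spectral notch around the tone frequency, amplitude-modulated for "frequency tagging") can be sketched in a few lines. This is a simplified illustration, not the authors' stimulus code: it applies a brick-wall FFT notch rather than a graded filter, and the sampling rate, duration, and modulation depth are arbitrary assumptions.

    ```python
    import numpy as np

    def notched_noise(f0, fs=16000, dur=1.0, notch_oct=1 / 6, seed=0):
        """White noise with a brick-wall spectral notch of +/- notch_oct
        octaves around the center frequency f0 (Hz)."""
        n = int(fs * dur)
        noise = np.random.default_rng(seed).standard_normal(n)
        spec = np.fft.rfft(noise)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        lo, hi = f0 * 2.0 ** -notch_oct, f0 * 2.0 ** notch_oct
        spec[(freqs >= lo) & (freqs <= hi)] = 0.0  # zero out the notch band
        return np.fft.irfft(spec, n)

    def am_tag(signal, fs, rate):
        """Sinusoidal amplitude modulation used as a frequency tag."""
        t = np.arange(len(signal)) / fs
        return signal * (1.0 + np.sin(2.0 * np.pi * rate * t))

    # Masker notched around a 1-kHz tone, tagged at 39 Hz (41 Hz for the other ear):
    masker = am_tag(notched_noise(1000.0), fs=16000, rate=39.0)
    ```

    Because the notch is imposed directly in the frequency domain, the tone at f0 falls in a band where the masker has essentially no energy; the AM envelope then lets masker-evoked steady-state responses be identified at the tagging rate.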

  20. Comparison of perceptual properties of auditory streaming between spectral and amplitude modulation domains.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2017-07-01

    The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔF_TONE, TONE condition) but also in the amplitude modulation rate ("AM cue": ΔF_AM, AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔF_AM and ΔF_TONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  1. Fast bilateral filtering of CT-images

    Energy Technology Data Exchange (ETDEWEB)

    Steckmann, Sven; Baer, Matthias; Kachelriess, Marc [Erlangen-Nuernberg Univ., Erlangen (Germany). Inst. of Medical Physics (IMP)

    2011-07-01

    The bilateral filter is able to achieve a lower noise level while retaining the edges in images. Its downside is the high order of the problem itself: for a volume of size N with dimension d and a filter window of size r, the problem is of size N^d · r^d. The literature contains proposals for speeding up the filter by approximating one of its components, but these lead to inaccurate results, which often imply unacceptable artifacts for medical imaging. A better way for medical imaging is to speed up the filter itself while leaving its basic structure intact, and this is the approach our implementation takes. We solve the problem of calculating the function e^(-x) efficiently on modern architectures, and the problem of vectorizing the filtering process. As a result, we implemented a filter that is 2.5 times faster than the highly optimized basic approach. Comparing the basic analytical approach with the final algorithm, the differences in the quality of the computed result are negligible to the human eye. We are able to process a volume of 512³ voxels with a filter of 25 x 25 x 1 in 21 s on a modern Intel Xeon platform with two X5590 processors running at 3.33 GHz. (orig.)
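
    The naive N^d · r^d structure that the authors accelerate can be made concrete with a reference implementation. The sketch below is a plain 2-D bilateral filter, not the paper's optimized vectorized code: each window offset contributes a Gaussian spatial weight multiplied by a Gaussian range weight on intensity differences, and the weighted sum is normalized per pixel.

    ```python
    import numpy as np

    def bilateral_filter(img, radius, sigma_s, sigma_r):
        """Naive bilateral filter on a 2-D image: O(N^2 * r^2) work."""
        H, W = img.shape
        padded = np.pad(img, radius, mode="edge")
        acc = np.zeros((H, W), dtype=float)    # weighted intensity sum
        wsum = np.zeros((H, W), dtype=float)   # normalization weights
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                shifted = padded[radius + dy : radius + dy + H,
                                 radius + dx : radius + dx + W]
                # Gaussian weight on spatial distance to the neighbor
                w_spatial = np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                # Gaussian weight on intensity difference (edge-preserving term)
                w_range = np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2))
                w = w_spatial * w_range
                acc += w * shifted
                wsum += w
        return acc / wsum
    ```

    With a small range sigma, neighbors on the far side of an intensity edge receive near-zero weight, which is why the filter smooths noise without blurring edges.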

  2. Method for Dissecting the Auditory Epithelium (Basilar Papilla) in Developing Chick Embryos.

    Science.gov (United States)

    Levic, Snezana; Yamoah, Ebenezer N

    2016-01-01

    Chickens are an invaluable model for exploring auditory physiology. Similar to humans, the chicken inner ear is morphologically and functionally close to maturity at the time of hatching. In contrast, chicks can regenerate hearing, an ability lost in all mammals, including humans. The extensive morphological, physiological, behavioral, and pharmacological data available, regarding normal development in the chicken auditory system, has driven the progress of the field. The basilar papilla is an attractive model system to study the developmental mechanisms of hearing. Here, we describe the dissection technique for isolating the basilar papilla in developing chick inner ear. We also provide detailed examples of physiological (patch clamping) experiments using this preparation.

  3. Concentric Split Flow Filter

    Science.gov (United States)

    Stapleton, Thomas J. (Inventor)

    2015-01-01

    A concentric split flow filter may be configured to remove odor and/or bacteria from pumped air used to collect urine and fecal waste products. For instance, the filter may be designed to make effective use of the volume, previously considered wasted, surrounding the transport tube of a waste management system. The concentric split flow filter may be configured to split the air flow, with substantially half of the air flow to be treated traveling through a first bed of filter media and substantially the other half traveling through a second bed of filter media. This split flow design reduces the air velocity by 50%. In this way, the pressure drop of the filter may be reduced by as much as a factor of 4 compared with the conventional design.
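
    The quoted factor-of-4 reduction follows from halving the face velocity if pressure drop scales roughly with the square of velocity; that quadratic exponent is an assumption on our part, not stated in the record. A minimal sketch of the arithmetic:

    ```python
    def pressure_drop_ratio(velocity_ratio, exponent=2.0):
        """Relative pressure drop when face velocity is scaled by
        velocity_ratio, assuming dP ~ v**exponent (exponent assumed)."""
        return velocity_ratio ** exponent

    # Splitting the air flow across two beds halves the face velocity,
    # so the pressure drop falls to 0.5**2 = 0.25 of its original value.
    ratio = pressure_drop_ratio(0.5)  # -> 0.25, i.e. a factor-of-4 reduction
    ```

    In a fully viscous (Darcy) regime the exponent would be closer to 1, giving only a factor-of-2 reduction, so the claimed factor of 4 implies inertial losses dominate.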

  4. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. 
The results imply that auditory cortical

  5. Hybrid Filter Membrane

    Science.gov (United States)

    Laicer, Castro; Rasimick, Brian; Green, Zachary

    2012-01-01

    Cabin environmental control is an important issue for a successful Moon mission. Due to the unique environment of the Moon, lunar dust control is one of the main problems that significantly diminishes the air quality inside spacecraft cabins. Therefore, this innovation was motivated by NASA's need to minimize the negative health impact that air-suspended lunar dust particles have on astronauts in spacecraft cabins. It is based on fabrication of a hybrid filter comprising nanofiber nonwoven layers coated on porous polymer membranes with uniform cylindrical pores. This design results in a high-efficiency gas particulate filter with low pressure drop and the ability to be easily regenerated to restore filtration performance. A hybrid filter was developed consisting of a porous membrane with uniform, micron-sized, cylindrical pore channels coated with a thin nanofiber layer. Compared to conventional filter media such as a high-efficiency particulate air (HEPA) filter, this filter is designed to provide high particle efficiency, low pressure drop, and the ability to be regenerated. These membranes have well-defined micron-sized pores and can be used independently as air filters with a discrete particle-size cut-off, or coated with nanofiber layers for filtration of ultrafine nanoscale particles. The filter has a thin design intended to facilitate filter regeneration by localized air pulsing. The main feature of this invention is the combination of a micro-engineered straight-pore membrane with nanofibers. The micro-engineered straight-pore membrane can be prepared with extremely high precision. Because the resulting membrane pores are straight and not tortuous like those found in conventional filters, the pressure drop across the filter is significantly reduced. The nanofiber layer is applied as a very thin coating to enhance filtration efficiency for fine nanoscale particles. 
Additionally, the thin nanofiber coating is designed to promote capture of

  6. Auditory memory function in expert chess players.

    Science.gov (United States)

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high level cognition such as memory, attention, focus and problem solving. Long term practice of chess can improve cognition performances and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long term chess playing like other behavioral skills because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. The mean score of the dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ears scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performances due to playing chess for a long time.

  7. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  8. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas, and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  9. Backflushable filter insert

    International Nuclear Information System (INIS)

    Keith, R.C.; Vandenberg, T.; Randolph, M.C.; Lewis, T.B.; Gillis, P.J. Jr.

    1988-01-01

    Filter elements are mounted on a tube plate beneath an accumulator chamber whose wall is extended by a skirt and flange to form a closure for the top of the pressure vessel. The accumulator chamber is annular around a central pipe which serves as the outlet for filtered water passing from the filter elements. The chamber contains filtered compressed air from a supply. Periodically the filtration of water is stopped and the vessel is drained. Then a valve is opened, allowing the accumulated air to flow from the chamber up one pipe and down another, pushing the filtered water back through the filter elements to clean them. The accumulator chamber is so proportioned, relative to the volume of the system communicating with it during backflushing, that the equilibrium pressure during backflushing cannot exceed the pressure rating of the vessel. However, a line monitors the pressure at the top of the vessel, and if it rises too far a bleed valve is automatically opened to depressurise the system. The chamber is intended to replace the lid of an existing vessel, converting a filter using filter aid into one using permanent filter elements. (author)

  10. Updating the OMERACT filter

    DEFF Research Database (Denmark)

    Wells, George; Beaton, Dorcas E; Tugwell, Peter

    2014-01-01

    The "Discrimination" part of the OMERACT Filter asks whether a measure discriminates between situations that are of interest. "Feasibility" in the OMERACT Filter encompasses the practical considerations of using an instrument, including its ease of use, time to complete, monetary costs, and interpretability of the question(s) included in the instrument. Both the Discrimination and Reliability parts of the filter have been helpful but were agreed on primarily by consensus of OMERACT participants rather than through explicit evidence-based guidelines. In Filter 2.0 we wanted to improve this definition...

  11. Nanofiber Filters Eliminate Contaminants

    Science.gov (United States)

    2009-01-01

    With support from Phase I and II SBIR funding from Johnson Space Center, Argonide Corporation of Sanford, Florida tested and developed its proprietary nanofiber water filter media. Capable of removing more than 99.99 percent of dangerous particles like bacteria, viruses, and parasites, the media was incorporated into the company's commercial NanoCeram water filter, an inductee into the Space Foundation's Space Technology Hall of Fame. In addition to its drinking water filters, Argonide now produces large-scale nanofiber filters used as part of the reverse osmosis process for industrial water purification.

  12. Filters in nuclear facilities

    International Nuclear Information System (INIS)

    Berg, K.H.; Wilhelm, J.G.

    1985-01-01

    The topics of the nine papers given include the behavior of HEPA filters during exposure to air flows of high humidity as well as of high differential pressure, the development of steel-fiber filters suitable for extreme operating conditions, and the occurrence of various radioactive iodine species in the exhaust air from boiling water reactors. In an introductory presentation the German view of the performance requirements to be met by filters in nuclear facilities as well as the present status of filter quality assurance are discussed. (orig.)

  13. Washing method of filter

    International Nuclear Information System (INIS)

    Izumidani, Masakiyo; Tanno, Kazuo.

    1978-01-01

    Purpose: To enable automatic filter operation and facilitate back-washing operation by back-washing filters used in a BWR nuclear power plant utilizing exhaust gas from a ventilator or air conditioner. Method: Exhaust gas from the exhaust pipe of a ventilator or air conditioner is pressurized in a compressor and then introduced into a back-washing gas tank. Then, the exhaust gas, pressurized to a predetermined pressure, is blown from the inside to the outside of a filter to separate impurities collected on the filter elements and carry them to a waste tank. (Furukawa, Y.)

  14. Automatic phoneme category selectivity in the dorsal auditory stream.

    Science.gov (United States)

    Chevillet, Mark A; Jiang, Xiong; Rauschecker, Josef P; Riesenhuber, Maximilian

    2013-03-20

    Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.

  15. Neural correlates of auditory temporal predictions during sensorimotor synchronization

    Directory of Open Access Journals (Sweden)

    Nadine ePecenka

    2013-08-01

    Full Text Available Musical ensemble performance requires temporally precise interpersonal action coordination. To play in synchrony, ensemble musicians presumably rely on anticipatory mechanisms that enable them to predict the timing of sounds produced by co-performers. Previous studies have shown that individuals differ in their ability to predict upcoming tempo changes in paced finger-tapping tasks (indexed by cross-correlations between tap timing and pacing events) and that the degree of such prediction influences the accuracy of sensorimotor synchronization (SMS) and interpersonal coordination in dyadic tapping tasks. The current functional magnetic resonance imaging study investigated the neural correlates of auditory temporal predictions during SMS in a within-subject design. Hemodynamic responses were recorded from 18 musicians while they tapped in synchrony with auditory sequences containing gradual tempo changes under conditions of varying cognitive load (achieved by a simultaneous visual n-back working-memory task comprising three levels of difficulty: observation only, 1-back, and 2-back object comparisons). Prediction ability during SMS decreased with increasing cognitive load. Results of a parametric analysis revealed that the generation of auditory temporal predictions during SMS recruits (1) a distributed network in cortico-cerebellar motor-related brain areas (left dorsal premotor and motor cortex, right lateral cerebellum, SMA proper and bilateral inferior parietal cortex) and (2) medial cortical areas (medial prefrontal cortex, posterior cingulate cortex). While the first network is presumably involved in basic sensory prediction, sensorimotor integration, motor timing, and temporal adaptation, activation in the second set of areas may be related to higher-level social-cognitive processes elicited during action coordination with auditory signals that resemble music performed by human agents.

  16. Dissociation of Detection and Discrimination of Pure Tones following Bilateral Lesions of Auditory Cortex

    Science.gov (United States)

    Dykstra, Andrew R.; Koh, Christine K.; Braida, Louis D.; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5±2.1 dB in the left ear and 6.5±1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6±0.22 dB; right ear: 1.7±0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed. PMID:22957087
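
    To put the reported just-noticeable differences on an intensity scale: a level difference ΔL in dB corresponds to an intensity ratio of 10^(ΔL/10), so the implied Weber fraction is 10^(ΔL/10) − 1. The helper below is our own illustration, not part of the study; only the 6.5 dB and 1.6 dB values come from the abstract.

    ```python
    def weber_fraction(delta_level_db):
        """Intensity Weber fraction dI/I implied by a level JND in dB."""
        return 10.0 ** (delta_level_db / 10.0) - 1.0

    # Patient's ~6.5 dB JND vs. controls' ~1.6 dB JND (left ear):
    patient = weber_fraction(6.5)   # ~3.47: a ~350% intensity change needed
    control = weber_fraction(1.6)   # ~0.45: a ~45% intensity change suffices
    ```

    The conversion makes the deficit vivid: the lesioned listener needed an intensity increment roughly eight times larger, in relative terms, than healthy controls.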

  17. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Science.gov (United States)

    Dykstra, Andrew R; Koh, Christine K; Braida, Louis D; Tramo, Mark Jude

    2012-01-01

    It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old-world monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

  18. Dissociation of detection and discrimination of pure tones following bilateral lesions of auditory cortex.

    Directory of Open Access Journals (Sweden)

    Andrew R Dykstra

    Full Text Available It is well known that damage to the peripheral auditory system causes deficits in tone detection as well as pitch and loudness perception across a wide range of frequencies. However, the extent to which the auditory cortex plays a critical role in these basic aspects of spectral processing, especially with regard to speech, music, and environmental sound perception, remains unclear. Recent experiments indicate that primary auditory cortex is necessary for the normally-high perceptual acuity exhibited by humans in pure-tone frequency discrimination. The present study assessed whether the auditory cortex plays a similar role in the intensity domain and contrasted its contribution to sensory versus discriminative aspects of intensity processing. We measured intensity thresholds for pure-tone detection and pure-tone loudness discrimination in a population of healthy adults and a middle-aged man with complete or near-complete lesions of the auditory cortex bilaterally. Detection thresholds in his left and right ears were 16 and 7 dB HL, respectively, within clinically-defined normal limits. In contrast, the intensity threshold for monaural loudness discrimination at 1 kHz was 6.5 ± 2.1 dB in the left ear and 6.5 ± 1.9 dB in the right ear at 40 dB sensation level, well above the means of the control population (left ear: 1.6 ± 0.22 dB; right ear: 1.7 ± 0.19 dB). The results indicate that auditory cortex lowers just-noticeable differences for loudness discrimination by approximately 5 dB but is not necessary for tone detection in quiet. Previous human and Old World monkey experiments employing lesion-effect, neurophysiology, and neuroimaging methods to investigate the role of auditory cortex in intensity processing are reviewed.

  19. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory feature dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1. Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
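
The core ingredients named in the abstract (mutual inhibition, slow adaptation, noise) can be sketched as a minimal two-unit firing-rate network. This is a generic Laing–Chow-style sketch with parameter values chosen by us for illustration; it is not the authors' published implementation and omits their NMDA-based temporal memory:

```python
import numpy as np

def simulate_bistable(T=60.0, dt=0.002, I=0.6, beta=1.1, g=1.0,
                      tau=0.02, tau_a=1.0, sigma=0.03, seed=0):
    """Two-unit firing-rate competition: mutual inhibition, slow adaptation,
    and noise yield spontaneous alternations in dominance."""
    rng = np.random.default_rng(seed)
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 0.2) / 0.1))   # sigmoidal gain
    n = int(round(T / dt))
    r = np.zeros((n, 2))   # unit 0 ~ "integrated", unit 1 ~ "segregated"
    a = np.zeros(2)        # slow adaptation variables
    for t in range(1, n):
        drive = f(I - beta * r[t - 1, ::-1] - g * a)        # cross-inhibition
        r[t] = (r[t - 1] + dt / tau * (drive - r[t - 1])
                + sigma * np.sqrt(dt) * rng.standard_normal(2))
        a += dt / tau_a * (r[t] - a)
    return r

r = simulate_bistable()
dominance = r[:, 0] > r[:, 1]               # which percept wins at each step
switches = int(np.count_nonzero(np.diff(dominance)))
```

With these (assumed) parameters, whichever unit dominates slowly adapts, releasing its competitor, which produces the irregular alternations in dominance that the abstract describes.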

  20. Assessment of thermal effects in a model of the human head implanted with a wireless active microvalve for the treatment of glaucoma creating a filtering bleb

    Science.gov (United States)

    Schaumburg, F.; Guarnieri, F. A.

    2017-05-01

    A 3D anatomical computational model is developed to assess thermal effects due to exposure to the electromagnetic field required to power a new investigational active implantable microvalve for the treatment of glaucoma. Such a device, located in the temporal superior eye quadrant, produces a filtering bleb, which is included in the geometry of the model, together with the relevant ocular structures. The electromagnetic field source—a planar coil—as well as the microvalve antenna and casing are also included. Exposure of an implanted and a non-implanted subject to the electromagnetic field source is simulated by solving a magnetic potential formulation, using the finite element method. The maximum SAR10 is reached in the eyebrow and remains within the limits suggested by the IEEE and ICNIRP standards. The anterior chamber, filtering bleb, iris and ciliary body are the ocular structures where the most absorption occurs. The temperature rise distribution is also obtained by solving the bioheat equation with the finite element method. The numerical results are compared with the in vivo measurements obtained from four rabbits implanted with the microvalve and exposed to the electromagnetic field source.
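
The thermal half of the pipeline described above solves the Pennes bioheat equation. As a hedged illustration only (a 1D explicit finite-difference solve with generic soft-tissue parameters and an assumed uniform 2 W/kg SAR; the paper itself uses a 3D anatomical FEM model with its own values):

```python
import numpy as np

# Generic soft-tissue parameters (assumed, not the paper's values)
k, rho, c = 0.5, 1050.0, 3600.0          # W/m/K, kg/m^3, J/kg/K
w_b, rho_b, c_b, T_a = 0.5e-3, 1050.0, 3600.0, 37.0  # perfusion (1/s), blood props
sar = 2.0                                 # W/kg, uniform absorbed power (assumed)

L, nx = 0.02, 101                         # 2 cm slab of tissue
dx = L / (nx - 1)
dt = 0.2 * rho * c * dx**2 / k            # well under the explicit stability limit
T = np.full(nx, 37.0)

for _ in range(int(600 / dt)):            # simulate 10 minutes of exposure
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    # Pennes bioheat: conduction + blood-perfusion sink + SAR source
    dT = (k * lap + rho_b * c_b * w_b * (T_a - T) + rho * sar) / (rho * c)
    T[1:-1] += dt * dT[1:-1]              # Dirichlet ends held at 37 °C

rise = T.max() - 37.0                     # peak temperature rise (°C)
```

The perfusion term continuously pulls tissue back toward arterial temperature, which is why even sustained absorption saturates at a modest rise in this toy setting.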

  1. Auditory ossicles from southwest Asian Mousterian sites.

    Science.gov (United States)

    Quam, Rolf; Rak, Yoel

    2008-03-01

    The present study describes and analyzes new Neandertal and early modern human auditory ossicles from the sites of Qafzeh and Amud in southwest Asia. Some methodological issues in the measurement of these bones are considered, and a set of standardized measurement protocols is proposed. Evidence of erosive pathological processes, most likely attributed to otitis media, is present on the ossicles of Qafzeh 12 and Amud 7 but none can be detected in the other Qafzeh specimens. Qafzeh 12 and 15 extend the known range of variation in the fossil H. sapiens sample in some metric variables, but morphologically, the new specimens do not differ in any meaningful way from living humans. In most metric dimensions, the Amud 7 incus falls within our modern human range of variation, but the more closed angle between the short and long processes stands out. Morphologically, all the Neandertal incudes described to date show a very straight long process. Several tentative hypotheses can be suggested regarding the evolution of the ear ossicles in the genus Homo. First, the degree of metric and morphological variation seems greater among the fossil H. sapiens sample than in Neandertals. Second, there is a real difference in the size of the malleus between Neandertals and fossil H. sapiens, with Neandertals showing larger values in most dimensions. Third, the wider malleus head implies a larger articular facet in the Neandertals, and this also appears to be reflected in the larger (taller) incus articular facet. Fourth, there is limited evidence for a potential temporal trend toward reduction of the long process within the Neandertal lineage. Fifth, a combination of features in the malleus, incus, and stapes may indicate a slightly different relative positioning of either the tip of the incus long process or stapes footplate within the tympanic cavity in the Neandertal lineage.

  2. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory verbal memory tests, showed the effects of aging on auditory verbal memory performance.

  3. Particle Filter Tracking without Dynamics

    Directory of Open Access Journals (Sweden)

    Jaime Ortegon-Aguilar

    2007-01-01

    Full Text Available People tracking is an interesting topic in computer vision. It has applications in industrial areas such as surveillance or human-machine interaction. Particle filtering is a common algorithm for people tracking; challenging situations arise when the target's motion is poorly modelled or unexpected. In this paper, an alternative approach to people tracking is presented. The proposed algorithm is based on particle filters, but instead of using a dynamical model, it uses background subtraction to predict future locations of particles. The algorithm is able to track people in omnidirectional sequences with a low frame rate (one or two frames per second). Our approach can tackle unexpected discontinuities and changes in the direction of the motion. The main goal of the paper is to track people in laboratory environments, but the method also has applications in surveillance, mainly in controlled environments.
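
The idea of replacing the dynamical model with background subtraction can be sketched as a toy bootstrap particle filter on a synthetic frame. The blob size, diffusion spread, and all function names below are our assumptions for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def foreground_mask(frame, background, thresh=30):
    """Crude background subtraction: per-pixel absolute difference."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def track_step(particles, frame, background, spread=3.0):
    """One update with no motion model: diffuse particles, weight them by
    whether they land on foreground pixels, then resample."""
    mask = foreground_mask(frame, background)
    particles = particles + rng.normal(0.0, spread, particles.shape)
    particles = np.clip(particles, 0, np.array(frame.shape) - 1)
    ij = particles.round().astype(int)
    w = mask[ij[:, 0], ij[:, 1]].astype(float) + 1e-6   # floor avoids all-zero weights
    w /= w.sum()
    n = len(w)
    u = (rng.random() + np.arange(n)) / n               # systematic resampling
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), n - 1)
    return particles[idx]

# Synthetic demo: a bright 10x10 "person" on a dark 60x60 background.
background = np.zeros((60, 60), dtype=np.uint8)
frame = background.copy()
frame[38:48, 18:28] = 255
particles = rng.uniform(0, 59, size=(500, 2))
for _ in range(10):
    particles = track_step(particles, frame, background)
estimate = particles.mean(axis=0)   # cluster centre, near the bright blob
```

Because the proposal distribution comes from the current foreground mask rather than from a velocity model, abrupt changes of direction cost the tracker nothing, which is the central point of the paper.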

  4. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  5. Psychophysical evidence for auditory motion parallax.

    Science.gov (United States)

    Genzel, Daria; Schutte, Michael; Brimijoin, W Owen; MacNeilage, Paul R; Wiegrebe, Lutz

    2018-04-17

    Distance is important: From an ecological perspective, knowledge about the distance to either prey or predator is vital. However, the distance of an unknown sound source is particularly difficult to assess, especially in anechoic environments. In vision, changes in perspective resulting from observer motion produce a reliable, consistent, and unambiguous impression of depth known as motion parallax. Here we demonstrate with formal psychophysics that humans can exploit auditory motion parallax, i.e., the change in the dynamic binaural cues elicited by self-motion, to assess the relative depths of two sound sources. Our data show that sensitivity to relative depth is best when subjects move actively; performance deteriorates when subjects are moved by a motion platform or when the sound sources themselves move. This is true even though the dynamic binaural cues elicited by these three types of motion are identical. Our data demonstrate a perceptual strategy to segregate intermittent sound sources in depth and highlight the tight interaction between self-motion and binaural processing that allows assessment of the spatial layout of complex acoustic scenes.
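
The geometry behind the cue is easy to make concrete: for a fixed lateral head translation, the induced change in source azimuth shrinks with source distance. A small sketch (our own toy geometry, not the authors' analysis):

```python
import math

def azimuth_change(distance_m, head_shift_m, initial_azimuth_deg=0.0):
    """Change in source azimuth (deg) produced by a lateral head translation.
    Nearer sources yield larger angular shifts -- the parallax cue."""
    az = math.radians(initial_azimuth_deg)
    # Source position in head-centric coordinates before the shift
    x, y = distance_m * math.sin(az), distance_m * math.cos(az)
    new_az = math.atan2(x - head_shift_m, y)   # azimuth after the head moves
    return math.degrees(new_az - az)

near = azimuth_change(distance_m=1.0, head_shift_m=0.1)   # ≈ -5.7 deg
far = azimuth_change(distance_m=4.0, head_shift_m=0.1)    # ≈ -1.4 deg
```

A 10 cm head shift rotates a source at 1 m by about 5.7° but a source at 4 m by only about 1.4°; comparing the dynamic binaural cues produced by the two shifts is what allows relative depth to be judged.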

  6. No auditory experience, no tinnitus: Lessons from subjects with congenital- and acquired single-sided deafness.

    Science.gov (United States)

    Lee, Sang-Yeon; Nam, Dong Woo; Koo, Ja-Won; De Ridder, Dirk; Vanneste, Sven; Song, Jae-Jin

    2017-10-01

    Recent studies have adopted the Bayesian brain model to explain the generation of tinnitus in subjects with auditory deafferentation. That is, as the human brain works in a Bayesian manner to reduce environmental uncertainty, missing auditory information due to hearing loss may cause auditory phantom percepts, i.e., tinnitus. This type of deafferentation-induced auditory phantom percept should be preceded by auditory experience because the fill-in phenomenon, namely tinnitus, is based upon auditory prediction and the resultant prediction error. For example, a recent animal study observed the absence of tinnitus in cats with congenital single-sided deafness (SSD; Eggermont and Kral, Hear Res 2016). However, no human studies have investigated the presence and characteristics of tinnitus in subjects with congenital SSD. Thus, the present study sought to reveal differences in the generation of tinnitus between subjects with congenital SSD and those with acquired SSD to evaluate the replicability of previous animal studies. This study enrolled 20 subjects with congenital SSD and 44 subjects with acquired SSD and examined the presence and characteristics of tinnitus in the groups. None of the 20 subjects with congenital SSD perceived tinnitus on the affected side, whereas 30 of 44 subjects with acquired SSD experienced tinnitus on the affected side. Additionally, there were significant positive correlations between tinnitus characteristics and the audiometric characteristics of the SSD. In accordance with the findings of the recent animal study, tinnitus was absent in subjects with congenital SSD, but relatively frequent in subjects with acquired SSD, which suggests that the development of tinnitus should be preceded by auditory experience. In other words, subjects with profound congenital peripheral deafferentation do not develop auditory phantom percepts because no auditory predictions are available from the Bayesian brain. Copyright © 2017 Elsevier B.V. All rights