WorldWideScience

Sample records for human auditory filters

  1. Sparse gammatone signal model optimized for English speech does not match the human auditory filters.

    Science.gov (United States)

    Strahl, Stefan; Mertins, Alfred

    2008-07-18

Evidence that neurosensory systems use sparse signal representations, together with the improved performance of signal processing algorithms that use sparse signal models, has raised interest in sparse signal coding in recent years. For natural audio signals like speech and environmental sounds, gammatone atoms have been derived as expansion functions that generate a nearly optimal sparse signal model (Smith, E., Lewicki, M., 2006. Efficient auditory coding. Nature 439, 978-982). Furthermore, gammatone functions are established models for the human auditory filters. Thus far, a practical application of a sparse gammatone signal model has been prevented by the fact that deriving the sparsest representation is, in general, computationally intractable. In this paper, we applied an accelerated version of the matching pursuit algorithm for gammatone dictionaries, allowing real-time and large-data-set applications. We show that a sparse signal model in general has advantages in audio coding and that a sparse gammatone signal model encodes speech more efficiently, in terms of sparseness, than a sparse modified discrete cosine transform (MDCT) signal model. We also show that the optimal gammatone parameters derived for English speech do not match the human auditory filters, suggesting that signal processing applications should derive the parameters individually for each applied signal class rather than use psychometrically derived parameters. For brain research, this means that care should be taken when directly transferring findings of optimality from technical to biological systems.
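The paper's accelerated variant is not spelled out in the abstract, but the underlying greedy matching pursuit over a gammatone dictionary is standard and can be sketched as follows. All parameters here (4th-order atoms, 32 geometrically spaced center frequencies, bandwidths of 0.2·f) are illustrative choices, not the paper's:

```python
import numpy as np

def gammatone_atom(fs, f_c, b, n=4, dur=0.05):
    """Unit-norm gammatone atom: t^(n-1) * exp(-2*pi*b*t) * cos(2*pi*f_c*t)."""
    t = np.arange(int(dur * fs)) / fs
    g = t ** (n - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * f_c * t)
    return g / np.linalg.norm(g)

def matching_pursuit(x, atoms, n_iter=10):
    """Greedy sparse decomposition: repeatedly pick the atom most correlated
    with the residual and subtract its projection."""
    residual = x.copy()
    code = []  # list of (atom index, coefficient) pairs
    for _ in range(n_iter):
        corr = atoms @ residual           # inner products with all atoms
        k = int(np.argmax(np.abs(corr)))
        code.append((k, corr[k]))
        residual = residual - corr[k] * atoms[k]
    return code, residual

fs = 16000
freqs = np.geomspace(100, 4000, 32)       # 32 center frequencies
atoms = np.stack([gammatone_atom(fs, f, 0.2 * f) for f in freqs])

# Test signal: a sum of two dictionary atoms; the greedy loop should
# recover them and drive the residual toward zero.
x = 2.0 * atoms[5] + 1.0 * atoms[20]
code, residual = matching_pursuit(x, atoms, n_iter=8)
```

The cost of the naive version is one full correlation per iteration; the acceleration the paper refers to exploits the structure of the gammatone dictionary to avoid recomputing all inner products.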

  2. Representation of auditory-filter phase characteristics in the cortex of human listeners

    DEFF Research Database (Denmark)

    Rupp, A.; Sieroka, N.; Gutschalk, A.;

    2008-01-01

    , which differently affect the flat envelopes of the Schroeder-phase maskers. We examined the influence of auditory-filter phase characteristics on the neural representation in the auditory cortex by investigating cortical auditory evoked fields ( AEFs). We found that the P1m component exhibited larger...... amplitudes when a long-duration tone was presented in a repeating linearly downward sweeping ( Schroeder positive, or m(+)) masker than in a repeating linearly upward sweeping ( Schroeder negative, or m(-)) masker. We also examined the neural representation of short-duration tone pulses presented...... at different temporal positions within a single period of three maskers differing in their component phases ( m(+), m(-), and sine phase m(0)). The P1m amplitude varied with the position of the tone pulse in the masker and depended strongly on the masker waveform. The neuromagnetic results in all cases were...

  3. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

Prediction and assessment of low-frequency noise problems requires information about the auditory filter characteristics at low frequencies. Unfortunately, data at low frequencies are scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results......-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. The increasing steepness of the middle-ear high-pass filter at low frequencies is thought to cause this effect. The dynamic range of the auditory filter...
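The ERB extrapolation at issue is usually the Glasberg and Moore (1990) fit, which was derived from notched-noise data at moderate and high center frequencies; applying it below roughly 100 Hz is exactly the extrapolation the abstract questions. A minimal sketch:

```python
def erb_glasberg_moore(f_hz):
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at
    center frequency f_hz, per Glasberg & Moore (1990):
    ERB(f) = 24.7 * (4.37 * f/1000 + 1). The fit is based on data above
    roughly 100 Hz; below that it is an extrapolation."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (50, 100, 1000):
    print(f, "Hz ->", round(erb_glasberg_moore(f), 1), "Hz ERB")
```

At 1000 Hz this gives about 132.6 Hz; at 50 Hz it predicts about 30.1 Hz, a value the measurements discussed above suggest should not be taken at face value.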

  4. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

-ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. The increasing steepness of the middle-ear high-pass filter at low frequencies is thought to cause this effect. The dynamic range of the auditory filter...... was found to decrease steadily with decreasing center frequency. Although the observed decrease in filter bandwidth with decreasing center frequency was only approximately monotonic, the preliminary data indicate that the filter bandwidth does not stabilize around 100 Hz but still decreases below...

  5. Low power adder based auditory filter architecture.

    Science.gov (United States)

    Rahiman, P F Khaleelur; Jayanthi, V S

    2014-01-01

Cochlear devices are battery powered and should have a long working life to avoid replacement of the device at regular intervals. Hence, devices with low power consumption are required. Cochlear devices contain numerous filters, each responsible for a different frequency band, which together help identify speech signals across the audible range. In this paper, a multiplierless lookup table (LUT) based auditory filter is implemented. Power-aware adder architectures are utilized to add the output samples of the LUT, available at every clock cycle. The design is developed and modeled using Verilog HDL, simulated using the Mentor Graphics ModelSim simulator, and synthesized using the Synopsys Design Compiler tool. The design was mapped to the TSMC 65 nm technology node. The standard ASIC design methodology has been adopted to carry out the power analysis. The proposed FIR filter architecture reduced the leakage power by 15% and increased performance by 2.76%.
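The abstract does not detail the adder-based architecture, but a common way to build a multiplierless LUT-based FIR is distributed arithmetic: a LUT stores every partial sum of the coefficients, and the output is accumulated bit-serially with shifts and adds only. A toy behavioral model with illustrative integer taps (not the paper's design):

```python
def da_fir_lut(coeffs):
    """Distributed-arithmetic LUT: entry k holds the sum of the coefficients
    whose corresponding input bit is set in k."""
    n = len(coeffs)
    return [sum(c for i, c in enumerate(coeffs) if (k >> i) & 1)
            for k in range(1 << n)]

def da_fir_output(samples, coeffs, lut, nbits=8):
    """One FIR output y = sum(coeffs[i] * samples[i]) computed bit-serially:
    for each bit plane, index the LUT with one bit from each sample and
    accumulate with a shift -- adders only, no multipliers."""
    acc = 0
    for b in range(nbits):                       # bit planes, LSB first
        addr = 0
        for i, s in enumerate(samples):
            addr |= ((s >> b) & 1) << i
        acc += lut[addr] << b                    # shift-and-add
    return acc

coeffs = [3, 1, 4, 1]                            # illustrative integer taps
lut = da_fir_lut(coeffs)
samples = [10, 20, 30, 40]                       # unsigned 8-bit inputs
y = da_fir_output(samples, coeffs, lut)
print(y, sum(c * s for c, s in zip(coeffs, samples)))  # both give the same sum
```

In hardware, the LUT replaces all multipliers, which is what makes low-power adder design the dominant cost, as in the architecture described above.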

  6. Practical Gammatone-Like Filters for Auditory Processing

    Directory of Open Access Journals (Sweden)

    R. F. Lyon

    2007-12-01

This paper deals with continuous-time filter transfer functions that resemble tuning curves at a particular set of places on the basilar membrane of the biological cochlea and that are suitable for practical VLSI implementations. The resulting filters can be used in a filterbank architecture to realize cochlear implants or auditory processors of increased biorealism. To put the reader into context, the paper starts with a short review of the gammatone filter and then presents two of its variants, namely the differentiated all-pole gammatone filter (DAPGF) and the one-zero gammatone filter (OZGF), filter responses that provide a robust foundation for modeling cochlear transfer functions. The DAPGF and OZGF responses are attractive because they exhibit certain characteristics suitable for modeling a variety of auditory data: level-dependent gain, linear tail for frequencies well below the center frequency, asymmetry, and so forth. In addition, their form suggests an implementation by means of cascades of N identical two-pole systems, making them excellent candidates for efficient analog or digital VLSI realizations. We provide results that shed light on their characteristics and attributes and that can also serve as “design curves” for fitting these responses to frequency-domain physiological data. The DAPGF and OZGF responses are essentially a “missing link” between physiological, electrical, and mechanical models for auditory filtering.
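The cascade structure can be made concrete with a minimal numerical sketch of the DAPGF magnitude response, taking the transfer function as one zero at s = 0 over N identical two-pole sections; the center frequency, Q, and order used here are illustrative, not fitted values:

```python
import numpy as np

def dapgf_mag(f, f0=1000.0, q=8.0, n=4):
    """Magnitude response of a differentiated all-pole gammatone filter
    (DAPGF): H(s) = s / (s^2 + (w0/Q) s + w0^2)^N, evaluated at s = j*2*pi*f.
    The single zero at s = 0 gives the shallow low-frequency tail; the
    N-fold two-pole denominator gives the steep high-frequency slope."""
    s = 1j * 2 * np.pi * np.asarray(f, dtype=float)
    w0 = 2 * np.pi * f0
    return np.abs(s / (s ** 2 + (w0 / q) * s + w0 ** 2) ** n)

f = np.linspace(100, 4000, 2000)
mag = dapgf_mag(f)
peak_f = float(f[np.argmax(mag)])
# Qualitative shape check: peaked near f0, asymmetric around it.
lo, hi = float(dapgf_mag(500.0)), float(dapgf_mag(2000.0))
print(round(peak_f), lo > hi)
```

The asymmetry (an octave below the peak sits well above an octave above it) is one of the auditory-data features the paper highlights; sweeping Q reproduces the level-dependent gain behavior.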

  8. The effect of compression on tuning estimates in a simple nonlinear auditory filter model

    DEFF Research Database (Denmark)

    Marschall, Marton; MacDonald, Ewen; Dau, Torsten

    2013-01-01

    , there is evidence that human frequency-selectivity estimates depend on whether an iso-input or an iso-response measurement paradigm is used (Eustaquio-Martin et al., 2011). This study presents simulated tuning estimates using a simple compressive auditory filter model, the bandpass nonlinearity (BPNL), which......, then compression alone may explain a large part of the behaviorally observed differences in tuning between simultaneous and forward-masking conditions....

  9. Natural auditory scene statistics shapes human spatial hearing.

    Science.gov (United States)

    Parise, Cesare V; Knorre, Katharina; Ernst, Marc O

    2014-04-22

    Human perception, cognition, and action are laced with seemingly arbitrary mappings. In particular, sound has a strong spatial connotation: Sounds are high and low, melodies rise and fall, and pitch systematically biases perceived sound elevation. The origins of such mappings are unknown. Are they the result of physiological constraints, do they reflect natural environmental statistics, or are they truly arbitrary? We recorded natural sounds from the environment, analyzed the elevation-dependent filtering of the outer ear, and measured frequency-dependent biases in human sound localization. We find that auditory scene statistics reveals a clear mapping between frequency and elevation. Perhaps more interestingly, this natural statistical mapping is tightly mirrored in both ear-filtering properties and in perceived sound location. This suggests that both sound localization behavior and ear anatomy are fine-tuned to the statistics of natural auditory scenes, likely providing the basis for the spatial connotation of human hearing.

  10. Mapping tonotopy in human auditory cortex

    NARCIS (Netherlands)

    van Dijk, Pim; Langers, Dave R M; Moore, BCJ; Patterson, RD; Winter, IM; Carlyon, RP; Gockel, HE

    2013-01-01

    Tonotopy is arguably the most prominent organizational principle in the auditory pathway. Nevertheless, the layout of tonotopic maps in humans is still debated. We present neuroimaging data that robustly identify multiple tonotopic maps in the bilateral auditory cortex. In contrast with some earlier

  11. Phonological Processing In Human Auditory Cortical Fields

    Directory of Open Access Journals (Sweden)

    David L Woods

    2011-04-01

We used population-based cortical-surface analysis of functional magnetic resonance imaging (fMRI) data to characterize the processing of consonant-vowel-consonant syllables (CVCs) and spectrally-matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) medial belt ACFs preferred AMNBs while lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs, with medial regions of primary auditory cortex (A1) and the rostral field (R) preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that different ACFs are specialized for the automatic processing of different spectrotemporal sound features.

  12. Functional properties of human auditory cortical fields

    Directory of Open Access Journals (Sweden)

    David L Woods

    2010-12-01

While auditory cortex in non-human primates has been subdivided into multiple functionally-specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and nonattended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to nonattended sounds. Three centrally-located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally-defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.

  13. A computational model of human auditory signal processing and perception

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Ewert, Stephan D.; Dau, Torsten

    2008-01-01

A model of computational auditory signal-processing and perception that accounts for various aspects of simultaneous and nonsimultaneous masking in human listeners is presented. The model is based on the modulation filterbank model described by Dau et al. [J. Acoust. Soc. Am. 102, 2892 (1997)] but includes major changes at the peripheral and more central stages of processing. The model contains outer- and middle-ear transformations, a nonlinear basilar-membrane processing stage, a hair-cell transduction stage, a squaring expansion, an adaptation stage, a 150-Hz lowpass modulation filter, a bandpass...
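A crude sketch of one stage in this chain, assuming hair-cell transduction as half-wave rectification followed by a first-order lowpass (the actual model's stages differ in detail; the 150-Hz cutoff here is the envelope-extraction lowpass often used at the hair-cell stage, distinct from the modulation filter named above):

```python
import numpy as np

def haircell_envelope(x, fs, cutoff=150.0):
    """Crude hair-cell stage: half-wave rectification followed by a
    first-order lowpass, leaving the temporal envelope that later
    modulation filters can analyze."""
    rect = np.maximum(x, 0.0)
    # One-pole IIR lowpass: y[n] = a*x[n] + (1-a)*y[n-1]
    a = 1.0 - np.exp(-2 * np.pi * cutoff / fs)
    y = np.empty_like(rect)
    acc = 0.0
    for n, v in enumerate(rect):
        acc = a * v + (1 - a) * acc
        y[n] = acc
    return y

fs = 16000
t = np.arange(fs) / fs
# 1-kHz carrier with 8-Hz amplitude modulation
x = (1 + 0.8 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)
env = haircell_envelope(x, fs)
```

The 1-kHz carrier is strongly attenuated while the 8-Hz modulation survives, which is the property the downstream modulation filterbank relies on.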

  14. Auditory performances of a 3-4-7 programmable numeric filter hearing aid.

    Science.gov (United States)

    Chouard, C H; Ouayoun, M; Meyer, B; Coudert, C; Sequeville, T; Bachelot, G; Génin, J

    1997-01-01

We designed a non-portable prototype of a seven-filter digital hearing aid. For each filter, the frequency bandwidth, amplification, and compression were programmable so that these parameters could be adapted to the deaf patient's audiometric characteristics. We compared the hearing improvement obtainable with the three-filter analogue Siemens Triton 3004 hearing aid and with our prototype, as a function of the number of filters (three, four, or seven) and their frequency-bandwidth programmability. We tested 21 patients with moderate to severe sensorineural hearing loss. This study demonstrated that a strategy of seven programmable-width filters appears more effective than the present analogue T004 device. Further studies involving improvements to our prototype and finer audiometric adjustment of the filter strategies, together with long-term clinical studies, need to be carried out.
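The prototype itself is not described in detail; the sketch below only illustrates the general idea of per-band programmable gain and compression, with hypothetical band edges, gains, and compressor settings:

```python
import numpy as np

def multiband_aid(x, fs, edges, gains_db, thresh=0.1, ratio=2.0):
    """Toy N-band hearing-aid chain (all names and parameters illustrative,
    not the paper's): split the signal into frequency bands with FFT masks,
    then apply per-band gain plus a simple compressor that reduces level
    above a fixed RMS threshold by the given ratio."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    out = np.zeros_like(x)
    for (lo, hi), g_db in zip(edges, gains_db):
        band = np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0),
                            len(x))
        rms = np.sqrt(np.mean(band ** 2)) + 1e-12
        gain = 10 ** (g_db / 20)
        if rms > thresh:                      # compress above threshold
            gain *= (thresh / rms) ** (1 - 1 / ratio)
        out += gain * band
    return out

fs = 8000
t = np.arange(fs) / fs
# A loud low-frequency tone plus a quiet high-frequency tone
x = 0.5 * np.sin(2 * np.pi * 300 * t) + 0.05 * np.sin(2 * np.pi * 2000 * t)
edges = [(0, 1000), (1000, 4000)]
y = multiband_aid(x, fs, edges, gains_db=[0.0, 20.0])
```

The quiet high band is boosted while the loud low band is compressed, mimicking how per-band programmability lets the fitting match an audiogram.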

  15. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity across three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment, participants discriminated between pairs of the same stimuli in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  16. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    OpenAIRE

Ranganadh Narayanam

    2015-01-01

    The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. In this project a speech enhancement scheme is being described which meets the demand for quality noise reduction algorithms which are capable of operating at a very low signal to noise ratio. We will be discussing how proposed speech enhancement system is capable of reducing noise with little speech degradation in diverse noise environments. In this model to reduce the resi...

  17. Auditory coding of human movement kinematics.

    Science.gov (United States)

    Vinken, Pia M; Kröger, Daniela; Fehse, Ursula; Schmitz, Gerd; Brock, Heike; Effenberg, Alfred O

    2013-01-01

Although visual perception is dominant in motor perception, control, and learning, auditory information can enhance and modulate perceptual as well as motor processes in a multifaceted manner. During the last decades, new methods of auditory augmentation have been developed, with movement sonification being one of the most recent approaches, extending auditory movement information to normally silent phases of movement. Despite general evidence for the effectiveness of movement sonification in different fields of applied research, there is almost no empirical evidence on how sonification of gross motor human movement should be configured to achieve information-rich sound sequences. Such evidence is lacking for (a) the selection of suitable movement features, (b) effective kinematic-acoustical mapping patterns, and (c) the number of dimensions of sonification considered. In this study we explore the informational content of artificial acoustical kinematics in terms of a kinematic movement sonification using an intermodal discrimination paradigm. In a repeated-measures design, we analysed discrimination rates for six everyday upper-limb actions to evaluate the effectiveness of seven different kinds of kinematic-acoustical mappings as well as short-term learning effects. The kinematics of the upper-limb actions were calculated from inertial motion sensor data and transformed into seven different sonifications. Sound sequences were randomly presented to participants, and discrimination rates as well as confidence of choice were analysed. The data indicate an instantaneous comprehensibility of the artificial movement acoustics as well as short-term learning effects. No differences between the dimensional encodings became evident, indicating high efficiency of intermodal pattern discrimination for the acoustically coded velocity distribution of the actions. Taken together, movement information related to continuous kinematic parameters can be
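A minimal parameter-mapping sonification in the spirit described above, mapping a velocity profile to pitch; the pitch range, frame rate, and rendering choices are illustrative, not the study's:

```python
import numpy as np

def sonify_velocity(vel, fs=8000, f_lo=200.0, f_hi=1200.0,
                    dur_per_frame=0.05):
    """Parameter-mapping sonification sketch: normalize a kinematic
    velocity profile into a pitch range and render it as a frequency-
    modulated sine via phase accumulation."""
    v = np.repeat(vel, int(dur_per_frame * fs))   # hold each kinematic frame
    v_norm = (v - v.min()) / (v.max() - v.min() + 1e-12)
    freq = f_lo + (f_hi - f_lo) * v_norm          # velocity -> pitch
    phase = 2 * np.pi * np.cumsum(freq) / fs      # integrate instantaneous freq
    return np.sin(phase)

# A reach-like, bell-shaped velocity profile of 40 kinematic frames
vel = np.sin(np.linspace(0, np.pi, 40)) ** 2
audio = sonify_velocity(vel)
```

Faster movement phases sound higher, so the rise-and-fall of the reach is directly audible, which is the kind of acoustically coded velocity distribution the study found listeners could discriminate.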

  18. Perceptual Wavelet packet transform based Wavelet Filter Banks Modeling of Human Auditory system for improving the intelligibility of voiced and unvoiced speech: A Case Study of a system development

    Directory of Open Access Journals (Sweden)

    Ranganadh Narayanam

    2015-10-01

The objective of this project is to discuss a versatile speech enhancement method based on the human auditory model. The project describes a speech enhancement scheme that meets the demand for quality noise reduction algorithms capable of operating at very low signal-to-noise ratios. We discuss how the proposed speech enhancement system reduces noise with little speech degradation in diverse noise environments. In this model, to reduce the residual noise and improve the intelligibility of speech, a psychoacoustic model is incorporated into the generalized perceptual wavelet denoising method. This is a generalized time-frequency subtraction algorithm that advantageously exploits the wavelet multirate signal representation to preserve critical transient information. Simultaneous masking and temporal masking of the human auditory system are modeled by the perceptual wavelet packet transform via the frequency and temporal localization of speech components. The wavelet coefficients are used to calculate the Bark spreading energy and temporal spreading energy, from which a time-frequency masking threshold is deduced to adaptively adjust the subtraction parameters of the discussed method. To increase the intelligibility of speech, an unvoiced speech enhancement algorithm is also integrated into the system.
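The paper's denoiser adapts its subtraction parameters to a psychoacoustic masking threshold; the sketch below keeps only the wavelet-packet skeleton with a fixed soft threshold instead, using hand-rolled Haar steps so it stays self-contained:

```python
import numpy as np

def haar_split(x):
    """One Haar analysis step: approximation and detail at half rate."""
    e, o = x[0::2], x[1::2]
    return (e + o) / np.sqrt(2), (e - o) / np.sqrt(2)

def haar_merge(a, d):
    """Inverse of haar_split (perfect reconstruction)."""
    y = np.empty(2 * len(a))
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

def wp_denoise(x, depth=3, thresh=0.5):
    """Toy wavelet-packet denoiser (fixed threshold, unlike the paper's
    masking-threshold-adaptive one): fully split with Haar steps,
    soft-threshold every leaf, and reconstruct."""
    def analyze(v, d):
        if d == 0:
            return [v]
        a, det = haar_split(v)
        return analyze(a, d - 1) + analyze(det, d - 1)
    def synthesize(leaves, d):
        if d == 0:
            return leaves[0]
        half = len(leaves) // 2
        return haar_merge(synthesize(leaves[:half], d - 1),
                          synthesize(leaves[half:], d - 1))
    leaves = analyze(x, depth)
    leaves = [np.sign(c) * np.maximum(np.abs(c) - thresh, 0) for c in leaves]
    return synthesize(leaves, depth)

rng = np.random.default_rng(0)
t = np.arange(1024) / 1024
clean = np.sin(2 * np.pi * 8 * t)
noisy = clean + 0.3 * rng.standard_normal(1024)
den = wp_denoise(noisy)
err_noisy = np.mean((noisy - clean) ** 2)
err_den = np.mean((den - clean) ** 2)
```

In the paper, the fixed threshold is replaced by a time-frequency masking threshold computed from Bark and temporal spreading energies, so inaudible residual noise is left untouched.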

  19. Biomedical Simulation Models of Human Auditory Processes

    Science.gov (United States)

    Bicak, Mehmet M. A.

    2012-01-01

This work develops detailed acoustic engineering models that explore the noise propagation mechanisms associated with the noise attenuation and transmission paths created when hearing protectors such as earplugs and headsets are used in high-noise environments. Biomedical finite element (FE) models are developed based on volume Computed Tomography scan data, which provide explicit external ear, ear canal, middle-ear ossicular bone, and cochlea geometry. Results from these studies have enabled a greater understanding of hearing-protector-to-flesh dynamics as well as the prioritization of noise propagation mechanisms. Prioritization of noise mechanisms can form an essential framework for the exploration of new design principles and methods in both earplug and earcup applications. These models are currently being used in the development of a novel hearing protection evaluation system that can provide experimentally correlated psychoacoustic noise attenuation. Moreover, these FE models can be used to simulate the effects of blast-related impulse noise on human auditory mechanisms and brain tissue.

  20. Tonotopic organization of human auditory association cortex.

    Science.gov (United States)

    Cansino, S; Williamson, S J; Karron, D

    1994-11-07

    Neuromagnetic studies of responses in human auditory association cortex for tone burst stimuli provide evidence for a tonotopic organization. The magnetic source image for the 100 ms component evoked by the onset of a tone is qualitatively similar to that of primary cortex, with responses lying deeper beneath the scalp for progressively higher tone frequencies. However, the tonotopic sequence of association cortex in three subjects is found largely within the superior temporal sulcus, although in the right hemisphere of one subject some sources may be closer to the inferior temporal sulcus. The locus of responses for individual subjects suggests a progression across the cortical surface that is approximately proportional to the logarithm of the tone frequency, as observed previously for primary cortex, with the span of 10 mm for each decade in frequency being comparable for the two areas.
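The reported log-frequency progression, about 10 mm of cortex per decade of frequency, amounts to a simple mapping; the 250-Hz anchor in this sketch is arbitrary, not a value from the study:

```python
import math

def tonotopic_offset_mm(f_hz, f_ref=250.0, mm_per_decade=10.0):
    """Cortical distance (mm) from the f_ref locus under the logarithmic
    tonotopic mapping described above (~10 mm per decade of frequency).
    f_ref is an illustrative anchor, not a measured landmark."""
    return mm_per_decade * math.log10(f_hz / f_ref)

print(tonotopic_offset_mm(2500.0))   # one decade above the anchor
```

Under this mapping, equal frequency ratios (e.g., octaves) map to equal cortical distances, which is why the same 10-mm-per-decade span can describe both primary and association cortex.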

  1. Human Auditory Processing: Insights from Cortical Event-related Potentials

    Directory of Open Access Journals (Sweden)

    Alexandra P. Key

    2016-04-01

Human communication and language skills rely heavily on the ability to detect and process auditory inputs. This paper reviews possible applications of the event-related potential (ERP) technique to the study of cortical mechanisms supporting human auditory processing, including speech stimuli. Following a brief introduction to the ERP methodology, the remaining sections focus on demonstrating how ERPs can be used in humans to address research questions related to cortical organization, maturation, and plasticity, as well as the effects of sensory deprivation and multisensory interactions. The review is intended to serve as a primer for researchers interested in using ERPs for the study of the human auditory system.

  2. Functional sex differences in human primary auditory cortex

    NARCIS (Netherlands)

    Ruytjens, Liesbet; Georgiadis, Janniko R.; Holstege, Gert; Wit, Hero P.; Albers, Frans W. J.; Willemsen, Antoon T. M.

    2007-01-01

    Background We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while listening to sounds (music or white noise) and during a bas

  4. Auditory peripersonal space in humans: a case of auditory-tactile extinction.

    Science.gov (United States)

    Làdavas, E; Pavani, F; Farnè, A

    2001-01-01

    Animal experiments have shown that the spatial correspondence between auditory and tactile receptive fields of ventral pre-motor neurons provides a map of auditory peripersonal space around the head. This allows neurons to localize a near sound with respect to the head. In the present study, we demonstrated the existence of an auditory peripersonal space around the head in humans. In a right-brain damaged patient with tactile extinction, a sound delivered near the ipsilesional side of the head extinguished a tactile stimulus delivered to the contralesional side of the head (cross-modal auditory-tactile extinction). In contrast, when an auditory stimulus was presented far from the head, cross-modal extinction was dramatically reduced. This spatially specific cross-modal extinction was found only when a complex sound like a white noise burst was presented; pure tones did not produce spatially specific cross-modal extinction. These results show a high degree of functional similarity between the characteristics of the auditory peripersonal space representation in humans and monkeys. This similarity suggests that analogous physiological substrates might be responsible for coding this multisensory integrated representation of peripersonal space in human and non-human primates.

  5. Inhibition in the Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Koji Inui

Despite their indispensable roles in sensory processing, little is known about inhibitory interneurons in humans. Inhibitory postsynaptic potentials cannot be recorded non-invasively, at least in a pure form, in humans. We herein sought to clarify whether prepulse inhibition (PPI) in the auditory cortex reflects inhibition via interneurons, using magnetoencephalography. An abrupt 10-dB increase in sound pressure within a continuous sound was used to evoke the test response, and PPI was observed by inserting a weak prepulse (a 5-dB increase for 1 ms). The time course of the inhibition, evaluated with prepulses presented 10-800 ms before the test stimulus, showed at least two temporally distinct inhibitions, peaking at approximately 20-60 and 600 ms, that presumably reflect IPSPs produced by fast-spiking, parvalbumin-positive cells and by somatostatin-positive Martinotti cells, respectively. In another experiment, we confirmed that the degree of inhibition depended on the strength of the prepulse, but not on the amplitude of the prepulse-evoked cortical response, indicating that the prepulse-evoked excitatory response and the prepulse-evoked inhibition reflect activation of two different pathways. Although many diseases such as schizophrenia may involve deficits in the inhibitory system, we do not have appropriate methods to evaluate them; therefore, the easy and non-invasive method described herein may be clinically useful.
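The stimulus timing can be made concrete as a level envelope; only the 5-dB/1-ms prepulse and the 10-dB test step come from the abstract, while the base level, total duration, and prepulse-to-test interval below are illustrative:

```python
import numpy as np

def ppi_envelope(fs=10000, base_db=60.0, soa=0.06, prepulse_db=5.0,
                 prepulse_dur=0.001, test_db=10.0, total=0.2):
    """Level envelope (in dB) of the paradigm described above: a continuous
    sound, a brief 5-dB prepulse, and a sustained 10-dB test increment at
    the given prepulse-to-test interval (soa)."""
    n = int(total * fs)
    env = np.full(n, base_db)
    t0 = int(0.1 * fs)                      # test-increment onset
    p0 = t0 - int(soa * fs)                 # prepulse onset
    env[p0:p0 + int(prepulse_dur * fs)] += prepulse_db
    env[t0:] += test_db
    return env

env = ppi_envelope()
```

Sweeping `soa` over 10-800 ms, as in the study, traces out the two inhibition time courses described in the abstract.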

  6. An anatomical and functional topography of human auditory cortical areas

    Directory of Open Access Journals (Sweden)

Michelle Moerel

    2014-07-01

While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that, whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis, the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  7. An anatomical and functional topography of human auditory cortical areas.

    Science.gov (United States)

    Moerel, Michelle; De Martino, Federico; Formisano, Elia

    2014-01-01

    While advances in magnetic resonance imaging (MRI) throughout the last decades have enabled the detailed anatomical and functional inspection of the human brain non-invasively, to date there is no consensus regarding the precise subdivision and topography of the areas forming the human auditory cortex. Here, we propose a topography of the human auditory areas based on insights on the anatomical and functional properties of human auditory areas as revealed by studies of cyto- and myelo-architecture and fMRI investigations at ultra-high magnetic field (7 Tesla). Importantly, we illustrate that-whereas a group-based approach to analyze functional (tonotopic) maps is appropriate to highlight the main tonotopic axis-the examination of tonotopic maps at single subject level is required to detail the topography of primary and non-primary areas that may be more variable across subjects. Furthermore, we show that considering multiple maps indicative of anatomical (i.e., myelination) as well as of functional properties (e.g., broadness of frequency tuning) is helpful in identifying auditory cortical areas in individual human brains. We propose and discuss a topography of areas that is consistent with old and recent anatomical post-mortem characterizations of the human auditory cortex and that may serve as a working model for neuroscience studies of auditory functions.

  8. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude-modulated white noise were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results in twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, Brodmann area (BA) 22 and the posterior part of the T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in the SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of the PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing of a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.
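
The paper's activation measure, a signal-to-noise ratio associated with the response at the amplitude-modulation frequency, can be sketched as spectral power at the AM frequency relative to neighboring bins. This is only an illustration of the idea: the function name, windowing, and neighbor count below are assumptions, not details from the study.

```python
import numpy as np

def am_response_snr(signal, fs, mod_freq, n_neighbors=10):
    """SNR (in dB) of a steady-state response at the AM frequency:
    power in the FFT bin nearest mod_freq versus the mean power of
    surrounding bins (a bin-based definition assumed for this sketch)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n))) ** 2
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    target = int(np.argmin(np.abs(freqs - mod_freq)))
    lo = max(target - n_neighbors, 0)
    hi = min(target + n_neighbors + 1, len(spectrum))
    # Neighboring bins on both sides, excluding the target bin itself
    neighbors = np.r_[spectrum[lo:target], spectrum[target + 1:hi]]
    return 10.0 * np.log10(spectrum[target] / neighbors.mean())
```

A response entrained to the AM rate yields a positive SNR at the modulation frequency and a near-zero SNR at unrelated frequencies.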

  9. Effect of High-Pass Filtering on the Neonatal Auditory Brainstem Response to Air- and Bone-Conducted Clicks.

    Science.gov (United States)

    Stuart, Andrew; Yang, Edward Y.

    1994-01-01

    Simultaneously recorded 3-channel auditory brainstem responses (ABRs) were obtained from 20 neonates with various high-pass filter settings at low intensity levels. The results support the use of less restrictive high-pass filtering for neonatal and infant ABR screening with air-conducted and bone-conducted clicks. (Author/JDD)

  10. High background noise shapes selective auditory filters in a tropical cricket.

    Science.gov (United States)

    Schmidt, Arne K D; Riede, Klaus; Römer, Heiner

    2011-05-15

    Because of call-frequency overlap and masking interference, the airborne sound channel represents a limited resource for communication in a species-rich cricket community such as the tropical rainforest. Here we studied the frequency tuning of an auditory neuron mediating phonotaxis in the rainforest cricket Paroecanthus podagrosus, which suffers from strong competition, in comparison with the same homologous neuron in two species of European field crickets, where such competition does not exist. As predicted, the rainforest species exhibited more selective tuning than its European counterparts. The filter reduced nocturnal background noise levels by 26 dB, compared with only 16 and 10 dB in the two European species. We also quantified the performance of the sensory filter under the different filter regimes by examining the representation of the species-specific amplitude modulation of the male calling song when embedded in background noise. Again, the filter of the rainforest cricket performed significantly better in representing this important signal parameter. The neuronal representation of the calling song pattern within receivers was maintained over a wide range of signal-to-noise ratios because of the more sharply tuned sensory system and selective attention mechanisms. Finally, the rainforest cricket also showed an almost perfect match between the filter for sensitivity and the peripheral filter for directional hearing, in contrast to its European counterparts. We discuss the consequences of these adaptations for intraspecific acoustic communication and reproductive isolation between species.

  11. Functional sex differences in human primary auditory cortex

    Energy Technology Data Exchange (ETDEWEB)

    Ruytjens, Liesbet [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Georgiadis, Janniko R. [University of Groningen, University Medical Center Groningen, Department of Anatomy and Embryology, Groningen (Netherlands); Holstege, Gert [University of Groningen, University Medical Center Groningen, Center for Uroneurology, Groningen (Netherlands); Wit, Hero P. [University Medical Center Groningen, Department of Otorhinolaryngology, Groningen (Netherlands); Albers, Frans W.J. [University Medical Center Utrecht, Department Otorhinolaryngology, P.O. Box 85500, Utrecht (Netherlands); Willemsen, Antoon T.M. [University Medical Center Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands)

    2007-12-15

    We used PET to study cortical activation during auditory stimulation and found sex differences in the human primary auditory cortex (PAC). Regional cerebral blood flow (rCBF) was measured in 10 male and 10 female volunteers while they listened to sounds (music or white noise) and during a baseline (no auditory stimulation). We found a sex difference in the activation of the left and right PAC when comparing music to noise. The PAC was more activated by music than by noise in both men and women, but this difference between the two stimuli was significantly larger in men than in women. To investigate whether this difference could be attributed to either music or noise, we compared both stimuli with the baseline and found that noise gave a significantly higher activation in the female PAC than in the male PAC. Moreover, the male group showed a deactivation in the right prefrontal cortex when comparing noise to the baseline, which was not present in the female group. Interestingly, the auditory and prefrontal regions are anatomically and functionally linked, and the prefrontal cortex is known to be engaged in auditory tasks that involve sustained or selective auditory attention. We therefore hypothesize that differences in attention result in a differential deactivation of the right prefrontal cortex, which in turn modulates the activation of the PAC and thus explains the sex differences found in the activation of the PAC. Our results suggest that sex is an important factor in auditory brain studies. (orig.)

  12. Design of a New Audio Watermarking System Based on Human Auditory System

    Energy Technology Data Exchange (ETDEWEB)

    Shin, D.H. [Maqtech Co., Ltd., (Korea); Shin, S.W.; Kim, J.W.; Choi, J.U. [Markany Co., Ltd., (Korea); Kim, D.Y. [Bucheon College, Bucheon (Korea); Kim, S.H. [The University of Seoul, Seoul (Korea)

    2002-07-01

    In this paper, we propose a robust digital copyright-protection technique based on the characteristics of the human auditory system. First, we propose a watermarking technique that withstands various attacks such as time scaling, pitch shifting, and added noise, as well as lossy compression formats such as MP3, AAC, and WMA. Second, we implement an audio PD (portable device) for copyright protection using the proposed method. The watermarking technique is built on digital filtering. Because the filters are designed according to the critical bands of the HAS (human auditory system), they embed the watermark with almost no effect on audio quality. Before the digital filtering stage, a wavelet transform decomposes the input audio signal into several signals composed of specific frequency ranges. We then embed the watermark in the decomposed signal (0-11 kHz) with the designed band-stop digital filter. The watermark detection algorithm is implemented on the audio PD. The proposed technology embeds 2 bits of information per 15 seconds. If the PD detects the watermark '11', which marks an illegal song, it displays an 'Illegal Song' message on the LCD, skips the song, and plays the next one. The detection algorithm implemented in the PD requires 19 MHz of computational power, 7.9 kBytes of ROM and 10 kBytes of RAM. The suggested technique satisfies the SDMI (Secure Digital Music Initiative) platform 3 requirements on an ARM9E core. (author). 9 refs., 8 figs.
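
The band-stop embedding idea described above can be illustrated with a minimal sketch: a '1' bit notches a narrow band out of the signal, and a detector compares the remaining energy in that band. All names and parameters below (band edges, filter order) are assumptions for illustration; the actual system additionally uses wavelet decomposition and HAS-derived critical bands.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def embed_bit(audio, fs, bit, band=(5000.0, 5500.0), order=4):
    """Illustrative embedding: a '1' bit is encoded by suppressing a
    narrow band with a band-stop filter; a '0' leaves the audio intact.
    (Band placement and filter order are assumptions of this sketch.)"""
    if bit == 0:
        return audio
    sos = butter(order, band, btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, audio)

def band_energy(audio, fs, band=(5000.0, 5500.0), order=4):
    """Energy inside the watermark band; a detector decides a bit by
    checking whether this band was notched out."""
    sos = butter(order, band, btype="bandpass", fs=fs, output="sos")
    return float(np.sum(sosfiltfilt(sos, audio) ** 2))
```

A practical detector would normalize against the energy of neighboring bands so that the decision is robust to the overall signal level.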

  13. Myosin VIIA, important for human auditory function, is necessary for Drosophila auditory organ development.

    Directory of Open Access Journals (Sweden)

    Sokol V Todi

    Full Text Available BACKGROUND: Myosin VIIA (MyoVIIA) is an unconventional myosin necessary for vertebrate audition [1]-[5]. Human auditory transduction occurs in sensory hair cells with a staircase-like arrangement of apical protrusions called stereocilia. In these hair cells, MyoVIIA maintains stereocilia organization [6]. Severe mutations in the Drosophila MyoVIIA orthologue, crinkled (ck), are semi-lethal [7] and lead to deafness by disrupting the organization of the antennal auditory organ (Johnston's Organ, JO) [8]. ck/MyoVIIA mutations result in apical detachment of the auditory transduction units (scolopidia) from the cuticle that transmits antennal vibrations as mechanical stimuli to JO. PRINCIPAL FINDINGS: Using flies expressing GFP-tagged NompA, a protein required for auditory organ organization in Drosophila, we examined the role of ck/MyoVIIA in JO development and maintenance through confocal microscopy and extracellular electrophysiology. Here we show that ck/MyoVIIA is necessary early in the developing antenna for the initial apical attachment of the scolopidia to the articulating joint. ck/MyoVIIA is also necessary to maintain scolopidial attachment throughout adulthood. Moreover, in the adult JO, ck/MyoVIIA genetically interacts with non-muscle myosin II (through its regulatory light chain protein) and with the myosin-binding subunit of myosin II phosphatase. Such genetic interactions have not previously been observed in scolopidia. These factors are therefore candidates for modulating MyoVIIA activity in vertebrates. CONCLUSIONS: Our findings indicate that MyoVIIA plays evolutionarily conserved roles in auditory organ development and maintenance in invertebrates and vertebrates, enhancing our understanding of auditory organ development and function, as well as providing significant clues for future research.

  14. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  15. Representation of speech in human auditory cortex: is it special?

    Science.gov (United States)

    Steinschneider, Mitchell; Nourski, Kirill V; Fishman, Yonatan I

    2013-11-01

    Successful categorization of phonemes in speech requires that the brain analyze the acoustic signal along both spectral and temporal dimensions. Neural encoding of the stimulus amplitude envelope is critical for parsing the speech stream into syllabic units. Encoding of voice onset time (VOT) and place of articulation (POA), cues necessary for determining phonemic identity, occurs within shorter time frames. An unresolved question is whether the neural representation of speech is based on processing mechanisms that are unique to humans and shaped by learning and experience, or is based on rules governing general auditory processing that are also present in non-human animals. This question was examined by comparing the neural activity elicited by speech and other complex vocalizations in primary auditory cortex of macaques, who are limited vocal learners, with that in Heschl's gyrus, the putative location of primary auditory cortex in humans. Entrainment to the amplitude envelope is neither specific to humans nor to human speech. VOT is represented by responses time-locked to consonant release and voicing onset in both humans and monkeys. Temporal representation of VOT is observed both for isolated syllables and for syllables embedded in the more naturalistic context of running speech. The fundamental frequency of male speakers is represented by more rapid neural activity phase-locked to the glottal pulsation rate in both humans and monkeys. In both species, the differential representation of stop consonants varying in their POA can be predicted by the relationship between the frequency selectivity of neurons and the onset spectra of the speech sounds. These findings indicate that the neurophysiology of primary auditory cortex is similar in monkeys and humans despite their vastly different experience with human speech, and that Heschl's gyrus is engaged in general auditory, and not language-specific, processing. This article is part of a Special Issue entitled

  16. Modulating human auditory processing by transcranial electrical stimulation

    Directory of Open Access Journals (Sweden)

    Kai Heimrath

    2016-03-01

    Full Text Available Transcranial electrical stimulation (tES) has become a valuable research tool for the investigation of neurophysiological processes underlying human action and cognition. In recent years, striking evidence for the neuromodulatory effects of transcranial direct current stimulation (tDCS), transcranial alternating current stimulation (tACS), and transcranial random noise stimulation (tRNS) has emerged. However, while a wealth of knowledge has been gained about tES in the motor domain and, to a lesser extent, about its ability to modulate human cognition, surprisingly little is known about its impact on perceptual processing, particularly in the auditory domain. Moreover, while only a few studies have systematically investigated the impact of auditory tES, it has already been applied in a large number of clinical trials, leading to a remarkable imbalance between basic and clinical research on auditory tES. Here, we review the state of the art of tES application in the auditory domain, focusing on the impact of neuromodulation on acoustic perception and its potential for clinical application in the treatment of auditory-related disorders.

  17. Differences in auditory timing between human and nonhuman primates

    NARCIS (Netherlands)

    Honing, H.; Merchant, H.

    2014-01-01

    The gradual audiomotor evolution hypothesis is proposed as an alternative interpretation to the auditory timing mechanisms discussed in Ackermann et al.'s article. This hypothesis accommodates the fact that the performance of nonhuman primates is comparable to humans in single-interval tasks (such

  18. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Science.gov (United States)

    Lehmann, Alexandre; Schönwiesner, Marc

    2014-01-01

    Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  19. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues.

    Directory of Open Access Journals (Sweden)

    Alexandre Lehmann

    Full Text Available Selective attention is the mechanism that allows focusing one's attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.

  20. Human auditory steady state responses to binaural and monaural beats.

    Science.gov (United States)

    Schwarz, D W F; Taylor, P

    2005-03-01

    Binaural beat sensations depend upon a central combination of two different temporally encoded tones, separately presented to the two ears. We tested the feasibility of recording an auditory steady-state evoked response (ASSR) at the binaural beat frequency in order to find a measure of temporal coding of sound in the human EEG. We stimulated each ear with a distinct tone, the two differing in frequency by 40 Hz, to record a binaural beat ASSR. As a control, we evoked a beat ASSR in response to both tones in the same ear. We band-pass filtered the EEG at 40 Hz, averaged with respect to stimulus onset, and compared ASSR amplitudes and phases, extracted from a sinusoidal non-linear regression fit to a 40 Hz period average. A 40 Hz binaural beat ASSR was evoked at a low mean stimulus frequency (400 Hz) but became undetectable beyond 3 kHz. Its amplitude was smaller than that of the acoustic beat ASSR, which was evoked at low and high frequencies. Both ASSR types had maxima at fronto-central leads and displayed a fronto-occipital phase delay of several ms. The dependence of the 40 Hz binaural beat ASSR on stimuli at low, temporally coded tone frequencies suggests that it may objectively assess temporal sound-coding ability. The phase shift across the electrode array is evidence for more than one origin of the 40 Hz oscillations. The binaural beat ASSR is an evoked response, with novel diagnostic potential, to a signal that is not present in the stimulus but generated within the brain.
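
The extraction step described above (band-pass filtering, averaging, then a sinusoidal regression fit to recover ASSR amplitude and phase) can be sketched for the fitting part. Fitting A*sin(2*pi*f*t + phi) + c at a known frequency reduces to linear least squares on sine and cosine regressors; the function name and conventions below are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

def fit_sinusoid(x, fs, freq):
    """Fit A*sin(2*pi*freq*t + phi) + c to x by linear least squares on
    sine/cosine regressors; returns amplitude A and phase phi (radians).
    This is a linear reparameterization of the sinusoidal fit: since
    A*sin(w*t + phi) = (A*cos(phi))*sin(w*t) + (A*sin(phi))*cos(w*t),
    the nonlinear problem becomes linear in the two coefficients."""
    t = np.arange(len(x)) / fs
    design = np.column_stack([np.sin(2 * np.pi * freq * t),
                              np.cos(2 * np.pi * freq * t),
                              np.ones_like(t)])
    (a, b, _), *_ = np.linalg.lstsq(design, x, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)
```

Applied to a 40 Hz period average of the filtered EEG, this yields the amplitude and phase values compared across conditions and electrodes in the study.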

  1. Task-specific modulation of human auditory evoked responses in a delayed-match-to-sample task

    Directory of Open Access Journals (Sweden)

    Feng Rong

    2011-05-01

    Full Text Available In this study, we focus our investigation on task-specific cognitive modulation of early cortical auditory processing in the human cerebral cortex. During the experiments, we acquired whole-head magnetoencephalography (MEG) data while participants performed an auditory delayed-match-to-sample (DMS) task and associated control tasks. Using a spatial-filtering beamformer technique to simultaneously estimate multiple source activities inside the human brain, we observed a significant DMS-specific suppression of the auditory evoked response to the second stimulus in a sound pair, with the center of the effect located in the vicinity of the left auditory cortex. For the right auditory cortex, a non-invariant suppression effect was observed in both the DMS and control tasks. Furthermore, coherence analysis revealed a beta-band (12-20 Hz) DMS-specific enhancement of the functional interaction between sources in the left auditory cortex and those in the left inferior frontal gyrus, which has been shown to be involved in short-term memory processing during the delay period of the DMS task. Our findings support the view that early evoked cortical responses to incoming acoustic stimuli can be modulated by task-specific cognitive functions by means of frontal-temporal functional interactions.
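
Band-averaged coherence of the kind reported above (between two estimated source time series, restricted to the beta band) can be sketched with Welch-based magnitude-squared coherence. The window length and band edges below are assumptions of this illustration, not the study's analysis settings.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band=(12.0, 20.0), nperseg=256):
    """Magnitude-squared coherence between two signals, averaged over a
    frequency band (here the beta band reported in the study)."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return float(cxy[mask].mean())
```

Comparing this band average between task conditions (DMS vs. control) is one way to quantify a task-specific enhancement of functional interaction.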

  2. Cortical oscillations in auditory perception and speech: evidence for two temporal windows in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Huan Luo

    2012-05-01

    Full Text Available Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to be in the low gamma band (~20-80 ms duration information) and the theta band (~150-300 ms), corresponding to segmental and syllabic modulation rates, respectively. On one hypothesis, auditory cortex implements temporal integration using time constants closely related to these values. The neural correlates of a proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied segment durations. We show that these non-speech stimuli with temporal structure matching speech-relevant scales (~25 ms and ~200 ms) elicit reliable phase tracking in the corresponding associated oscillatory frequencies (low gamma and theta bands). In contrast, stimuli with non-matching temporal structure do not. Furthermore, the topography of theta-band phase tracking shows rightward lateralization, while gamma-band phase tracking occurs bilaterally. The results support the hypothesis that there exists multi-time-resolution processing in cortex on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a macroscopic-level neural mechanism underlying multi-time-resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.

  3. Stimulation of the human auditory nerve with optical radiation

    Science.gov (United States)

    Fishman, Andrew; Winkler, Piotr; Mierzwinski, Jozef; Beuth, Wojciech; Izzo Matic, Agnella; Siedlecki, Zygmunt; Teudt, Ingo; Maier, Hannes; Richter, Claus-Peter

    2009-02-01

    A novel, spatially selective method to stimulate cranial nerves has been proposed: contact-free stimulation with optical radiation. The radiation source is an infrared pulsed laser. This case report is the first to show that optical stimulation of the auditory nerve is possible in the human. The ethical approach to conducting any measurements or tests in humans requires efficacy and safety studies in animals, which have been conducted in gerbils. This report represents the first step in a translational research project to initiate a paradigm shift in neural interfaces. A patient was selected who required surgical removal of a large meningioma angiomatum (WHO grade I) by a planned transcochlear approach. Prior to cochlear ablation by drilling and subsequent tumor resection, the cochlear nerve was stimulated with a pulsed infrared laser at low radiation energies. Stimulation with optical radiation evoked compound action potentials from the human auditory nerve. Stimulation of the auditory nerve with infrared laser pulses is thus possible in the human inner ear. This finding is an important step in translating results from animal experiments to humans and furthers the development of a novel interface that uses optical radiation to stimulate neurons. Additional measurements are required to optimize the stimulation parameters.

  4. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain

    Science.gov (United States)

    Martínez, I.; Rosa, M.; Arsuaga, J.-L.; Jarabo, P.; Quam, R.; Lorenzo, C.; Gracia, A.; Carretero, J.-M.; de Castro, J.-M. Bermúdez; Carbonell, E.

    2004-01-01

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range. PMID:15213327

  5. Auditory capacities in Middle Pleistocene humans from the Sierra de Atapuerca in Spain.

    Science.gov (United States)

    Martínez, I; Rosa, M; Arsuaga, J-L; Jarabo, P; Quam, R; Lorenzo, C; Gracia, A; Carretero, J-M; Bermúdez de Castro, J-M; Carbonell, E

    2004-07-06

    Human hearing differs from that of chimpanzees and most other anthropoids in maintaining a relatively high sensitivity from 2 kHz up to 4 kHz, a region that contains relevant acoustic information in spoken language. Knowledge of the auditory capacities in human fossil ancestors could greatly enhance the understanding of when this human pattern emerged during the course of our evolutionary history. Here we use a comprehensive physical model to analyze the influence of skeletal structures on the acoustic filtering of the outer and middle ears in five fossil human specimens from the Middle Pleistocene site of the Sima de los Huesos in the Sierra de Atapuerca of Spain. Our results show that the skeletal anatomy in these hominids is compatible with a human-like pattern of sound power transmission through the outer and middle ear at frequencies up to 5 kHz, suggesting that they already had auditory capacities similar to those of living humans in this frequency range.

  6. Mode-locking neurodynamics predict human auditory brainstem responses to musical intervals.

    Science.gov (United States)

    Lerud, Karl D; Almonte, Felix V; Kim, Ji Chul; Large, Edward W

    2014-02-01

    The auditory nervous system is highly nonlinear. Some nonlinear responses arise through active processes in the cochlea, while others may arise in neural populations of the cochlear nucleus, inferior colliculus and higher auditory areas. In humans, auditory brainstem recordings reveal nonlinear population responses to combinations of pure tones, and to musical intervals composed of complex tones. Yet the biophysical origin of central auditory nonlinearities, their signal processing properties, and their relationship to auditory perception remain largely unknown. Both stimulus components and nonlinear resonances are well represented in auditory brainstem nuclei due to neural phase-locking. Recently mode-locking, a generalization of phase-locking that implies an intrinsically nonlinear processing of sound, has been observed in mammalian auditory brainstem nuclei. Here we show that a canonical model of mode-locked neural oscillation predicts the complex nonlinear population responses to musical intervals that have been observed in the human brainstem. The model makes predictions about auditory signal processing and perception that are different from traditional delay-based models, and may provide insight into the nature of auditory population responses. We anticipate that the application of dynamical systems analysis will provide the starting point for generic models of auditory population dynamics, and lead to a deeper understanding of nonlinear auditory signal processing possibly arising in excitatory-inhibitory networks of the central auditory nervous system. This approach has the potential to link neural dynamics with the perception of pitch, music, and speech, and lead to dynamical models of auditory system development.

  7. Complex-tone pitch representations in the human auditory system

    DEFF Research Database (Denmark)

    Bianchi, Federica

    Understanding how the human auditory system processes the physical properties of an acoustical stimulus to give rise to a pitch percept is a fascinating aspect of hearing research. Since most natural sounds are harmonic complex tones, this work focused on the nature of pitch-relevant cues ... listeners and the effect of musical training for pitch discrimination of complex tones with resolved and unresolved harmonics. Concerning the first topic, behavioral and modeling results in listeners with sensorineural hearing loss (SNHL) indicated that temporal envelope cues of complex tones ... for the individual pitch-discrimination abilities, the musically trained listeners still allocated lower processing effort than did the non-musicians to perform the task at the same performance level. This finding suggests an enhanced pitch representation along the auditory system in musicians, possibly as a result ...

  8. Tracking of human head with particle filter

    Institute of Scientific and Technical Information of China (English)

    GUO Chao

    2009-01-01

    To cope with the problem of tracking a human head in a complicated scene, we propose a method that combines human skin color and hair color with a particle filter known as the condensation algorithm. First, a novel method is presented to build a human head color model using skin color and hair color separately, based on region growing. Compared with the traditional human face model, this method is more precise and works well when the human turns around and the face disappears from the image. Then a novel method is presented to use the color model in the condensation algorithm more effectively. In this method, a combination of the edge detection result, color segmentation result and color edge detection result in an Omega window is used to measure the scale and position of the human head in condensation. Experiments show that this approach can track a human head in a complicated scene even when the human turns around or the tracking distance changes quickly.
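
    The condensation algorithm at the heart of this record is a bootstrap particle filter. A minimal 1-D sketch (my own illustration, not the paper's implementation; the noise levels and motion model are arbitrary) shows the predict-weight-resample loop, including coasting through frames where detection fails:

```python
import math
import random

def particle_filter(measurements, n_particles=500, motion_std=1.0,
                    meas_std=2.0, seed=0):
    """Bootstrap (condensation-style) particle filter for a 1-D position.

    Each entry of `measurements` is a noisy observation, or None when the
    detector fails; in that case the filter only applies the motion model.
    """
    rng = random.Random(seed)
    particles = [rng.gauss(measurements[0], meas_std) for _ in range(n_particles)]
    estimates = []
    for z in measurements:
        # Predict: diffuse every particle with the (random-walk) motion model.
        particles = [p + rng.gauss(0.0, motion_std) for p in particles]
        if z is not None:
            # Weight by the Gaussian measurement likelihood, then resample.
            weights = [math.exp(-0.5 * ((z - p) / meas_std) ** 2) for p in particles]
            total = sum(weights)
            particles = rng.choices(particles,
                                    weights=[w / total for w in weights],
                                    k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# Track a target drifting at 0.5 units/frame; frames 20-24 have no detection.
data_rng = random.Random(1)
truth = [0.5 * t for t in range(60)]
obs = [None if 20 <= t < 25 else x + data_rng.gauss(0.0, 2.0)
       for t, x in enumerate(truth)]
est = particle_filter(obs)
```

    The paper's filter weights particles by color-model and edge evidence inside an Omega-shaped window rather than by a 1-D Gaussian likelihood, but the loop structure is the same.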

  9. Interaction between auditory and visual stimulus relating to the vowel sounds in the auditory cortex in humans: a magnetoencephalographic study.

    Science.gov (United States)

    Miki, Kensaku; Watanabe, Shoko; Kakigi, Ryusuke

    2004-03-11

    We investigated the interaction between auditory and visual stimuli relating to vowel sounds in the auditory cortex in humans, using magnetoencephalography. We compared the difference in the main component, M100 generated in the auditory cortex, in terms of peak latency, amplitude, dipole location and moment, following the vowel sound /a/ between two conditions: (1) showing a face with closed mouth; and (2) showing the same face with mouth movement appearing to pronounce /a/ using an apparent motion method. We found no significant difference in the M100 component between the two conditions within or between the right and left hemispheres. These findings indicated that the vowel sound perception in the auditory cortex, at least in the primary processing stage, was not affected by viewing mouth movement.

  10. Biased relevance filtering in the auditory system: A test of confidence-weighted first-impressions.

    Science.gov (United States)

    Mullens, D; Winkler, I; Damaso, K; Heathcote, A; Whitson, L; Provost, A; Todd, J

    2016-03-01

    Although first-impressions are known to impact decision-making and to have prolonged effects on reasoning, it is less well known that the same type of rapidly formed assumptions can explain biases in automatic relevance filtering outside of deliberate behavior. This paper features two studies in which participants were asked to ignore sequences of sound while focusing attention on a silent movie. The sequences consisted of blocks, each with a high-probability repetition interrupted by rare acoustic deviations (i.e., a sound of different pitch or duration). The probabilities of the two different sounds alternated across the concatenated blocks within the sequence (i.e., short-to-long and long-to-short). The sound probabilities are rapidly and automatically learned for each block and a perceptual inference is formed predicting the most likely characteristics of the upcoming sound. Deviations elicit a prediction-error signal known as mismatch negativity (MMN). Computational models of MMN generally assume that its elicitation is governed by transition statistics that define what sound attributes are most likely to follow the current sound. MMN amplitude reflects prediction confidence, which is derived from the stability of the current transition statistics. However, our prior research showed that MMN amplitude is modulated by a strong first-impression bias that outweighs transition statistics. Here we test the hypothesis that this bias can be attributed to assumptions about the predictable vs. unpredictable nature of each tone within the first encountered context, which is weighted by the stability of that context. The results of Study 1 show that this bias is initially prevented if there is no 1:1 mapping between sound attributes and probability, but it returns once the auditory system determines which properties provide the highest predictive value. The results of Study 2 show that confidence in the first-impression bias drops if assumptions about the temporal
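
    The transition-statistics account of MMN can be caricatured in a few lines: keep leaky counts of tone-to-tone transitions and read out the surprisal of each new transition as a prediction-error proxy. This is only a toy sketch of the modeling idea (the forgetting constant and Laplace smoothing are my own choices, not the authors'):

```python
import math

def surprise_sequence(tones, decay=0.95):
    """Track first-order transition counts with exponential forgetting and
    return the surprisal (-log2 p) of each transition, a crude MMN-like
    prediction-error proxy: rare transitions score high, expected ones low."""
    counts = {}  # (prev, next) -> leaky count
    out = []
    prev = None
    for tone in tones:
        if prev is not None:
            row_total = sum(v for (a, _), v in counts.items() if a == prev)
            c = counts.get((prev, tone), 0.0)
            p = (c + 1.0) / (row_total + 2.0)  # Laplace-smoothed estimate
            out.append(-math.log2(p))
            for key in counts:                 # forget old evidence
                counts[key] *= decay
            counts[(prev, tone)] = counts.get((prev, tone), 0.0) + 1.0
        prev = tone
    return out

# Twenty standards followed by one deviant: the deviant transition is surprising.
surprisal = surprise_sequence(['S'] * 20 + ['D'])
```

    In this toy model the decay parameter plays the role of context stability: faster forgetting lowers the confidence (and hence the surprisal contrast) attached to the learned statistics.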

  11. The auditory representation of speech sounds in human motor cortex

    Science.gov (United States)

    Cheung, Connie; Hamilton, Liberty S; Johnson, Keith; Chang, Edward F

    2016-01-01

    In humans, listening to speech evokes neural responses in the motor cortex. This has been controversially interpreted as evidence that speech sounds are processed as articulatory gestures. However, it is unclear what information is actually encoded by such neural activity. We used high-density direct human cortical recordings while participants spoke and listened to speech sounds. Motor cortex neural patterns during listening were substantially different than during articulation of the same sounds. During listening, we observed neural activity in the superior and inferior regions of ventral motor cortex. During speaking, responses were distributed throughout somatotopic representations of speech articulators in motor cortex. The structure of responses in motor cortex during listening was organized along acoustic features similar to auditory cortex, rather than along articulatory features as during speaking. Motor cortex does not contain articulatory representations of perceived actions in speech, but rather, represents auditory vocal information. DOI: http://dx.doi.org/10.7554/eLife.12577.001 PMID:26943778

  12. Tinnitus alters resting state functional connectivity (RSFC) in human auditory and non-auditory brain regions as measured by functional near-infrared spectroscopy (fNIRS).

    Science.gov (United States)

    San Juan, Juan; Hu, Xiao-Su; Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory

    2017-01-01

    Tinnitus, or phantom sound perception, leads to increased spontaneous neural firing rates and enhanced synchrony in central auditory circuits in animal models. These putative physiologic correlates of tinnitus to date have not been well translated in the brain of the human tinnitus sufferer. Using functional near-infrared spectroscopy (fNIRS) we recently showed that tinnitus in humans leads to maintained hemodynamic activity in auditory and adjacent, non-auditory cortices. Here we used fNIRS technology to investigate changes in resting state functional connectivity between human auditory and non-auditory brain regions in normal-hearing, bilateral subjective tinnitus and controls before and after auditory stimulation. Hemodynamic activity was monitored over the region of interest (primary auditory cortex) and non-region of interest (adjacent non-auditory cortices) and functional brain connectivity was measured during a 60-second baseline/period of silence before and after a passive auditory challenge consisting of alternating pure tones (750 and 8000Hz), broadband noise and silence. Functional connectivity was measured between all channel-pairs. Prior to stimulation, connectivity of the region of interest to the temporal and fronto-temporal region was decreased in tinnitus participants compared to controls. Overall, connectivity in tinnitus was differentially altered as compared to controls following sound stimulation. Enhanced connectivity was seen in both auditory and non-auditory regions in the tinnitus brain, while controls showed a decrease in connectivity following sound stimulation. In tinnitus, the strength of connectivity was increased between auditory cortex and fronto-temporal, fronto-parietal, temporal, occipito-temporal and occipital cortices. 
Together these data suggest that central auditory and non-auditory brain regions are modified in tinnitus and that resting functional connectivity measured by fNIRS technology may contribute to conscious phantom
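
    Resting-state functional connectivity of the kind measured here is commonly computed as the Pearson correlation between every pair of channel time courses. A bare-bones sketch (illustrative only; a real fNIRS pipeline adds band-pass filtering, motion correction, and group statistics):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def connectivity_matrix(channels):
    """All-pairs correlation matrix over a list of channel time courses."""
    n = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(n)]
            for i in range(n)]

# Three toy channels: ch1 covaries with ch0, ch2 is anti-correlated with ch0.
channels = [[1.0, 2.0, 3.0, 4.0],
            [2.0, 4.0, 6.0, 8.0],
            [4.0, 3.0, 2.0, 1.0]]
conn = connectivity_matrix(channels)
```

    "Enhanced connectivity" in the abstract then corresponds to larger off-diagonal entries between auditory and non-auditory channel pairs after stimulation.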

  13. Transient human auditory cortex activation during volitional attention shifting.

    Science.gov (United States)

    Uhlig, Christian Harm; Gutschalk, Alexander

    2017-01-01

    While strong activation of auditory cortex is generally found for exogenous orienting of attention, endogenous, intra-modal shifting of auditory attention has not yet been demonstrated to evoke transient activation of the auditory cortex. Here, we used fMRI to test if endogenous shifting of attention is also associated with transient activation of the auditory cortex. In contrast to previous studies, attention shifts were completely self-initiated and not cued by transient auditory or visual stimuli. Stimuli were two dichotic, continuous streams of tones, whose perceptual grouping was not ambiguous. Participants were instructed to continuously focus on one of the streams and switch between the two after a while, indicating the time and direction of each attentional shift by pressing one of two response buttons. The BOLD response around the time of the button presses revealed robust activation of the auditory cortex, along with activation of a distributed task network. To test if the transient auditory cortex activation was specifically related to auditory orienting, a self-paced motor task was added, where participants were instructed to ignore the auditory stimulation while they pressed the response buttons in alternation and at a similar pace. Results showed that attentional orienting produced stronger activity in auditory cortex, but auditory cortex activation was also observed for button presses without focused attention to the auditory stimulus. The response related to attention shifting was stronger contralateral to the side where attention was shifted to. Contralateral-dominant activation was also observed in dorsal parietal cortex areas, confirming previous observations for auditory attention shifting in studies that used auditory cues.

  14. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Directory of Open Access Journals (Sweden)

    Yuko Hattori

    Full Text Available Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  15. Distractor Effect of Auditory Rhythms on Self-Paced Tapping in Chimpanzees and Humans.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2015-01-01

    Humans tend to spontaneously align their movements in response to visual (e.g., swinging pendulum) and auditory rhythms (e.g., hearing music while walking). Particularly in the case of the response to auditory rhythms, neuroscientific research has indicated that motor resources are also recruited while perceiving an auditory rhythm (or regular pulse), suggesting a tight link between the auditory and motor systems in the human brain. However, the evolutionary origin of spontaneous responses to auditory rhythms is unclear. Here, we report that chimpanzees and humans show a similar distractor effect in perceiving isochronous rhythms during rhythmic movement. We used isochronous auditory rhythms as distractor stimuli during self-paced alternate tapping of two keys of an electronic keyboard by humans and chimpanzees. When the tempo was similar to their spontaneous motor tempo, tapping onset was influenced by intermittent entrainment to auditory rhythms. Although this effect itself is not an advanced rhythmic ability such as dancing or singing, our results suggest that, to some extent, the biological foundation for spontaneous responses to auditory rhythms was already deeply rooted in the common ancestor of chimpanzees and humans, 6 million years ago. This also suggests the possibility of a common attentional mechanism, as proposed by the dynamic attending theory, underlying the effect of perceiving external rhythms on motor movement.

  16. Frequency-specific modulation of population-level frequency tuning in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Roberts Larry E

    2009-01-01

    Full Text Available Abstract Background Under natural circumstances, attention plays an important role in extracting relevant auditory signals from simultaneously present, irrelevant noises. Excitatory and inhibitory neural activity, enhanced by attentional processes, seems to sharpen frequency tuning, contributing to improved auditory performance especially in noisy environments. In the present study, we investigated auditory magnetic fields in humans that were evoked by pure tones embedded in band-eliminated noises during two different stimulus sequencing conditions (constant vs. random) under auditory focused attention by means of magnetoencephalography (MEG). Results In total, we used identical auditory stimuli between conditions, but presented them in a different order, thereby manipulating the neural processing and the auditory performance of the listeners. Constant stimulus sequencing blocks were characterized by the simultaneous presentation of pure tones of identical frequency with band-eliminated noises, whereas random sequencing blocks were characterized by the simultaneous presentation of pure tones of random frequencies and band-eliminated noises. We demonstrated that auditory evoked neural responses were larger in the constant sequencing compared to the random sequencing condition, particularly when the simultaneously presented noises contained narrow stop-bands. Conclusion The present study confirmed that population-level frequency tuning in human auditory cortex can be sharpened in a frequency-specific manner. This frequency-specific sharpening may contribute to improved auditory performance during detection and processing of relevant sound inputs characterized by specific frequency distributions in noisy environments.

  17. Human-Manipulator Interface Using Particle Filter

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2014-01-01

    Full Text Available This paper utilizes a human-robot interface system which incorporates particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand. PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases in a short period of time when the sensors fail to detect the hand motion. Therefore, a methodology to correct the translation error is required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.

  18. Human-manipulator interface using particle filter.

    Science.gov (United States)

    Du, Guanglong; Zhang, Ping; Wang, Xueqian

    2014-01-01

    This paper utilizes a human-robot interface system which incorporates particle filter (PF) and adaptive multispace transformation (AMT) to track the pose of the human hand for controlling the robot manipulator. This system employs a 3D camera (Kinect) to determine the orientation and the translation of the human hand. We use the Camshift algorithm to track the hand. PF is used to estimate the translation of the human hand. Although a PF is used for estimating the translation, the translation error increases in a short period of time when the sensors fail to detect the hand motion. Therefore, a methodology to correct the translation error is required. Moreover, owing to perceptual and motor limitations, it is difficult for a human operator to carry out high-precision operations. This paper proposes an adaptive multispace transformation (AMT) method to assist the operator in improving the accuracy and reliability of determining the pose of the robot. The human-robot interface system was experimentally tested in a lab environment, and the results indicate that such a system can successfully control a robot manipulator.

  19. Hemodynamic responses in human multisensory and auditory association cortex to purely visual stimulation

    Directory of Open Access Journals (Sweden)

    Baumann Simon

    2007-02-01

    Full Text Available Abstract Background Recent findings of a tight coupling between visual and auditory association cortices during multisensory perception in monkeys and humans raise the question whether consistent paired presentation of simple visual and auditory stimuli prompts conditioned responses in unimodal auditory regions or multimodal association cortex once visual stimuli are presented in isolation in a post-conditioning run. To address this issue fifteen healthy participants partook in a "silent" sparse temporal event-related fMRI study. In the first (visual control habituation) phase they were presented with briefly flashing red visual stimuli. In the second (auditory control habituation) phase they heard brief telephone ringing. In the third (conditioning) phase we coincidentally presented the visual stimulus (CS) paired with the auditory stimulus (UCS). In the fourth phase participants either viewed flashes paired with the auditory stimulus (maintenance, CS-) or viewed the visual stimulus in isolation (extinction, CS+) according to a 5:10 partial reinforcement schedule. The participants had no other task than attending to the stimuli and indicating the end of each trial by pressing a button. Results During unpaired visual presentations (preceding and following the paired presentation) we observed significant brain responses beyond primary visual cortex in the bilateral posterior auditory association cortex (planum temporale, planum parietale) and in the right superior temporal sulcus whereas the primary auditory regions were not involved. By contrast, the activity in auditory core regions was markedly larger when participants were presented with auditory stimuli. Conclusion These results demonstrate involvement of multisensory and auditory association areas in perception of unimodal visual stimulation which may reflect the instantaneous forming of multisensory associations and cannot be attributed to sensation of an auditory event. 
More importantly, we are able

  20. Empathy and the somatotopic auditory mirror system in humans

    NARCIS (Netherlands)

    Gazzola, Valeria; Aziz-Zadeh, Lisa; Keysers, Christian

    2006-01-01

    How do we understand the actions of other individuals if we can only hear them? Auditory mirror neurons respond both while monkeys perform hand or mouth actions and while they listen to sounds of similar actions [1, 2]. This system might be critical for auditory action understanding and language

  1. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.
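
    The ABA_ stimuli used in such streaming paradigms are straightforward to render: triplets of a low tone A, a higher tone B, A again, then a silent gap, with ΔF setting the A-B separation. A sketch (the sample rate, durations, and ΔF below are arbitrary illustration values, not the study's):

```python
import math

def aba_sequence(f_a=500.0, delta_f_semitones=7.0, tone_ms=100,
                 rate=16000, triplets=4):
    """Render an ABA_ triplet sequence (A, B, A, silence) as a sample list.

    The frequency difference is given in semitones: f_b = f_a * 2**(dF/12).
    """
    f_b = f_a * 2 ** (delta_f_semitones / 12)
    n = int(rate * tone_ms / 1000)          # samples per tone (and per gap)

    def tone(freq):
        return [math.sin(2 * math.pi * freq * t / rate) for t in range(n)]

    out = []
    for _ in range(triplets):
        for segment in (tone(f_a), tone(f_b), tone(f_a), [0.0] * n):
            out.extend(segment)
    return out

seq = aba_sequence(triplets=2)
```

    Small ΔF values tend to yield a single galloping stream, large ones two separate streams, which is exactly the manipulation the study exploits.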

  2. Comment on "An approximate transfer function for the dual-resonance nonlinear filter model of auditory frequency selectivity" [J. Acoust. Soc. Am. 114, 2112-2117 (2003)] (L)

    NARCIS (Netherlands)

    Duifhuis, H

    This letter concerns the paper "An approximate transfer function for the dual-resonance nonlinear filter model of auditory frequency selectivity" [E. A. Lopez-Poveda, J. Acoust. Soc. Am. 114, 2112-2117 (2003)]. It proposes a correction of the historical framework in which the paper is presented.

  3. "To ear is human, to frogive is divine": Bob Capranica's legacy to auditory neuroethology.

    Science.gov (United States)

    Simmons, Andrea Megela

    2013-03-01

    Bob Capranica was a towering figure in the field of auditory neuroethology. Among his many contributions are the exploitation of the anuran auditory system as a general vertebrate model for studying communication, the introduction of a signal processing approach for quantifying sender-receiver dynamics, and the concept of the matched filter for efficient neural processing of complex vocal signals. In this paper, meant to honor Bob on his election to Fellow of the International Society for Neuroethology, I provide a description and analysis of some of his most important research, and I highlight how the concepts and data he contributed still inspire neuroethology today.

  4. Modulatory effects of spectral energy contrasts on lateral inhibition in the human auditory cortex: an MEG study.

    Directory of Open Access Journals (Sweden)

    Alwina Stein

    Full Text Available We investigated the modulation of lateral inhibition in the human auditory cortex by means of magnetoencephalography (MEG). In the first experiment, five acoustic masking stimuli (MS), consisting of noise passing through a digital notch filter which was centered at 1 kHz, were presented. The spectral energy contrasts of four MS were modified systematically by either amplifying or attenuating the edge-frequency bands around the notch (EFB) by 30 dB. Additionally, the width of EFB amplification/attenuation was varied (3/8 or 7/8 octave on each side of the notch). N1m and auditory steady state responses (ASSR), evoked by a test stimulus with a carrier frequency of 1 kHz, were evaluated. A consistent dependence of N1m responses upon the preceding MS was observed. The minimal N1m source strength was found in the narrowest amplified EFB condition, representing pronounced lateral inhibition of neurons with characteristic frequencies corresponding to the center frequency of the notch (NOTCH CF) in secondary auditory cortical areas. We tested in a second experiment whether an even narrower bandwidth of EFB amplification would result in further enhanced lateral inhibition of the NOTCH CF. Here three MS were presented, two of which were modified by amplifying 1/8 or 1/24 octave EFB width around the notch. We found that N1m responses were again significantly smaller in both amplified EFB conditions as compared to the NFN condition. To our knowledge, this is the first study demonstrating that the energy and width of the EFB around the notch modulate lateral inhibition in human secondary auditory cortical areas. Because it is assumed that chronic tinnitus is caused by a lack of lateral inhibition, these new insights could be used as a tool for further improvement of tinnitus treatments focusing on the lateral inhibition of neurons corresponding to the tinnitus frequency, such as the tailor-made notched music training.
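
    Stimuli of this family (a notch carved out of noise, with amplified or attenuated edge-frequency bands) can be sketched by additive synthesis: sum random-phase partials and scale each by its band. This is a rough illustration under my own parameter choices, not the study's stimulus generation code:

```python
import math
import random

def notched_noise(center=1000.0, notch_oct=0.5, efb_oct=0.375,
                  efb_gain_db=30.0, dur=0.1, rate=8000, seed=0):
    """Additive-synthesis sketch of a notched masking noise.

    Partials inside the notch (notch_oct wide around `center`) are removed;
    partials in the edge-frequency bands (efb_oct wide on each side) are
    scaled by efb_gain_db; all remaining partials keep unit amplitude.
    """
    rng = random.Random(seed)
    lo = center / 2 ** (notch_oct / 2)                 # notch edges
    hi = center * 2 ** (notch_oct / 2)
    efb_lo, efb_hi = lo / 2 ** efb_oct, hi * 2 ** efb_oct
    gain = 10 ** (efb_gain_db / 20)
    n = int(dur * rate)
    samples = [0.0] * n
    for f in range(50, rate // 2, 10):                 # 10-Hz-spaced partials
        if lo <= f <= hi:
            continue                                   # inside the notch
        amp = gain if (efb_lo <= f < lo or hi < f <= efb_hi) else 1.0
        phase = rng.uniform(0.0, 2.0 * math.pi)
        w = 2.0 * math.pi * f / rate
        for t in range(n):
            samples[t] += amp * math.sin(w * t + phase)
    return samples

def dft_mag(samples, f, rate):
    """Magnitude of one DFT bin (f must align with the rate/len spacing)."""
    re = sum(s * math.cos(2 * math.pi * f * t / rate)
             for t, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * t / rate)
             for t, s in enumerate(samples))
    return math.hypot(re, im)

noise = notched_noise()
```

    With the defaults, the notch spans roughly 841-1189 Hz, so energy near 1 kHz vanishes while partials in the edge bands (e.g., around 700 Hz) sit about 30 dB above the flat remainder of the spectrum.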

  5. Coding of melodic gestalt in human auditory cortex.

    Science.gov (United States)

    Schindler, Andreas; Herdener, Marcus; Bartels, Andreas

    2013-12-01

    The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them: the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.

  6. The effect of precision and power grips on activations in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Patrik Alexander Wikman

    2015-10-01

    Full Text Available The neuroanatomical pathways interconnecting auditory and motor cortices play a key role in current models of human auditory cortex (AC). Evidently, auditory-motor interaction is important in speech and music production, but the significance of these cortical pathways in other auditory processing is not well known. We investigated the general effects of motor responding on AC activations to sounds during auditory and visual tasks. During all task blocks, subjects detected targets in the designated modality, reported the relative number of targets at the end of the block, and ignored the stimuli presented in the opposite modality. In each block, they were also instructed to respond to targets either using a precision grip, power grip, or to give no overt target responses. We found that motor responding strongly modulated AC activations. First, during both visual and auditory tasks, activations in widespread regions of AC decreased when subjects made precision and power grip responses to targets. Second, activations in AC were modulated by grip type during the auditory but not during the visual task. Further, the motor effects were distinct from the strong attention-related modulations in AC. These results are consistent with the idea that operations in AC are shaped by its connections with motor cortical regions.

  7. Categorical vowel perception enhances the effectiveness and generalization of auditory feedback in human-machine-interfaces.

    Directory of Open Access Journals (Sweden)

    Eric Larson

    Full Text Available Human-machine interface (HMI) designs offer the possibility of improving quality of life for patient populations as well as augmenting normal user function. Despite pragmatic benefits, utilizing auditory feedback for HMI control remains underutilized, in part due to observed limitations in effectiveness. The goal of this study was to determine the extent to which categorical speech perception could be used to improve an auditory HMI. Using surface electromyography, 24 healthy speakers of American English participated in 4 sessions to learn to control an HMI using auditory feedback (provided via vowel synthesis). Participants trained on 3 targets in sessions 1-3 and were tested on 3 novel targets in session 4. An "established categories with text cues" group of eight participants were trained and tested on auditory targets corresponding to standard American English vowels using auditory and text target cues. An "established categories without text cues" group of eight participants were trained and tested on the same targets using only auditory cuing of target vowel identity. A "new categories" group of eight participants were trained and tested on targets that corresponded to vowel-like sounds not part of American English. Analyses of user performance revealed significant effects of session and group (established categories groups and the new categories group), and a trend for an interaction between session and group. Results suggest that auditory feedback can be effectively used for HMI operation when paired with established categorical (native) vowel targets with an unambiguous cue.

  8. Segmental processing in the human auditory dorsal stream.

    Science.gov (United States)

    Zaehle, Tino; Geiser, Eveline; Alter, Kai; Jancke, Lutz; Meyer, Martin

    2008-07-18

    In the present study we investigated the functional organization of sublexical auditory perception with specific respect to auditory spectro-temporal processing in speech and non-speech sounds. Participants discriminated verbal and nonverbal auditory stimuli according to either spectral or temporal acoustic features in the context of a sparse event-related functional magnetic resonance imaging (fMRI) study. Based on recent models of speech processing, we hypothesized that auditory segmental processing, as is required in the discrimination of speech and non-speech sound according to its temporal features, will lead to a specific involvement of a left-hemispheric dorsal processing network comprising the posterior portion of the inferior frontal cortex and the inferior parietal lobe. In agreement with our hypothesis results revealed significant responses in the posterior part of the inferior frontal gyrus and the parietal operculum of the left hemisphere when participants had to discriminate speech and non-speech stimuli based on subtle temporal acoustic features. In contrast, when participants had to discriminate speech and non-speech stimuli on the basis of changes in the frequency content, we observed bilateral activations along the middle temporal gyrus and superior temporal sulcus. The results of the present study demonstrate an involvement of the dorsal pathway in the segmental sublexical analysis of speech sounds as well as in the segmental acoustic analysis of non-speech sounds with analogous spectro-temporal characteristics.

  9. Active stream segregation specifically involves the left human auditory cortex.

    Science.gov (United States)

    Deike, Susann; Scheich, Henning; Brechmann, André

    2010-06-14

    An important aspect of auditory scene analysis is the sequential grouping of similar sounds into one "auditory stream" while keeping competing streams separate. In the present low-noise fMRI study we presented sequences of alternating high-pitch (A) and low-pitch (B) complex harmonic tones using acoustic parameters that allow the perception of either two separate streams or one alternating stream. However, the subjects were instructed to actively and continuously segregate the A from the B stream. This was controlled by the additional instruction to listen for rare level deviants only in the low-pitch stream. Compared to the control condition, in which only one non-separable stream was presented, the active segregation of the A from the B stream led to a selective increase of activation in the left auditory cortex (AC). Together with a similar finding from a previous study using a different acoustic cue for streaming, namely timbre, this suggests that the left auditory cortex plays a dominant role in active sequential stream segregation. However, we found cue differences within the left AC: whereas in the posterior areas, including the planum temporale, activation increased for both acoustic cues, the anterior areas, including Heschl's gyrus, were only involved in stream segregation based on pitch.

  10. Auditory-Visual Perception of Changing Distance by Human Infants.

    Science.gov (United States)

    Walker-Andrews, Arlene S.; Lennon, Elizabeth M.

    1985-01-01

    Examines, in two experiments, 5-month-old infants' sensitivity to auditory-visual specification of distance and direction of movement. One experiment presented two films with soundtracks in either a match or mismatch condition; the second showed the two films side-by-side with a single soundtrack appropriate to one. Infants demonstrated visual…

  11. Neuronal representations of distance in human auditory cortex.

    Science.gov (United States)

    Kopčo, Norbert; Huang, Samantha; Belliveau, John W; Raij, Tommi; Tengshe, Chinmayi; Ahveninen, Jyrki

    2012-07-03

    Neuronal mechanisms of auditory distance perception are poorly understood, largely because contributions of intensity and distance processing are difficult to differentiate. Typically, the received intensity increases when sound sources approach us. However, we can also distinguish between soft-but-nearby and loud-but-distant sounds, indicating that distance processing can also be based on intensity-independent cues. Here, we combined behavioral experiments, fMRI measurements, and computational analyses to identify the neural representation of distance independent of intensity. In a virtual reverberant environment, we simulated sound sources at varying distances (15-100 cm) along the right-side interaural axis. Our acoustic analysis suggested that, of the individual intensity-independent depth cues available for these stimuli, direct-to-reverberant ratio (D/R) is more reliable and robust than interaural level difference (ILD). However, on the basis of our behavioral results, subjects' discrimination performance was more consistent with complex intensity-independent distance representations, combining both available cues, than with representations on the basis of either D/R or ILD individually. fMRI activations to sounds varying in distance (containing all cues, including intensity), compared with activations to sounds varying in intensity only, were significantly increased in the planum temporale and posterior superior temporal gyrus contralateral to the direction of stimulation. This fMRI result suggests that neurons in posterior nonprimary auditory cortices, in or near the areas processing other auditory spatial features, are sensitive to intensity-independent sound properties relevant for auditory distance perception.
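    The direct-to-reverberant ratio (D/R) cue analyzed above is conventionally estimated from a room impulse response by comparing the energy in a short window around the direct-path arrival with the remaining reverberant energy. A minimal sketch of that convention (the 2.5 ms window and the function name are illustrative choices, not taken from the study):

    ```python
    import numpy as np

    def direct_to_reverberant_db(rir, fs, window_ms=2.5):
        """D/R ratio (dB) of a room impulse response `rir` sampled at `fs` Hz.
        The direct part is the energy within +/- window_ms of the strongest
        peak; everything outside that window counts as reverberant energy."""
        peak = int(np.argmax(np.abs(rir)))
        n_win = int(round(window_ms * 1e-3 * fs))
        direct = rir[max(0, peak - n_win): peak + n_win]
        e_direct = float(np.sum(direct ** 2))
        e_reverb = float(np.sum(rir ** 2)) - e_direct
        return 10.0 * np.log10(e_direct / e_reverb)
    ```

    As a source recedes in a reverberant room, the direct energy falls while the reverberant energy stays roughly constant, so this ratio decreases with distance regardless of overall level, which is what makes it usable as an intensity-independent distance cue.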

  12. Task-specific reorganization of the auditory cortex in deaf humans.

    Science.gov (United States)

    Bola, Łukasz; Zimmermann, Maria; Mostowski, Piotr; Jednoróg, Katarzyna; Marchewka, Artur; Rutkowski, Paweł; Szwed, Marcin

    2017-01-24

    The principles that guide large-scale cortical reorganization remain unclear. In the blind, several visual regions preserve their task specificity; ventral visual areas, for example, become engaged in auditory and tactile object-recognition tasks. It remains open whether task-specific reorganization is unique to the visual cortex or, alternatively, whether this kind of plasticity is a general principle applying to other cortical areas. Auditory areas can become recruited for visual and tactile input in the deaf. Although nonhuman data suggest that this reorganization might be task specific, human evidence has been lacking. Here we enrolled 15 deaf and 15 hearing adults into a functional MRI experiment during which they discriminated between temporally complex sequences of stimuli (rhythms). Both deaf and hearing subjects performed the task visually, in the central visual field. In addition, hearing subjects performed the same task in the auditory modality. We found that the visual task robustly activated the auditory cortex in deaf subjects, peaking in the posterior-lateral part of high-level auditory areas. This activation pattern was strikingly similar to the pattern found in hearing subjects performing the auditory version of the task. Although performing the visual task in deaf subjects induced an increase in functional connectivity between the auditory cortex and the dorsal visual cortex, no such effect was found in hearing subjects. We conclude that in deaf humans the high-level auditory cortex switches its input modality from sound to vision but preserves its task-specific activation pattern independent of input modality. Task-specific reorganization thus might be a general principle that guides cortical plasticity in the brain.

  13. Exponential processes in human auditory excitation and adaptation.

    Science.gov (United States)

    Formby, C; Rutledge, J C; Sherlock, L P

    2002-02-01

    Peripheral auditory adaptation has been studied extensively in animal models, and multiple exponential components have been identified. This study explores the feasibility of estimating these component processes for human listeners with a peripheral model of adaptation. The processes were estimated from off-frequency masked detection data that probed temporal masking responses to a gated narrowband masker. The resulting response patterns reflected step-like onset and offset features with characteristically little evidence of confounding backward and forward masking. The model was implemented with linear combinations of exponential functions to represent the unadapted excitation response to gating the masker on and then off and the opposing effects of adaptation in each instance. The onset and offset of the temporal masking response were assumed to be approximately inverse operations and were modeled independently in this scheme. The unadapted excitation response at masker onset and the reversed excitation response at masker offset were each represented in the model by a single exponential function. The adaptation processes were modeled by three independent exponential functions, which were reversed at masker offset. Each adaptation component was subtractive and partially negated the unadapted excitation response to the dynamic masker. This scheme allowed for quantification of the response amplitude, action latency, and time constant for the unadapted excitation component and for each adaptation component. 
The results reveal that (1) the amplitudes of the unadapted excitation and reversed excitation components grow nonlinearly with masker level and mirror the 'compressive' input-output velocity response of the basilar membrane; (2) the time constants for the unadapted excitation and reversed excitation components are related inversely to masker intensity, which is compatible with neural synchrony increasing at masker onset (or offset) with increasing masker strength
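    The component scheme described above can be sketched as a linear combination of exponentials: a single unadapted excitation term that rises after masker onset, minus subtractive adaptation terms, each with its own amplitude, action latency, and time constant. The parameter values below are illustrative placeholders, not the authors' fitted estimates:

    ```python
    import numpy as np

    def onset_response(t_ms, a_exc, tau_exc, adapt_params):
        """Modeled temporal masking response to masker onset: an unadapted
        excitation component (single exponential with amplitude a_exc and
        time constant tau_exc) minus subtractive adaptation components,
        each given as (amplitude, latency_ms, time_constant_ms)."""
        r = a_exc * (1.0 - np.exp(-t_ms / tau_exc))
        for a_i, lat_i, tau_i in adapt_params:
            t_delayed = np.clip(t_ms - lat_i, 0.0, None)  # inactive before its latency
            r -= a_i * (1.0 - np.exp(-t_delayed / tau_i))
        return r

    # One excitation component plus three adaptation components (illustrative values):
    t = np.linspace(0.0, 300.0, 601)
    resp = onset_response(t, a_exc=1.0, tau_exc=5.0,
                          adapt_params=[(0.2, 10.0, 20.0),
                                        (0.15, 30.0, 60.0),
                                        (0.1, 80.0, 150.0)])
    ```

    At masker offset the abstract describes the approximately inverse operation: the same exponentials reversed, so the modeled response steps down and the adaptation components recover.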

  14. Functional maps of human auditory cortex: effects of acoustic features and attention.

    Directory of Open Access Journals (Sweden)

    David L Woods

    Full Text Available BACKGROUND: While human auditory cortex is known to contain tonotopically organized auditory cortical fields (ACFs), little is known about how processing in these fields is modulated by other acoustic features or by attention. METHODOLOGY/PRINCIPAL FINDINGS: We used functional magnetic resonance imaging (fMRI) and population-based cortical surface analysis to characterize the tonotopic organization of human auditory cortex and analyze the influence of tone intensity, ear of delivery, scanner background noise, and intermodal selective attention on auditory cortex activations. Medial auditory cortex surrounding Heschl's gyrus showed large sensory (unattended) activations with two mirror-symmetric tonotopic fields similar to those observed in non-human primates. Sensory responses in medial regions had symmetrical distributions with respect to the left and right hemispheres, were enlarged for tones of increased intensity, and were enhanced when sparse image acquisition reduced scanner acoustic noise. Spatial distribution analysis suggested that changes in tone intensity shifted activation within isofrequency bands. Activations to monaural tones were enhanced over the hemisphere contralateral to stimulation, where they produced activations similar to those produced by binaural sounds. Lateral regions of auditory cortex showed small sensory responses that were larger in the right than left hemisphere, lacked tonotopic organization, and were uninfluenced by acoustic parameters. Sensory responses in both medial and lateral auditory cortex decreased in magnitude throughout stimulus blocks. Attention-related modulations (ARMs) were larger in lateral than medial regions of auditory cortex and appeared to arise primarily in belt and parabelt auditory fields. ARMs lacked tonotopic organization, were unaffected by acoustic parameters, and had distributions that were distinct from those of sensory responses. 
Unlike the gradual adaptation seen for sensory responses

  15. Auditory evoked fields elicited by spectral, temporal, and spectral-temporal changes in human cerebral cortex

    Directory of Open Access Journals (Sweden)

    Hidehiko eOkamoto

    2012-05-01

    Full Text Available Natural sounds contain complex spectral components, which are temporally modulated as time-varying signals. Recent studies have suggested that the auditory system encodes spectral and temporal sound information differently. However, it remains unresolved how the human brain processes sounds containing both spectral and temporal changes. In the present study, we investigated human auditory evoked responses elicited by spectral, temporal, and spectral-temporal sound changes by means of magnetoencephalography (MEG). The auditory evoked responses elicited by the spectral-temporal change were very similar to those elicited by the spectral change, but those elicited by the temporal change were delayed by 30-50 ms and differed from the others in morphology. The results suggest that human brain responses corresponding to spectral sound changes precede those corresponding to temporal sound changes, even when the spectral and temporal changes occur simultaneously.

  16. Functional changes in the human auditory cortex in ageing.

    Directory of Open Access Journals (Sweden)

    Oliver Profant

    Full Text Available Hearing loss, presbycusis, is one of the most common sensory declines in the ageing population. Presbycusis is characterised by a deterioration in the processing of temporal sound features as well as a decline in speech perception, thus indicating a possible central component. With the aim to explore the central component of presbycusis, we studied the function of the auditory cortex by functional MRI in two groups of elderly subjects (>65 years) and compared the results with young subjects (auditory cortex. The fMRI showed only minimal activation in response to the 8 kHz stimulation, despite the fact that all subjects heard the stimulus. Both elderly groups showed greater activation in response to acoustical stimuli in the temporal lobes in comparison with young subjects. In addition, activation in the right temporal lobe was more expressed than in the left temporal lobe in both elderly groups, whereas in the young control subjects (YC) leftward lateralization was present. No statistically significant differences in activation of the auditory cortex were found between the MP and EP groups. The greater extent of cortical activation in elderly subjects in comparison with young subjects, with an asymmetry towards the right side, may serve as a compensatory mechanism for the impaired processing of auditory information appearing as a consequence of ageing.

  17. Auditory filtering and the discrimination of spectral shapes by normal and hearing-impaired subjects.

    Science.gov (United States)

    Turner, C W; Holte, L A; Relkin, E

    1987-01-01

    A review of the literature suggests that many hearing-impaired patients suffer from sensory deficits in addition to the reduced audibility of speech signals. Poor frequency resolution, or abnormal spread of masking, is a consistently identified deficit in sensorineural hearing loss. Frequency resolution was measured in individual subjects using the input filter pattern paradigm, and the minimum detectable amplitude of a second-formant spectral peak in a spectral-shape discrimination task was also determined for each subject. The two tasks were designed to test the identical frequency regions in each subject. A nearly perfect correlation was found between the degree of frequency resolution as measured by the input filter pattern and performance on the spectral-shape discrimination task. These results suggest that measures of frequency selectivity may offer predictive value as to the degree of impairment that individual hearing-impaired patients may have in perceiving the spectral characteristics of speech, and also lead to suggestions for signal processing strategies to aid these patients.

  18. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-01-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes. PMID:26233047
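    The equivalent rectangular bandwidth (ERB) measures referred to here are conventionally computed from the Glasberg and Moore (1990) fit to notched-noise data. A sketch of that standard formula (the formula is standard in psychoacoustics; it is not code from this study):

    ```python
    import math

    def erb_bandwidth(f_hz):
        """ERB (Hz) of the normal auditory filter centered at f_hz,
        per Glasberg & Moore (1990): ERB = 24.7 * (4.37 * f/1000 + 1)."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def erb_rate(f_hz):
        """ERB-number (Cams) of frequency f_hz on the ERB-rate scale."""
        return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)
    ```

    At 1 kHz this gives an ERB of about 133 Hz; broadened auditory filters in sensorineural hearing loss show up as notched-noise bandwidth estimates well above this normal value.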

  19. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing.

    Science.gov (United States)

    Davies-Venn, Evelyn; Nelson, Peggy; Souza, Pamela

    2015-07-01

    Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods has been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.

  20. The effect of head-related filtering and ear-specific decoding bias on auditory attention detection

    Science.gov (United States)

    Das, Neetha; Biesmans, Wouter; Bertrand, Alexander; Francart, Tom

    2016-10-01

    Objective. We consider the problem of Auditory Attention Detection (AAD), where the goal is to detect which speaker a person is attending to, in a multi-speaker environment, based on neural activity. This work aims to analyze the influence of head-related filtering and ear-specific decoding on the performance of an AAD algorithm. Approach. We recorded high-density EEG of 16 normal-hearing subjects as they listened to two speech streams while tasked to attend to the speaker in either their left or right ear. The attended ear was switched between trials. The speech stimuli were administered either dichotically, or after filtering using Head-Related Transfer Functions (HRTFs). A spatio-temporal decoder was trained and used to reconstruct the attended stimulus envelope, and the correlations between the reconstructed and the original stimulus envelopes were used to perform AAD, and arrive at a percentage correct score over all trials. Main results. We found that the HRTF condition resulted in significantly higher AAD performance than the dichotic condition. However, speech intelligibility, measured under the same set of conditions, was lower for the HRTF filtered stimuli. We also found that decoders trained and tested for a specific attended ear performed better, compared to decoders trained and tested for both left and right attended ear simultaneously. In the context of the decoders supporting hearing prostheses, the former approach is less realistic, and studies in which each subject always had to attend to the same ear may find over-optimistic results. Significance. This work shows the importance of using realistic binaural listening conditions and training on a balanced set of experimental conditions to obtain results that are more representative for the true AAD performance in practical applications. 
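    The decision rule of such correlation-based AAD can be sketched as below; the training of the spatio-temporal decoder that produces the reconstructed envelope (typically regularized least squares over EEG channels and time lags) is omitted, and all names are illustrative:

    ```python
    import numpy as np

    def decide_attended(reconstructed, env_left, env_right):
        """Pick the attended stream: the speech envelope that correlates
        more strongly with the envelope reconstructed from EEG."""
        r_left = np.corrcoef(reconstructed, env_left)[0, 1]
        r_right = np.corrcoef(reconstructed, env_right)[0, 1]
        return "left" if r_left > r_right else "right"
    ```

    The percentage correct score over all trials is then simply the fraction of trials in which this decision matches the instructed attended ear.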

  1. Positive and negative reinforcement activate human auditory cortex.

    Science.gov (United States)

    Weis, Tina; Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-01-01

    Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, where participants had to judge the duration of frequency modulated tones. In one session correct performance resulted in financial gains at the end of the trial, in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded as well as correct performance in the punishment condition resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cent) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward and when avoiding financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in right middle temporal gyrus, bilateral insula and pre-supplemental motor area; here, however, neural activity was lower after correct responses compared to incorrect responses. To summarize, this study shows that the activation of sensory cortices, as previously shown for gaining a reward, is also seen when avoiding a loss.

  2. Positive and negative reinforcement activate human auditory cortex

    Directory of Open Access Journals (Sweden)

    Tina eWeis

    2013-12-01

    Full Text Available Prior studies suggest that reward modulates neural activity in sensory cortices, but less is known about punishment. We used functional magnetic resonance imaging and an auditory discrimination task, where participants had to judge the duration of frequency modulated tones. In one session correct performance resulted in financial gains at the end of the trial, in a second session incorrect performance resulted in financial loss. Incorrect performance in the rewarded as well as correct performance in the punishment condition resulted in a neutral outcome. The size of gains and losses was either low or high (10 or 50 Euro cent) depending on the direction of frequency modulation. We analyzed neural activity at the end of the trial, during reinforcement, and found increased neural activity in auditory cortex when gaining a financial reward as compared to gaining no reward and when avoiding financial loss as compared to receiving a financial loss. This was independent of the size of gains and losses. A similar pattern of neural activity for both gaining a reward and avoiding a loss was also seen in right middle temporal gyrus, bilateral insula and pre-supplemental motor area; here, however, neural activity was lower after correct responses compared to incorrect responses. To summarize, this study shows that the activation of sensory cortices, as previously shown for gaining a reward, is also seen when avoiding a loss.

  3. Early influence of auditory stimuli on upper-limb movements in young human infants: an overview

    Directory of Open Access Journals (Sweden)

    Priscilla Augusta Monteiro Ferronato

    2014-09-01

    Full Text Available Given that the auditory system is rather well developed at the end of the third trimester of pregnancy, it is likely that couplings between acoustics and motor activity can be integrated as early as at the beginning of postnatal life. The aim of the present mini-review was to summarize and discuss studies on early auditory-motor integration, focusing particularly on upper-limb movements (one of the most crucial means to interact with the environment) in association with auditory stimuli, to develop further understanding of their significance with regard to early infant development. Many studies have investigated the relationship between various infant behaviors (e.g., sucking, visual fixation, head turning) and auditory stimuli, and established that human infants can be observed displaying couplings between action and environmental sensory stimulation already from just after birth, clearly indicating a propensity for intentional behavior. Surprisingly few studies, however, have investigated the associations between upper-limb movements and different auditory stimuli in newborns and young infants, infants born at risk for developmental disorders/delays in particular. Findings from studies of early auditory-motor interaction support that the developing integration of sensory and motor systems is a fundamental part of the process guiding the development of goal-directed action in infancy, of great importance for continued motor, perceptual and cognitive development. At-risk infants (e.g., those born preterm) may display increasing central auditory processing disorders, negatively affecting early sensory-motor integration, and resulting in long-term consequences on gesturing, language development and social communication. Consequently, there is a need for more studies on such implications

  4. Mapping the Tonotopic Organization in Human Auditory Cortex with Minimally Salient Acoustic Stimulation

    NARCIS (Netherlands)

    Langers, Dave R. M.; van Dijk, Pim

    2012-01-01

    Despite numerous neuroimaging studies, the tonotopic organization in human auditory cortex is not yet unambiguously established. In this functional magnetic resonance imaging study, 20 subjects were presented with low-level task-irrelevant tones to avoid spread of cortical activation. Data-driven an

  5. Speaking modifies voice-evoked activity in the human auditory cortex.

    Science.gov (United States)

    Curio, G; Neuloh, G; Numminen, J; Jousmäki, V; Hari, R

    2000-04-01

    The voice we most often hear is our own, and proper interaction between speaking and hearing is essential for both acquisition and performance of spoken language. Disturbed audiovocal interactions have been implicated in aphasia, stuttering, and schizophrenic voice hallucinations, but paradigms for a noninvasive assessment of auditory self-monitoring of speaking and its possible dysfunctions are rare. Using magnetoencephalography, we show here that self-uttered syllables transiently activate the speaker's auditory cortex around 100 ms after voice onset. These phasic responses were delayed by 11 ms in the speech-dominant left hemisphere relative to the right, whereas during listening to a replay of the same utterances the response latencies were symmetric. Moreover, the auditory cortices did not react to rare vowel changes interspersed randomly within a series of repetitively spoken vowels, in contrast to regular change-related responses evoked 100-200 ms after replayed rare vowels. Thus, speaking primes the human auditory cortex at a millisecond time scale, dampening and delaying reactions to self-produced "expected" sounds, more prominently in the speech-dominant hemisphere. Such motor-to-sensory priming of early auditory cortex responses during voicing constitutes one element of speech self-monitoring that could be compromised in central speech disorders.

  6. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.

    Science.gov (United States)

    Zhu, Lin L; Beauchamp, Michael S

    2017-03-08

    Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS. SIGNIFICANCE STATEMENT: Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. 
Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of

  7. Processing of location and pattern changes of natural sounds in the human auditory cortex.

    Science.gov (United States)

    Altmann, Christian F; Bledowski, Christoph; Wibral, Michael; Kaiser, Jochen

    2007-04-15

    Parallel cortical pathways have been proposed for the processing of auditory pattern and spatial information, respectively. We tested this segregation with human functional magnetic resonance imaging (fMRI) and separate electroencephalographic (EEG) recordings in the same subjects who listened passively to four sequences of repetitive spatial animal vocalizations in an event-related paradigm. Transitions between sequences constituted either a change of auditory pattern, location, or both pattern+location. This procedure allowed us to investigate the cortical correlates of natural auditory "what" and "where" changes independent of differences in the individual stimuli. For pattern changes, we observed significantly increased fMRI responses along the bilateral anterior superior temporal gyrus and superior temporal sulcus, the planum polare, lateral Heschl's gyrus and anterior planum temporale. For location changes, significant increases of fMRI responses were observed in bilateral posterior superior temporal gyrus and planum temporale. An overlap of these two types of changes occurred in the lateral anterior planum temporale and posterior superior temporal gyrus. The analysis of source event-related potentials (ERPs) revealed faster processing of location than pattern changes. Thus, our data suggest that passive processing of auditory spatial and pattern changes is dissociated both temporally and anatomically in the human brain. The predominant role of more anterior aspects of the superior temporal lobe in sound identity processing supports the role of this area as part of the auditory pattern processing stream, while spatial processing of auditory stimuli appears to be mediated by the more posterior parts of the superior temporal lobe.

  8. Rapid Increase in Neural Conduction Time in the Adult Human Auditory Brainstem Following Sudden Unilateral Deafness.

    Science.gov (United States)

    Maslin, M R D; Lloyd, S K; Rutherford, S; Freeman, S; King, A; Moore, D R; Munro, K J

    2015-10-01

    Individuals with sudden unilateral deafness offer a unique opportunity to study plasticity of the binaural auditory system in adult humans. Stimulation of the intact ear results in increased activity in the auditory cortex. However, there are no reports of changes at sub-cortical levels in humans. Therefore, the aim of the present study was to investigate changes in sub-cortical activity immediately before and after the onset of surgically induced unilateral deafness in adult humans. Click-evoked auditory brainstem responses (ABRs) to stimulation of the healthy ear were recorded from ten adults during the course of translabyrinthine surgery for the removal of a unilateral acoustic neuroma. This surgical technique always results in abrupt deafferentation of the affected ear. The results revealed a rapid (within minutes) reduction in latency of wave V (mean pre = 6.55 ms; mean post = 6.15 ms; p < 0.001). A latency reduction was also observed for wave III (mean pre = 4.40 ms; mean post = 4.13 ms; p < 0.001). These reductions in response latency are consistent with functional changes including disinhibition and/or more rapid intra-cellular signalling affecting binaurally sensitive neurons in the central auditory system. The results are highly relevant for improved understanding of putative physiological mechanisms underlying perceptual disorders such as tinnitus and hyperacusis.
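
    The reported group means allow a quick arithmetic check of the latency changes. The sketch below recomputes the shifts and the wave III-V interpeak interval from the means quoted in the abstract; the helper function and variable names are ours, and only the four pre/post means come from the study.

```python
# Illustrative arithmetic on the group means reported in the abstract.
# Only the pre/post latencies are from the study; everything else is ours.

def latency_shift_ms(pre_ms: float, post_ms: float) -> float:
    """Reduction in response latency (positive = faster after deafferentation)."""
    return pre_ms - post_ms

# Group means from the abstract (ms)
wave_iii = {"pre": 4.40, "post": 4.13}
wave_v = {"pre": 6.55, "post": 6.15}

shift_iii = latency_shift_ms(wave_iii["pre"], wave_iii["post"])  # 0.27 ms
shift_v = latency_shift_ms(wave_v["pre"], wave_v["post"])        # 0.40 ms

# The III-V interpeak interval isolates conduction time between the two
# generators; it also shortens, consistent with faster central conduction.
ipl_pre = wave_v["pre"] - wave_iii["pre"]     # 2.15 ms
ipl_post = wave_v["post"] - wave_iii["post"]  # 2.02 ms
```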

  9. A review of the history, development and application of auditory weighting functions in humans and marine mammals.

    Science.gov (United States)

    Houser, Dorian S; Yost, William; Burkard, Robert; Finneran, James J; Reichmuth, Colleen; Mulsow, Jason

    2017-03-01

    This document reviews the history, development, and use of auditory weighting functions for noise impact assessment in humans and marine mammals. Advances from the modern era of electroacoustics, psychophysical studies of loudness, and other related hearing studies are reviewed with respect to the development and application of human auditory weighting functions, particularly A-weighting. The use of auditory weighting functions to assess the effects of environmental noise on humans, such as in hearing damage-risk criteria, is presented, along with lower-level effects such as annoyance and masking. The article also reviews marine mammal auditory weighting functions, the development of which has been fundamentally directed by the objective of predicting and preventing noise-induced hearing loss. Compared to the development of human auditory weighting functions, the development of marine mammal auditory weighting functions has faced additional challenges, including a large number of species that must be considered, a lack of audiometric information on most species, and small sample sizes for nearly all species for which auditory data are available. The review concludes with research recommendations to address data gaps and assumptions underlying marine mammal auditory weighting function design and application.

  10. Consonance and dissonance of musical chords: neural correlates in auditory cortex of monkeys and humans.

    Science.gov (United States)

    Fishman, Y I; Volkov, I O; Noh, M D; Garell, P C; Bakken, H; Arezzo, J C; Howard, M A; Steinschneider, M

    2001-12-01

    Some musical chords sound pleasant, or consonant, while others sound unpleasant, or dissonant. Helmholtz's psychoacoustic theory of consonance and dissonance attributes the perception of dissonance to the sensation of "beats" and "roughness" caused by interactions in the auditory periphery between adjacent partials of complex tones comprising a musical chord. Conversely, consonance is characterized by the relative absence of beats and roughness. Physiological studies in monkeys suggest that roughness may be represented in primary auditory cortex (A1) by oscillatory neuronal ensemble responses phase-locked to the amplitude-modulated temporal envelope of complex sounds. However, it remains unknown whether phase-locked responses also underlie the representation of dissonance in auditory cortex. In the present study, responses evoked by musical chords with varying degrees of consonance and dissonance were recorded in A1 of awake macaques and evaluated using auditory-evoked potential (AEP), multiunit activity (MUA), and current-source density (CSD) techniques. In parallel studies, intracranial AEPs evoked by the same musical chords were recorded directly from the auditory cortex of two human subjects undergoing surgical evaluation for medically intractable epilepsy. Chords were composed of two simultaneous harmonic complex tones. The magnitude of oscillatory phase-locked activity in A1 of the monkey correlates with the perceived dissonance of the musical chords. Responses evoked by dissonant chords, such as minor and major seconds, display oscillations phase-locked to the predicted difference frequencies, whereas responses evoked by consonant chords, such as octaves and perfect fifths, display little or no phase-locked activity. AEPs recorded in Heschl's gyrus display strikingly similar oscillatory patterns to those observed in monkey A1, with dissonant chords eliciting greater phase-locked activity than consonant chords. In contrast to recordings in Heschl's gyrus

  11. Innervation of the Human Cavum Conchae and Auditory Canal: Anatomical Basis for Transcutaneous Auricular Nerve Stimulation

    Science.gov (United States)

    Bermejo, P.; López, M.; Larraya, I.; Chamorro, J.; Cobo, J. L.; Ordóñez, S.

    2017-01-01

    The innocuous transcutaneous stimulation of nerves supplying the outer ear has been demonstrated to be as effective as the invasive direct stimulation of the vagus nerve for the treatment of some neurological and nonneurological disturbances. Thus, the precise knowledge of external ear innervation is of maximal interest for the design of transcutaneous auricular nerve stimulation devices. We analyzed eleven outer ears, and the innervation was assessed by Masson's trichrome staining, immunohistochemistry, or immunofluorescence (neurofilaments, S100 protein, and myelin-basic protein). In both the cavum conchae and the auditory canal, nerve profiles were identified between the cartilage and the skin and out of the cartilage. The density of nerves and of myelinated nerve fibers was higher out of the cartilage and in the auditory canal with respect to the cavum conchae. Moreover, the nerves were more numerous in the superior and posterior-inferior than in the anterior-inferior segments of the auditory canal. The present study established a precise nerve map of the human cavum conchae and the cartilaginous segment of the auditory canal demonstrating regional differences in the pattern of innervation of the human outer ear. These results may provide additional neuroanatomical basis for the accurate design of auricular transcutaneous nerve stimulation devices.

  12. Sensitivity to an Illusion of Sound Location in Human Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Nathan C. Higgins

    2017-05-01

    Human listeners place greater weight on the beginning of a sound compared to the middle or end when determining sound location, creating an auditory illusion known as the Franssen effect. Here, we exploited that effect to test whether human auditory cortex (AC) represents the physical vs. perceived spatial features of a sound. We used functional magnetic resonance imaging (fMRI) to measure AC responses to sounds that varied in perceived location due to interaural level differences (ILD) applied to sound onsets or to the full sound duration. Analysis of hemodynamic responses in AC revealed sensitivity to ILD in both full-cue (veridical) and onset-only (illusory) lateralized stimuli. Classification analysis revealed regional differences in the sensitivity to onset-only ILDs, with better classification observed in posterior compared to primary AC. That is, restricting the ILD to sound onset—which alters the physical but not the perceptual nature of the spatial cue—did not eliminate cortical sensitivity to that cue. These results suggest that perceptual representations of auditory space emerge or are refined in higher-order AC regions, supporting the stable perception of auditory space in noisy or reverberant environments and forming the basis of illusions such as the Franssen effect.
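
    The stimulus manipulation described above can be sketched in a few lines: an interaural level difference applied either to the sound onset only (the illusory, Franssen-style cue) or to the full duration (the veridical cue). All parameter values here (frequency, ILD magnitude, window length, sampling rate) are illustrative, not taken from the study.

```python
import math

def ild_stimulus(dur_s=0.3, onset_s=0.02, ild_db=10.0, fs=8000,
                 freq=500.0, onset_only=True):
    """Two-channel tone carrying an interaural level difference (ILD).

    With onset_only=True the far ear is attenuated only during the
    initial onset window (the onset-restricted, illusory cue); with
    onset_only=False the ILD spans the full duration (the veridical cue).
    """
    n_total = int(dur_s * fs)
    n_onset = int(onset_s * fs)
    gain = 10 ** (-ild_db / 20)      # linear attenuation for the far ear
    left, right = [], []
    for i in range(n_total):
        s = math.sin(2 * math.pi * freq * i / fs)
        left.append(s)
        if onset_only and i >= n_onset:
            right.append(s)          # equal level after the onset window
        else:
            right.append(s * gain)   # attenuated: ILD cue present
    return left, right

left, right = ild_stimulus()         # onset-only (illusory) version
```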

  13. Mapping auditory core, lateral belt, and parabelt cortices in the human superior temporal gyrus

    DEFF Research Database (Denmark)

    Sweet, Robert A; Dorph-Petersen, Karl-Anton; Lewis, David A

    2005-01-01

    The goal of the present study was to determine whether the architectonic criteria used to identify the core, lateral belt, and parabelt auditory cortices in macaque monkeys (Macaca fascicularis) could be used to identify homologous regions in humans (Homo sapiens). Current evidence indicates that auditory cortex in humans, as in monkeys, is located on the superior temporal gyrus (STG), and is functionally and structurally altered in illnesses such as schizophrenia and Alzheimer's disease. In this study, we used serial sets of adjacent sections processed for Nissl substance, acetylcholinesterase... The location of the lateral belt and parabelt was assessed with respect to gross anatomical landmarks. Architectonic criteria for the core, lateral belt, and parabelt were readily adapted from monkey to human. Additionally, we found evidence for an architectonic subdivision within the parabelt, present in both species...

  14. Broadened population-level frequency tuning in human auditory cortex of portable music player users.

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    Nowadays, many people use portable players to enrich their daily life with enjoyable music. However, in noisy environments, the player volume is often set to extremely high levels in order to drown out the intense ambient noise and satisfy the appetite for music. Extensive and inappropriate usage of portable music players might cause subtle damage to the auditory system that is not behaviorally detectable at an early stage of hearing impairment. Here, by means of magnetoencephalography, we objectively examined detrimental effects of portable music player misuse on the population-level frequency tuning in the human auditory cortex. We compared two groups of young people: one group had listened to music with portable music players intensively for a long period of time, while the other group had not. Both groups performed equally and normally in standard audiological examinations (pure tone audiogram, speech test, and hearing-in-noise test). However, the objective magnetoencephalographic data demonstrated that the population-level frequency tuning in the auditory cortex of the portable music player users was significantly broadened compared to the non-users when attention was distracted from the auditory modality; this group difference vanished when attention was directed to the auditory modality. Our conclusion is that extensive and inadequate usage of portable music players could cause subtle damage that standard behavioral audiometric measures fail to detect at an early stage. However, this damage could lead to future irreversible hearing disorders, which would have a huge negative impact on the quality of life of those affected and on society as a whole.

  15. Ubiquitous crossmodal Stochastic Resonance in humans: auditory noise facilitates tactile, visual and proprioceptive sensations.

    Directory of Open Access Journals (Sweden)

    Eduardo Lugo

    BACKGROUND: Stochastic resonance is a nonlinear phenomenon whereby the addition of noise can improve the detection of weak stimuli. An optimal amount of added noise results in the maximum enhancement, whereas further increases in noise intensity only degrade detection or information content. The phenomenon does not occur in linear systems, where the addition of noise to either the system or the stimulus only degrades the signal quality. Stochastic resonance (SR) has been extensively studied in different physical systems. It has been extended to human sensory systems, where it can be classified as unimodal, central, behavioral and, recently, crossmodal. However, the extent of crossmodal SR in humans has not been explored; for instance, whether crossmodal SR persists among different sensory systems under the same auditory noise conditions. METHODOLOGY/PRINCIPAL FINDINGS: Using physiological and psychophysical techniques, we demonstrate that the same auditory noise can enhance the sensitivity of tactile, visual and proprioceptive system responses to weak signals. Specifically, we show that the effective auditory noise significantly increased tactile sensations of the finger, decreased luminance and contrast visual thresholds, and significantly changed EMG recordings of the leg muscles during posture maintenance. CONCLUSIONS/SIGNIFICANCE: We conclude that crossmodal SR is a ubiquitous phenomenon in humans that can be interpreted within an energy and frequency model of multisensory neurons' spontaneous activity. Initially, the energy and frequency content of the multisensory neurons' activity (supplied by the weak signals) is not enough to be detected, but when the auditory noise enters the brain, it generates a general activation among multisensory neurons of different regions, modifying their original activity. The result is an integrated activation that promotes sensitivity transitions, and the signals are then perceived. A physiologically...
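
    The inverted-U signature of stochastic resonance described in the background can be reproduced with a toy threshold detector: a subthreshold signal plus Gaussian noise, scored as hit rate minus false-alarm rate. This is a generic illustration of SR, not the study's physiological model; all numbers are arbitrary.

```python
import random

def detection_score(noise_sd, signal=0.8, threshold=1.0,
                    trials=20000, seed=7):
    """Hit rate minus false-alarm rate for a toy threshold detector.

    The weak signal (0.8) is subthreshold on its own; added Gaussian
    noise can push it over the threshold (1.0). False alarms are
    threshold crossings by noise alone.
    """
    rng = random.Random(seed)
    hits = sum(signal + rng.gauss(0.0, noise_sd) >= threshold
               for _ in range(trials)) / trials
    false_alarms = sum(rng.gauss(0.0, noise_sd) >= threshold
                       for _ in range(trials)) / trials
    return hits - false_alarms

# Detection follows an inverted U: poor with too little or too much
# noise, best at an intermediate noise level (stochastic resonance).
scores = {sd: detection_score(sd) for sd in (0.05, 0.5, 3.0)}
```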

  16. Auditory event-related response in visual cortex modulates subsequent visual responses in humans.

    Science.gov (United States)

    Naue, Nicole; Rach, Stefan; Strüber, Daniel; Huster, Rene J; Zaehle, Tino; Körner, Ursula; Herrmann, Christoph S

    2011-05-25

    Growing evidence from electrophysiological data in animal and human studies suggests that multisensory interaction is not exclusively a higher-order process, but also takes place in primary sensory cortices. Such early multisensory interaction is thought to be mediated by means of phase resetting. The presentation of a stimulus to one sensory modality resets the phase of ongoing oscillations in another modality such that processing in the latter modality is modulated. In humans, evidence for such a mechanism is still sparse. In the current study, the influence of an auditory stimulus on visual processing was investigated by measuring the electroencephalogram (EEG) and behavioral responses of humans to visual, auditory, and audiovisual stimulation with varying stimulus-onset asynchrony (SOA). We observed three distinct oscillatory EEG responses in our data. An initial gamma-band response around 50 Hz was followed by a beta-band response around 25 Hz, and a theta response around 6 Hz. The latter was enhanced in response to cross-modal stimuli as compared to either unimodal stimuli. Interestingly, the beta response to unimodal auditory stimuli was dominant in electrodes over visual areas. The SOA between auditory and visual stimuli--albeit not consciously perceived--had a modulatory impact on the multisensory evoked beta-band responses; i.e., the amplitude depended on SOA in a sinusoidal fashion, suggesting a phase reset. These findings further support the notion that parameters of brain oscillations such as amplitude and phase are essential predictors of subsequent brain responses and might be one of the mechanisms underlying multisensory integration.

  17. Plasticity of the human auditory cortex related to musical training.

    Science.gov (United States)

    Pantev, Christo; Herholz, Sibylle C

    2011-11-01

    During the last decades, music neuroscience has become a rapidly growing field within neuroscience. Music is particularly well suited for studying neuronal plasticity in the human brain because musical training is more complex and multimodal than most other daily life activities, and because prospective and professional musicians usually pursue the training with high and long-lasting commitment. Therefore, music has increasingly been used as a tool for the investigation of human cognition and its underlying brain mechanisms. Music relates to many brain functions, such as perception, action, cognition, emotion, learning and memory, and is therefore an ideal tool to investigate how the human brain works and how different brain functions interact. Novel findings have been obtained in the field of cortical plasticity induced by musical training. The positive effects that music in its various forms has on the healthy human brain are important not only in the framework of basic neuroscience; they will also strongly affect practices in neuro-rehabilitation. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Altered temporal dynamics of neural adaptation in the aging human auditory cortex.

    Science.gov (United States)

    Herrmann, Björn; Henry, Molly J; Johnsrude, Ingrid S; Obleser, Jonas

    2016-09-01

    Neural response adaptation plays an important role in perception and cognition. Here, we used electroencephalography to investigate how aging affects the temporal dynamics of neural adaptation in human auditory cortex. Younger (18-31 years) and older (51-70 years) normal hearing adults listened to tone sequences with varying onset-to-onset intervals. Our results show long-lasting neural adaptation such that the response to a particular tone is a nonlinear function of the extended temporal history of sound events. Most important, aging is associated with multiple changes in auditory cortex; older adults exhibit larger and less variable response magnitudes, a larger dynamic response range, and a reduced sensitivity to temporal context. Computational modeling suggests that reduced adaptation recovery times underlie these changes in the aging auditory cortex and that the extended temporal stimulation has less influence on the neural response to the current sound in older compared with younger individuals. Our human electroencephalography results critically narrow the gap to animal electrophysiology work suggesting a compensatory release from cortical inhibition accompanying hearing loss and aging.
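
    The modeling idea above, that reduced adaptation recovery times produce larger responses and weaker sensitivity to temporal history, can be sketched with a one-parameter exponential adaptation model. This is our illustrative reconstruction, not the authors' actual model; the parameter values are arbitrary.

```python
import math

def adapted_responses(onset_intervals_s, tau_s=1.5, increment=0.5):
    """Response magnitude to each tone under exponential adaptation.

    Adaptation builds with each tone and recovers with time constant
    tau_s between tones. A shorter tau_s (faster recovery) mimics the
    pattern reported for older listeners: larger responses and less
    influence of the preceding stimulation history.
    """
    a = 0.0                  # adaptation state (0 = fully recovered)
    responses = []
    for dt in onset_intervals_s:
        a *= math.exp(-dt / tau_s)               # recovery since last tone
        responses.append(1.0 - a)                # response reduced by adaptation
        a = min(1.0, a + increment * (1.0 - a))  # adaptation from this tone
    return responses

slow = adapted_responses([0.5] * 6, tau_s=3.0)  # younger-like: strong history effect
fast = adapted_responses([0.5] * 6, tau_s=0.5)  # older-like: rapid recovery
```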

  19. Tonotopic representation of missing fundamental complex sounds in the human auditory cortex.

    Science.gov (United States)

    Fujioka, Takako; Ross, Bernhard; Okamoto, Hidehiko; Takeshima, Yasuyuki; Kakigi, Ryusuke; Pantev, Christo

    2003-07-01

    The N1m component of the auditory evoked magnetic field in response to tones and complex sounds was examined in order to clarify whether the tonotopic representation in the human secondary auditory cortex is based on perceived pitch or the physical frequency spectrum of the sound. The investigated stimulus parameters were the fundamental frequencies (F0 = 250, 500 and 1000 Hz), the spectral composition of the higher harmonics of the missing fundamental sounds (2nd to 5th, 6th to 9th and 10th to 13th harmonic) and the frequencies of pure tones corresponding to F0 and to the lowest component of each complex sound. Tonotopic gradients showed that high frequencies were more medially located than low frequencies for the pure tones and for the centre frequency of the complex tones. Furthermore, in the superior-inferior direction, the tonotopic gradients were different between pure tones and complex sounds. The results were interpreted as reflecting different processing in the auditory cortex for pure tones and complex sounds. This hypothesis was supported by the result of evoked responses to complex sounds having longer latencies. A more pronounced tonotopic representation in the right hemisphere gave evidence for right hemispheric dominance in spectral processing.
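
    The "missing fundamental" stimuli above can be illustrated with a short sketch: a complex built from harmonics 2 to 5 of F0 contains no energy at F0, yet its periodicity, recoverable by autocorrelation, still corresponds to F0. The sampling rate and duration are illustrative choices, not the study's.

```python
import math

def missing_fundamental(f0=250.0, harmonics=(2, 3, 4, 5),
                        fs=8000, dur_s=0.064):
    """Complex tone built from harmonics of f0, with no energy at f0
    itself (e.g. the '2nd to 5th harmonic' condition above)."""
    n = int(fs * dur_s)
    return [sum(math.sin(2 * math.pi * h * f0 * i / fs) for h in harmonics)
            for i in range(n)]

def best_period_samples(x, fs, fmin=100.0, fmax=1000.0):
    """Lag (in samples) maximizing the autocorrelation: the pitch
    period, even though the spectrum contains no f0 component."""
    lags = range(int(fs / fmax), int(fs / fmin) + 1)
    def autocorr(lag):
        return sum(x[i] * x[i + lag] for i in range(len(x) - lag))
    return max(lags, key=autocorr)

period = best_period_samples(missing_fundamental(), 8000)
pitch_hz = 8000 / period   # the missing fundamental, 250 Hz
```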

  20. Lipreading and covert speech production similarly modulate human auditory-cortex responses to pure tones.

    Science.gov (United States)

    Kauramäki, Jaakko; Jääskeläinen, Iiro P; Hari, Riitta; Möttönen, Riikka; Rauschecker, Josef P; Sams, Mikko

    2010-01-27

    Watching the lips of a speaker enhances speech perception. At the same time, the 100 ms response to speech sounds is suppressed in the observer's auditory cortex. Here, we used whole-scalp 306-channel magnetoencephalography (MEG) to study whether lipreading modulates human auditory processing already at the level of the most elementary sound features, i.e., pure tones. We further examined the temporal dynamics of the suppression to determine whether the effect is driven by top-down influences. Nineteen subjects were presented with 50 ms tones spanning six octaves (125-8000 Hz) (1) during "lipreading," i.e., when they watched video clips of silent articulations of Finnish vowels /a/, /i/, /o/, and /y/, and reacted to vowels presented twice in a row; (2) during a visual control task; (3) during a still-face passive control condition; and (4) in a separate experiment with a subset of nine subjects, during covert production of the same vowels. Auditory-cortex 100 ms responses (N100m) were equally suppressed in the lipreading and covert-speech-production tasks compared with the visual control and baseline tasks; the effects involved all frequencies and were most prominent in the left hemisphere. Responses to tones presented at different times with respect to the onset of the visual articulation showed significantly increased N100m suppression immediately after the articulatory gesture. These findings suggest that the lipreading-related suppression in the auditory cortex is caused by top-down influences, possibly by an efference copy from the speech-production system, generated during both own speech and lipreading.

  1. Effects of pre- and postnatal exposure to the UV-filter octyl methoxycinnamate (OMC) on the reproductive, auditory and neurological development of rat offspring.

    Science.gov (United States)

    Axelstad, Marta; Boberg, Julie; Hougaard, Karin Sørig; Christiansen, Sofie; Jacobsen, Pernille Rosenskjold; Mandrup, Karen Riiber; Nellemann, Christine; Lund, Søren Peter; Hass, Ulla

    2011-02-01

    Octyl Methoxycinnamate (OMC) is a frequently used UV-filter in sunscreens and other cosmetics. The aim of the present study was to address the potential endocrine disrupting properties of OMC, and to investigate how OMC-induced changes in thyroid hormone levels would be related to the neurological development of treated offspring. Groups of 14-18 pregnant Wistar rats were dosed with 0, 500, 750 or 1000 mg OMC/kg bw/day during gestation and lactation. Serum thyroxine (T4), testosterone, estradiol and progesterone levels were measured in dams and offspring. Anogenital distance, nipple retention, postnatal growth and timing of sexual maturation were assessed. On postnatal day 16, gene expression in prostate and testes, and weight and histopathology of the thyroid gland, liver, adrenals, prostate, testes, epididymis and ovaries were measured. After weaning, offspring were evaluated in a battery of behavioral and neurophysiological tests, including tests of activity, startle response, cognitive and auditory function. In adult animals, reproductive organ weights and semen quality were investigated. Thyroxine (T4) levels showed a very marked decrease during the dosing period in all dosed dams, but were less severely affected in the offspring. On postnatal day 16, high-dose male offspring showed reduced relative prostate and testis weights, and a dose-dependent decrease in testosterone levels. In OMC-exposed female offspring, motor activity levels were decreased, while low- and high-dose males showed improved spatial learning abilities. The observed behavioral changes were probably not mediated solely by early T4 deficiencies, as the observed effects differed from those seen in other studies of developmental hypothyroxinemia. At eight months of age, sperm counts were reduced in all three OMC-dosed groups, and prostate weights were reduced in the highest dose group. Taken together, these results indicate that perinatal OMC-exposure can affect both the reproductive and...

  2. Hierarchical organization of speech perception in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Colin Humphries

    2014-12-01

    Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.

  3. Social and emotional values of sounds influence human (Homo sapiens) and non-human primate (Cercopithecus campbelli) auditory laterality.

    Science.gov (United States)

    Basile, Muriel; Lemasson, Alban; Blois-Heulin, Catherine

    2009-07-17

    The last decades have provided evidence of auditory laterality in vertebrates, offering new important insights for the understanding of the origin of human language. Factors such as the social (e.g. specificity, familiarity) and emotional value of sounds have been shown to influence hemispheric specialization. However, little is known about the crossed effect of these two factors in animals. In addition, human-animal comparative studies using the same methodology are rare. In our study, we adapted the head turn paradigm, a widely used non-invasive method, on 8-9-year-old schoolgirls and on adult female Campbell's monkeys, by focusing on head and/or eye orientations in response to sound playbacks. We broadcast communicative signals (monkeys: calls, humans: speech) emitted by familiar individuals presenting distinct degrees of social value (female monkeys: conspecific group members vs heterospecific neighbours, human girls: from the same vs different classroom) and emotional value (monkeys: contact vs threat calls; humans: friendly vs aggressive intonation). We evidenced a crossed-categorical effect of social and emotional values in both species, since only "negative" voices from same class/group members elicited a significant auditory laterality (Wilcoxon tests: monkeys, T = 0, p = 0.03; girls: T = 4.5, p = 0.03). Moreover, we found differences between species, as a left and right hemisphere preference was found in humans and monkeys, respectively. Furthermore, while monkeys almost exclusively responded by turning their head, girls sometimes also just moved their eyes. This study supports theories defending differential roles played by the two hemispheres in primates' auditory laterality and indicates that more systematic species comparisons are needed before advancing evolutionary scenarios. Moreover, the choice of sound stimuli and behavioural measures in such studies should be the focus of careful attention.

  5. Neural coding and perception of pitch in the normal and impaired human auditory system

    DEFF Research Database (Denmark)

    Santurette, Sébastien

    2011-01-01

    Pitch is an important attribute of hearing that allows us to perceive the musical quality of sounds. Besides music perception, pitch contributes to speech communication, auditory grouping, and perceptual segregation of sound sources. In this work, several aspects of pitch perception in humans were investigated using psychophysical methods. First, hearing loss was found to affect the perception of binaural pitch, a pitch sensation created by the binaural interaction of noise stimuli. Specifically, listeners without binaural pitch sensation showed signs of retrocochlear disorders. Despite adverse effects...

  6. Auditory Contagious Yawning in Humans: An Investigation into Affiliation and Status Effects

    Directory of Open Access Journals (Sweden)

    Jorg J.M. Massen

    2015-11-01

    While comparative research on contagious yawning has grown substantially in the past few years, both the interpersonal factors influencing this response and the sensory modalities involved in its activation in humans remain relatively unknown. Extending upon previous studies showing various in-group and status effects in non-human great apes, we performed an initial study to investigate how the political affiliation (Democrat versus Republican) and status (high versus low) of target stimuli influences auditory contagious yawning, as well as the urge to yawn, in humans. Self-report responses and a subset of video recordings were analyzed from 118 undergraduate students in the US following exposure to either breathing (control) or yawning (experimental) vocalizations paired with images of former US Presidents (high status) and their respective Cabinet Secretaries of Commerce (low status). The overall results validate the use of auditory stimuli to prompt yawn contagion, with greater response in the experimental than the control condition. There was also a negative effect of political status on self-reported yawning and the self-reported urge to yawn, irrespective of the condition. In contrast, we found no evidence for a political affiliation bias in this response. These preliminary findings are discussed in terms of the existing comparative evidence, though we highlight limitations in the current investigation and provide suggestions for future research in this area.

  7. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA, 1982, 416-425]. When a “classical” gammatone filterbank was applied within this spectrum-based model, the model largely underestimated human performance at high signal frequencies. However, this limitation could be resolved by employing an auditory filterbank with narrower filters. This novel...
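
    The spectral modulation described in this abstract is the classic comb-filter effect: a direct sound plus one reflection of delay d and gain a has the power response |H(f)|² = 1 + a² + 2a·cos(2πfd), i.e. ripple with period 1/d Hz. A minimal sketch (delay and gain values are illustrative, not from the study):

    ```python
    import numpy as np

    # Direct sound plus a single early wall reflection, modeled as an
    # impulse response with a delayed, attenuated copy of the impulse.
    fs = 48000          # sample rate (Hz); illustrative
    delay_ms = 5.0      # reflection delay -> spectral ripple every 200 Hz
    a = 0.8             # reflection gain

    d = int(fs * delay_ms / 1000)      # delay in samples (240)
    h = np.zeros(4096)
    h[0] = 1.0                         # direct sound
    h[d] = a                           # early reflection
    H = np.abs(np.fft.rfft(h)) ** 2    # power spectrum shows comb ripple
    freqs = np.fft.rfftfreq(len(h), 1 / fs)

    # Spectral maxima repeat every 1/delay Hz; at DC the power is (1+a)^2.
    peak_spacing_hz = fs / d
    print(round(peak_spacing_hz))
    ```

    Varying `delay_ms` changes the ripple density, which is why coloration detection thresholds in the study were measured as a function of reflection delay.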

  8. Connectivity in the human brain dissociates entropy and complexity of auditory inputs.

    Science.gov (United States)

    Nastase, Samuel A; Iacovella, Vittorio; Davis, Ben; Hasson, Uri

    2015-03-01

    Complex systems are described according to two central dimensions: (a) the randomness of their output, quantified via entropy; and (b) their complexity, which reflects the organization of a system's generators. Whereas some approaches hold that complexity can be reduced to uncertainty or entropy, an axiom of complexity science is that signals with very high or very low entropy are generated by relatively non-complex systems, while complex systems typically generate outputs with entropy peaking between these two extremes. In understanding their environment, individuals would benefit from coding for both input entropy and complexity; entropy indexes uncertainty and can inform probabilistic coding strategies, whereas complexity reflects a concise and abstract representation of the underlying environmental configuration, which can serve independent purposes, e.g., as a template for generalization and rapid comparisons between environments. Using functional neuroimaging, we demonstrate that, in response to passively processed auditory inputs, functional integration patterns in the human brain track both the entropy and complexity of the auditory signal. Connectivity between several brain regions scaled monotonically with input entropy, suggesting sensitivity to uncertainty, whereas connectivity between other regions tracked entropy in a convex manner consistent with sensitivity to input complexity. These findings suggest that the human brain simultaneously tracks the uncertainty of sensory data and effectively models their environmental generators.
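
    The entropy axis in this abstract can be made concrete with Shannon entropy estimated from empirical symbol frequencies; a fully predictable sequence scores 0 bits, a uniform binary alternation 1 bit per symbol. Complexity, as the abstract stresses, is a separate quantity not reducible to this number. The sequences below are invented for illustration:

    ```python
    import math
    from collections import Counter

    def shannon_entropy(seq):
        # H = -sum p(x) log2 p(x), with p(x) estimated by relative frequency
        n = len(seq)
        return -sum(c / n * math.log2(c / n) for c in Counter(seq).values())

    constant = "aaaaaaaaaaaaaaaa"      # minimal entropy: fully predictable
    alternating = "abababababababab"   # 1 bit/symbol over the alphabet {a, b}
    print(shannon_entropy(constant), shannon_entropy(alternating))
    ```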

  9. Rate and adaptation effects on the auditory evoked brainstem response in human newborns and adults.

    Science.gov (United States)

    Lasky, R E

    1997-09-01

    Auditory evoked brainstem response (ABR) latencies increased and amplitudes decreased with increasing stimulus repetition rate for human newborns and adults. The wave V latency increases were larger for newborns than adults. The wave V amplitude decreases were smaller for newborns than adults. These differences could not be explained by developmental differences in frequency responsivity. The transition from the unadapted to the fully adapted response was less rapid in newborns than adults at short (≤10 ms) inter-stimulus intervals (ISIs). At longer ISIs (≥20 ms) there were no developmental differences in the transition to the fully adapted response. The newborn transition occurred in a two-stage process. The rapid initial stage observed in adults and newborns was complete by about 40 ms. A second slower stage was observed only in newborns, although it has been observed in adults in other studies (Weatherby and Hecox, 1982; Lightfoot, 1991; Lasky et al., 1996). These effects were replicated at different stimulus intensities. After the termination of stimulation, the return to the wave V unadapted response took nearly 500 ms in newborns. Neither the newborn nor the adult data can be explained by forward masking of one click on the next click. These results indicate human developmental differences in adaptation to repetitive auditory stimulation at the level of the brainstem.

  10. Dynamic Range Adaptation to Spectral Stimulus Statistics in Human Auditory Cortex

    Science.gov (United States)

    Schlichting, Nadine; Obleser, Jonas

    2014-01-01

    Classically, neural adaptation refers to a reduction in response magnitude by sustained stimulation. In human electroencephalography (EEG), neural adaptation has been measured, for example, as frequency-specific response decrease by previous stimulation. Only recently and mainly based on animal studies, it has been suggested that statistical properties in the stimulation lead to adjustments of neural sensitivity and affect neural response adaptation. However, it is thus far unresolved which statistical parameters in the acoustic stimulation spectrum affect frequency-specific neural adaptation, and on which time scales the effects take place. The present human EEG study investigated the potential influence of the overall spectral range as well as the spectral spacing of the acoustic stimulation spectrum on frequency-specific neural adaptation. Tones randomly varying in frequency were presented passively and computational modeling of frequency-specific neural adaptation was used. Frequency-specific adaptation was observed for all presentation conditions. Critically, however, the spread of adaptation (i.e., degree of coadaptation) in tonotopically organized regions of auditory cortex changed with the spectral range of the acoustic stimulation. In contrast, spectral spacing did not affect the spread of frequency-specific adaptation. Therefore, changes in neural sensitivity in auditory cortex are directly coupled to the overall spectral range of the acoustic stimulation, which suggests that neural adjustments to spectral stimulus statistics occur over a time scale of multiple seconds. PMID:24381293

  11. Short GSM mobile phone exposure does not alter human auditory brainstem response

    Directory of Open Access Journals (Sweden)

    Thuróczy György

    2007-11-01

    Full Text Available Abstract Background There are about 1.6 billion GSM cellular phones in use throughout the world today. Numerous papers have reported various biological effects in humans exposed to electromagnetic fields emitted by mobile phones. The aim of the present study was to advance our understanding of potential adverse effects of GSM mobile phones on the human hearing system. Methods Auditory Brainstem Response (ABR) was recorded with three non-polarizing Ag-AgCl scalp electrodes in thirty young and healthy volunteers (age 18–26 years) with normal hearing. ABR data were collected before, and immediately after, a 10 minute exposure to 900 MHz pulsed electromagnetic field (EMF) emitted by a commercial Nokia 6310 mobile phone. Fifteen subjects were exposed to genuine EMF and fifteen to sham EMF in a double blind and counterbalanced order. Possible effects of irradiation were analyzed by comparing the latency of ABR waves I, III and V before and after genuine/sham EMF exposure. Results Paired sample t-test was conducted for statistical analysis. Results revealed no significant differences in the latency of ABR waves I, III and V before and after 10 minutes of genuine/sham EMF exposure. Conclusion The present results suggest that, in our experimental conditions, a single 10 minute exposure to 900 MHz EMF emitted by a commercial mobile phone does not produce measurable immediate effects in the latency of auditory brainstem waves I, III and V.
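
    The study's statistical test, a paired-sample t-test on within-subject latencies, can be sketched with the standard library alone. The wave V latency values (in ms) below are invented for illustration; with df = 7, |t| must exceed roughly 2.365 for significance at α = .05:

    ```python
    import math
    import statistics

    # Hypothetical wave V latencies (ms) for the same subjects before and
    # after exposure; the paired test operates on within-subject differences.
    before = [5.61, 5.58, 5.70, 5.64, 5.59, 5.66, 5.62, 5.68]
    after  = [5.63, 5.57, 5.71, 5.62, 5.60, 5.65, 5.64, 5.67]

    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    # t = mean(d) / (sd(d) / sqrt(n))
    t_stat = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    print(round(t_stat, 3))
    ```

    A small |t| here, as in the paper, means the before/after latency difference is indistinguishable from zero.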

  12. Particle Filter with Binary Gaussian Weighting and Support Vector Machine for Human Pose Interpretation

    OpenAIRE

    Indah Agustien; Muhammad Rahmat Widyanto; Sukmawati Endah; Tarzan Basaruddin

    2010-01-01

    Human pose interpretation using Particle filter with Binary Gaussian Weighting and Support Vector Machine is proposed. In the proposed system, a Particle filter is used to track the human object; this human object is then skeletonized using a thinning algorithm and classified using a Support Vector Machine. The classification is to identify the human pose, whether a normal or abnormal behavior. Here the Particle filter is modified through weight calculation using a Gaussian distribution to reduce t...
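
    The predict/weight/resample cycle underlying the tracker can be sketched in one dimension. The paper's "binary Gaussian weighting" variant is not publicly specified, so this uses plain Gaussian weighting of particles around a scalar observation; all noise parameters and the motion model are invented for the sketch:

    ```python
    import math
    import random

    random.seed(0)
    N, sigma = 500, 1.0          # particle count and weighting spread
    true_pos = 10.0
    particles = [random.uniform(0, 20) for _ in range(N)]

    for _ in range(30):                        # tracking iterations
        true_pos += 0.5                        # target moves
        z = true_pos + random.gauss(0, 0.2)    # noisy observation
        # predict: propagate particles with the motion model plus noise
        particles = [p + 0.5 + random.gauss(0, 0.3) for p in particles]
        # update: Gaussian weight by distance to the observation
        w = [math.exp(-(z - p) ** 2 / (2 * sigma ** 2)) for p in particles]
        total = sum(w)
        w = [x / total for x in w]
        # resample proportionally to weight
        particles = random.choices(particles, weights=w, k=N)

    estimate = sum(particles) / N
    print(round(abs(estimate - true_pos), 2))  # tracking error stays small
    ```

    The weight computation is the dominant cost per frame, which is what the Gaussian-weighting modification discussed in the abstract targets.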

  13. Tracing the emergence of categorical speech perception in the human auditory system.

    Science.gov (United States)

    Bidelman, Gavin M; Moreno, Sylvain; Alain, Claude

    2013-10-01

    Speech perception requires the effortless mapping from smooth, seemingly continuous changes in sound features into discrete perceptual units, a conversion exemplified in the phenomenon of categorical perception. Explaining how/when the human brain performs this acoustic-phonetic transformation remains an elusive problem in current models and theories of speech perception. In previous attempts to decipher the neural basis of speech perception, it is often unclear whether the alleged brain correlates reflect an underlying percept or merely changes in neural activity that covary with parameters of the stimulus. Here, we recorded neuroelectric activity generated at both cortical and subcortical levels of the auditory pathway elicited by a speech vowel continuum whose percept varied categorically from /u/ to /a/. This integrative approach allows us to characterize how various auditory structures code, transform, and ultimately render the perception of speech material as well as dissociate brain responses reflecting changes in stimulus acoustics from those that index true internalized percepts. We find that activity from the brainstem mirrors properties of the speech waveform with remarkable fidelity, reflecting progressive changes in speech acoustics but not the discrete phonetic classes reported behaviorally. In comparison, patterns of late cortical evoked activity contain information reflecting distinct perceptual categories and predict the abstract phonetic speech boundaries heard by listeners. Our findings demonstrate a critical transformation in neural speech representations between brainstem and early auditory cortex analogous to an acoustic-phonetic mapping necessary to generate categorical speech percepts. Analytic modeling demonstrates that a simple nonlinearity accounts for the transformation between early (subcortical) brain activity and subsequent cortical/behavioral responses to speech (>150-200 ms) thereby describing a plausible mechanism by which the

  14. Sensitivity of the human auditory cortex to acoustic degradation of speech and non-speech sounds

    Directory of Open Access Journals (Sweden)

    Tiitinen Hannu

    2010-02-01

    Full Text Available Abstract Background Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured in magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can be observed also in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoidals), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects. Results We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. Conclusions We propose that the increased activity of AEFs reflects cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease of amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.

  15. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    Directory of Open Access Journals (Sweden)

    Buddhamas eKriengwatana

    2015-08-01

    Full Text Available Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique to humans, is still not fully understood. In this study we compare the ability of humans and zebra finches to categorize vowels despite speaker variation in speech, in order to test the hypothesis that accommodating speaker and gender differences in isolated vowels can be achieved without prior experience with speaker-related variability. Using a behavioural Go/No-go task and identical stimuli, we compared Australian English adults’ (naïve to Dutch) and zebra finches’ (naïve to human speech) ability to categorize /ɪ/ and /ɛ/ vowels of a novel Dutch speaker after learning to discriminate those vowels from only one other speaker. Experiments 1 and 2 presented vowels of two speakers interspersed or blocked, respectively. Results demonstrate that categorization of vowels is possible without prior exposure to speaker-related variability in speech for zebra finches, and in non-native vowel categories for humans. Therefore, this study is the first to provide evidence for what might be a species-shared auditory bias that may supersede speaker-related information during vowel categorization. It additionally provides behavioural evidence contradicting a prior hypothesis that accommodation of speaker differences is achieved via the use of formant ratios. Therefore, investigations of alternative accounts of vowel normalization that incorporate the possibility of an auditory bias for disregarding inter-speaker variability are warranted.

  16. Effect of Bluetooth headset and mobile phone electromagnetic fields on the human auditory nerve.

    Science.gov (United States)

    Mandalà, Marco; Colletti, Vittorio; Sacchetto, Luca; Manganotti, Paolo; Ramat, Stefano; Marcocci, Alessandro; Colletti, Liliana

    2014-01-01

    The possibility that long-term mobile phone use increases the incidence of astrocytoma, glioma and acoustic neuroma has been investigated in several studies. Recently, our group showed that direct exposure (in a surgical setting) to cell phone electromagnetic fields (EMFs) induces deterioration of auditory evoked cochlear nerve compound action potential (CNAP) in humans. To verify whether the use of Bluetooth devices reduces these effects, we conducted the present study with the same experimental protocol. Randomized trial. Twelve patients underwent retrosigmoid vestibular neurectomy to treat definite unilateral Ménière's disease while being monitored with acoustically evoked CNAPs to assess direct mobile phone exposure or alternatively the EMF effects of Bluetooth headsets. We found no short-term effects of Bluetooth EMFs on the auditory nervous structures, whereas direct mobile phone EMF exposure confirmed a significant decrease in CNAPs amplitude and an increase in latency in all subjects. The outcomes of the present study show that, contrary to the finding that the latency and amplitude of CNAPs are very sensitive to EMFs produced by the tested mobile phone, the EMFs produced by a common Bluetooth device do not induce any significant change in cochlear nerve activity. The conditions of exposure, therefore, differ from those of everyday life, in which various biological tissues may reduce the EMF affecting the cochlear nerve. Nevertheless, these novel findings may have important safety implications. © 2013 The American Laryngological, Rhinological and Otological Society, Inc.

  17. Segregation of vowels and consonants in human auditory cortex: Evidence for distributed hierarchical organization

    Directory of Open Access Journals (Sweden)

    Jonas eObleser

    2010-12-01

    Full Text Available The speech signal consists of a continuous stream of consonants and vowels, which must be de- and encoded in human auditory cortex to ensure the robust recognition and categorization of speech sounds. We used small-voxel functional magnetic resonance imaging (fMRI) to study information encoded in local brain activation patterns elicited by consonant-vowel syllables, and by a control set of noise bursts. First, activation of anterior–lateral superior temporal cortex was seen when controlling for unspecific acoustic processing (syllables versus band-passed noises), in a classic subtraction-based design. Second, a classifier algorithm, which was trained and tested iteratively on data from all subjects to discriminate local brain activation patterns, yielded separations of cortical patches discriminative of vowel category versus patches discriminative of stop-consonant category across the entire superior temporal cortex, yet with regional differences in average classification accuracy. Overlap (voxels correctly classifying both speech sound categories) was surprisingly sparse. Third, lending further plausibility to the results, classification of speech–noise differences was generally superior to speech–speech classifications, with the notable exception of a left anterior region, where speech–speech classification accuracies were significantly better. These data demonstrate that acoustic-phonetic features are encoded in complex yet sparsely overlapping local patterns of neural activity distributed hierarchically across different regions of the auditory cortex. The redundancy apparent in these multiple patterns may partly explain the robustness of phonemic representations.

  18. Auditory-model-based Feature Extraction Method for Mechanical Faults Diagnosis

    Institute of Scientific and Technical Information of China (English)

    LI Yungong; ZHANG Jinping; DAI Li; ZHANG Zhanyi; LIU Jie

    2010-01-01

    It is well known that the human auditory system possesses remarkable capabilities to analyze and identify signals. Therefore, it would be significant to build an auditory model based on the mechanism of human auditory systems, which may improve the effects of mechanical signal analysis and enrich the methods of mechanical faults features extraction. However the existing methods are all based on explicit senses of mathematics or physics, and have some shortages on distinguishing different faults, stability, and suppressing the disturbance noise, etc. For the purpose of improving the performances of the work of feature extraction, an auditory model, early auditory(EA) model, is introduced for the first time. This auditory model transforms time domain signal into auditory spectrum via bandpass filtering, nonlinear compressing, and lateral inhibiting by simulating the principle of the human auditory system. The EA model is developed with the Gammatone filterbank as the basilar membrane. According to the characteristics of vibration signals, a method is proposed for determining the parameter of inner hair cells model of EA model. The performance of EA model is evaluated through experiments on four rotor faults, including misalignment, rotor-to-stator rubbing, oil film whirl, and pedestal looseness. The results show that the auditory spectrum, output of EA model, can effectively distinguish different faults with satisfactory stability and has the ability to suppress the disturbance noise. Then, it is feasible to apply auditory model, as a new method, to the feature extraction for mechanical faults diagnosis with effect.
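
    The EA pipeline described in this abstract (bandpass analysis, nonlinear compression, lateral inhibition) can be sketched conceptually. A faithful EA model uses a gammatone filterbank as the basilar-membrane stage; here FFT band energies stand in for it, and every constant is illustrative rather than taken from the paper:

    ```python
    import numpy as np

    fs = 8000
    t = np.arange(fs) / fs
    # test signal: a 1 kHz tone (e.g. a fault component) in broadband noise
    x = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.random.default_rng(0).normal(size=fs)

    # stage 1: "bandpass filtering" approximated by 32 FFT band energies
    spectrum = np.abs(np.fft.rfft(x))
    edges = np.linspace(0, len(spectrum), 33).astype(int)
    band_energy = np.array([spectrum[a:b].sum() for a, b in zip(edges[:-1], edges[1:])])

    # stage 2: nonlinear compression (cube root, hair-cell-like)
    compressed = band_energy ** (1 / 3)

    # stage 3: lateral inhibition, each channel suppressed by its neighbors,
    # half-wave rectified; this sharpens spectral peaks against noise
    inhibited = np.maximum(
        compressed - 0.5 * (np.roll(compressed, 1) + np.roll(compressed, -1)), 0)

    print(int(np.argmax(inhibited)))  # channel containing the 1 kHz tone
    ```

    After inhibition, only the tone's channel survives, which mirrors the abstract's claim that the auditory spectrum suppresses disturbance noise while keeping fault-related components.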

  19. Particle Filter with Gaussian Weighting for Human Tracking

    National Research Council Canada - National Science Library

    T. Basaruddin; M. Rahmat Widyanto; Indah Agustien Siradjuddin

    2012-01-01

    .... There are two main stages in this method, i.e. prediction and update. The difference between the conventional particle filter and particle filter with Gaussian weighting is in the update Stage...

  20. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  1. Motor-Auditory-Visual Integration: The Role of the Human Mirror Neuron System in Communication and Communication Disorders

    Science.gov (United States)

    Le Bel, Ronald M.; Pineda, Jaime A.; Sharma, Anu

    2009-01-01

    The mirror neuron system (MNS) is a trimodal system composed of neuronal populations that respond to motor, visual, and auditory stimulation, such as when an action is performed, observed, heard or read about. In humans, the MNS has been identified using neuroimaging techniques (such as fMRI and mu suppression in the EEG). It reflects an…

  3. Filtering Data Based on Human-Inspired Forgetting.

    Science.gov (United States)

    Freedman, S T; Adams, J A

    2011-12-01

    Robots are frequently presented with vast arrays of diverse data. Unfortunately, perfect memory and recall provides a mixed blessing. While flawless recollection of episodic data allows increased reasoning, photographic memory can hinder a robot's ability to operate in real-time dynamic environments. Human-inspired forgetting methods may enable robotic systems to rid themselves of out-dated, irrelevant, and erroneous data. This paper presents the use of human-inspired forgetting to act as a filter, removing unnecessary, erroneous, and out-of-date information. The novel ActSimple forgetting algorithm has been developed specifically to provide effective forgetting capabilities to robotic systems. This paper presents the ActSimple algorithm and how it was optimized and tested in a WiFi signal strength estimation task. The results generated by real-world testing suggest that human-inspired forgetting is an effective means of improving the ability of mobile robots to move and operate within complex and dynamic environments.
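
    The ActSimple algorithm itself is not reproduced here; as a hedged sketch of the "human-inspired forgetting as a filter" idea, the ACT-R-style base-level activation it draws on can be used: an item's activation decays as a power law of time since each access, and items below a threshold are forgotten. All names and values below are hypothetical:

    ```python
    import math

    DECAY, THRESHOLD = 0.5, -1.0   # illustrative ACT-R-like parameters

    def activation(access_times, now):
        # B = ln( sum over past accesses of (now - t)^(-d) ):
        # frequent and recent use keeps activation high
        return math.log(sum((now - t) ** -DECAY for t in access_times))

    # hypothetical WiFi readings: id -> times (s) the reading was accessed
    history = {
        "reading_a": [1.0, 50.0, 99.0],   # used often and recently
        "reading_b": [2.0],               # used once, long ago
    }
    now = 100.0
    kept = {k for k, times in history.items() if activation(times, now) > THRESHOLD}
    print(sorted(kept))
    ```

    Stale, rarely used data thus drop out of memory automatically, which is the filtering behavior the abstract describes for dynamic environments.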

  4. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Directory of Open Access Journals (Sweden)

    Christo ePantev

    2012-06-01

    Full Text Available Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for three hours inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus - tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies > 8 kHz, and that training should be employed over a long term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy are planned. A goal is to transfer this novel, completely non-invasive, and low-cost treatment approach for tonal tinnitus into routine clinical practice.

  5. Auditory-like filterbank: An optimal speech processor for efficient human speech communication

    Indian Academy of Sciences (India)

    Prasanta Kumar Ghosh; Louis M Goldstein; Shrikanth S Narayanan

    2011-10-01

    The transmitter and the receiver in a communication system have to be designed optimally with respect to one another to ensure reliable and efficient communication. Following this principle, we derive an optimal filterbank for processing speech signal in the listener’s auditory system (receiver), so that maximum information about the talker’s (transmitter) message can be obtained from the filterbank output, leading to efficient communication between the talker and the listener. We consider speech data of 45 talkers from three different languages for designing optimal filterbanks separately for each of them. We find that the computationally derived optimal filterbanks are similar to the empirically established auditory (cochlear) filterbank in the human ear. We also find that the output of the empirically established auditory filterbank provides more than 90% of the maximum information about the talker’s message provided by the output of the optimal filterbank. Our experimental findings suggest that the auditory filterbank in human ear functions as a near-optimal speech processor for achieving efficient speech communication between humans.
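
    The optimality criterion in this abstract is information-theoretic: among candidate filterbanks, prefer the one whose output carries the most information about the talker's message. A toy version of that comparison, using discrete mutual information on invented two-class data (not the paper's speech corpus or filter designs):

    ```python
    import math
    from collections import Counter

    def mutual_information(pairs):
        # I(X;Y) = sum p(x,y) log2( p(x,y) / (p(x) p(y)) ), from joint counts
        n = len(pairs)
        pxy = Counter(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        return sum(c / n * math.log2(c * n / (px[x] * py[y]))
                   for (x, y), c in pxy.items())

    # "message" is a toy vowel label; filter A's quantized output tracks it
    # perfectly, filter B's output ignores it entirely
    message  = ["u", "u", "a", "a"] * 25
    filter_a = [0 if m == "u" else 1 for m in message]
    filter_b = [0, 1] * 50

    mi_a = mutual_information(list(zip(filter_a, message)))
    mi_b = mutual_information(list(zip(filter_b, message)))
    print(round(mi_a, 2), round(mi_b, 2))  # filter A is the better "receiver"
    ```

    Under this criterion filter A would be selected; the paper's finding is that the human cochlear filterbank scores close to the computational optimum on real speech.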

  6. Mechanism of auditory hypersensitivity in human autism using autism model rats.

    Science.gov (United States)

    Ida-Eto, Michiru; Hara, Nao; Ohkawara, Takeshi; Narita, Masaaki

    2017-04-01

    Auditory hypersensitivity is one of the major complications in autism spectrum disorder. The aim of this study was to investigate whether the auditory brain center is affected in autism model rats. Autism model rats were prepared by prenatal exposure to thalidomide on embryonic day 9 and 10 in pregnant rats. The superior olivary complex (SOC), a complex of auditory nuclei, was immunostained with anti-calbindin d28k antibody at postnatal day 50. In autism model rats, SOC immunoreactivity was markedly decreased. Strength of immunostaining of SOC auditory fibers was also weak in autism model rats. Surprisingly, the size of the medial nucleus of trapezoid body, a nucleus exerting inhibitory function in SOC, was significantly decreased in autism model rats. Auditory hypersensitivity may be, in part, due to impairment of inhibitory processing by the auditory brain center. © 2016 Japan Pediatric Society.

  7. Visual activation and audiovisual interactions in the auditory cortex during speech perception: intracranial recordings in humans.

    Science.gov (United States)

    Besle, Julien; Fischer, Catherine; Bidet-Caulet, Aurélie; Lecaignard, Francoise; Bertrand, Olivier; Giard, Marie-Hélène

    2008-12-24

    Hemodynamic studies have shown that the auditory cortex can be activated by visual lip movements and is a site of interactions between auditory and visual speech processing. However, they provide no information about the chronology and mechanisms of these cross-modal processes. We recorded intracranial event-related potentials to auditory, visual, and bimodal speech syllables from depth electrodes implanted in the temporal lobe of 10 epileptic patients (altogether 932 contacts). We found that lip movements activate secondary auditory areas, very shortly (approximately equal to 10 ms) after the activation of the visual motion area MT/V5. After this putatively feedforward visual activation of the auditory cortex, audiovisual interactions took place in the secondary auditory cortex, from 30 ms after sound onset and before any activity in the polymodal areas. Audiovisual interactions in the auditory cortex, as estimated in a linear model, consisted both of a total suppression of the visual response to lipreading and a decrease of the auditory responses to the speech sound in the bimodal condition compared with unimodal conditions. These findings demonstrate that audiovisual speech integration does not respect the classical hierarchy from sensory-specific to associative cortical areas, but rather engages multiple cross-modal mechanisms at the first stages of nonprimary auditory cortex activation.

  8. Human motion classification using a particle filter approach: multiple model particle filtering applied to the micro-Doppler spectrum

    NARCIS (Netherlands)

    Groot, S.; Harmanny, R.; Driessen, H.; Yarovoy, A.

    2013-01-01

    In this article, a novel motion model-based particle filter implementation is proposed to classify human motion and to estimate key state variables, such as motion type, i.e. running or walking, and the subject’s height. Micro-Doppler spectrum is used as the observable information. The system and

  9. Particle Filter with Gaussian Weighting for Human Tracking

    OpenAIRE

    T. Basaruddin; M. Rahmat Widyanto; Indah Agustien Siradjuddin

    2012-01-01

    Particle filter for object tracking could achieve high tracking accuracy. To track the object, this method generates a number of particles, which are representations of the candidate target object. The location of the target object is determined by the particles and their weights. The disadvantage of the conventional particle filter is the computational time, especially the computation of the particles' weights. A particle filter with Gaussian weighting is proposed to address this computational problem.  ...

  10. Spoken word memory traces within the human auditory cortex revealed by repetition priming and functional magnetic resonance imaging.

    Science.gov (United States)

    Gagnepain, Pierre; Chételat, Gael; Landeau, Brigitte; Dayan, Jacques; Eustache, Francis; Lebreton, Karine

    2008-05-14

    Previous neuroimaging studies in the visual domain have shown that neurons along the perceptual processing pathway retain the physical properties of written words, faces, and objects. The aim of this study was to reveal the existence of similar neuronal properties within the human auditory cortex. Brain activity was measured using functional magnetic resonance imaging during a repetition priming paradigm, with words and pseudowords heard in an acoustically degraded format. Both the amplitude and peak latency of the hemodynamic response (HR) were assessed to determine the nature of the neuronal signature of spoken word priming. A statistically significant stimulus type by repetition interaction was found in various bilateral auditory cortical areas, demonstrating either HR suppression and enhancement for repeated spoken words and pseudowords, respectively, or word-specific repetition suppression without any significant effects for pseudowords. Repetition latency shift only occurred with word-specific repetition suppression in the right middle/posterior superior temporal sulcus. In this region, both repetition suppression and latency shift were related to behavioral priming. Our findings highlight for the first time the existence of long-term spoken word memory traces within the human auditory cortex. The timescale of auditory information integration and the neuronal mechanisms underlying priming both appear to differ according to the level of representations coded by neurons. Repetition may "sharpen" word-nonspecific representations coding short temporal variations, whereas a complex interaction between the activation strength and temporal integration of neuronal activity may occur in neuronal populations coding word-specific representations within longer temporal windows.

  11. Exposure to a novel stimulus environment alters patterns of lateralization in avian auditory cortex.

    Science.gov (United States)

    Yang, L M; Vicario, D S

    2015-01-29

    Perceptual filters formed early in development provide an initial means of parsing the incoming auditory stream. However, these filters may not remain fixed, and may be updated by subsequent auditory input, such that, even in an adult organism, the auditory system undergoes plastic changes to achieve a more efficient representation of the recent auditory environment. Songbirds are an excellent model system for experimental studies of auditory phenomena due to many parallels between song learning in birds and language acquisition in humans. In the present study, we explored the effects of passive immersion in a novel heterospecific auditory environment on neural responses in caudo-medial neostriatum (NCM), a songbird auditory area similar to the secondary auditory cortex in mammals. In zebra finches, a well-studied species of songbirds, NCM responds selectively to conspecific songs and contains a neuronal memory for tutor and other familiar conspecific songs. Adult male zebra finches were randomly assigned to either a conspecific or heterospecific auditory environment. After 2, 4 or 9 days of exposure, subjects were presented with heterospecific and conspecific songs during awake electrophysiological recording. The neural response strength and rate of adaptation to the testing stimuli were recorded bilaterally. Controls exposed to conspecific environment sounds exhibited the normal pattern of hemispheric lateralization with higher absolute response strength and faster adaptation in the right hemisphere. The pattern of lateralization was fully reversed in birds exposed to heterospecific environment for 4 or 9 days and partially reversed in birds exposed to heterospecific environment for 2 days. Our results show that brief passive exposure to a novel category of sounds was sufficient to induce a gradual reorganization of the left and right secondary auditory cortices. These changes may reflect modification of perceptual filters to form a more efficient representation

  12. Auditory processing in the brainstem and audiovisual integration in humans studied with fMRI

    NARCIS (Netherlands)

    Slabu, Lavinia Mihaela

    2008-01-01

    Functional magnetic resonance imaging (fMRI) is a powerful technique because of the high spatial resolution and the noninvasiveness. The applications of the fMRI to the auditory pathway remain a challenge due to the intense acoustic scanner noise of approximately 110 dB SPL. The auditory system cons

  13. Representation of lateralization and tonotopy in primary versus secondary human auditory cortex

    NARCIS (Netherlands)

    Langers, Dave R. M.; Backes, Walter H.; van Dijk, Pim

    2007-01-01

    Functional MRI was performed to investigate differences in the basic functional organization of the primary and secondary auditory cortex regarding preferred stimulus lateralization and frequency. A modified sparse acquisition scheme was used to spatially map the characteristics of the auditory cort

  14. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review

    Directory of Open Access Journals (Sweden)

    Marie-Josée Castellanos

    2016-12-01

    Full Text Available Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals such as heavy metals have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy metal exposure with and without co-exposure to noise in humans. PubMed and Medline were accessed to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available about the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies included in this review found that exposure to cadmium and mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have investigated populations exposed to lead. Some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy metal exposure is still required. Despite this issue, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  15. The Adverse Effects of Heavy Metals with and without Noise Exposure on the Human Peripheral and Central Auditory System: A Literature Review.

    Science.gov (United States)

    Castellanos, Marie-Josée; Fuente, Adrian

    2016-12-09

    Exposure to some chemicals in the workplace can lead to occupational chemical-induced hearing loss. Attention has mainly focused on the adverse auditory effects of solvents. However, other chemicals such as heavy metals have also been identified as ototoxic agents. The aim of this work was to review the current scientific knowledge about the adverse auditory effects of heavy metal exposure with and without co-exposure to noise in humans. PubMed and Medline were accessed to find suitable articles. A total of 49 articles met the inclusion criteria. Results from the review showed that no evidence is available about the ototoxic effects of manganese in humans. Contradictory results have been found for arsenic, lead and mercury, as well as for the possible interaction between heavy metals and noise. All studies included in this review found that exposure to cadmium and mixtures of heavy metals induces auditory dysfunction. Most of the studies investigating the adverse auditory effects of heavy metals in humans have investigated populations exposed to lead. Some of these studies suggest peripheral and central auditory dysfunction induced by lead exposure. It is concluded that further evidence from human studies about the adverse auditory effects of heavy metal exposure is still required. Despite this issue, audiologists and other hearing health care professionals should be aware of the possible auditory effects of heavy metals.

  16. Music-induced cortical plasticity and lateral inhibition in the human auditory cortex as foundations for tonal tinnitus treatment

    Science.gov (United States)

    Pantev, Christo; Okamoto, Hidehiko; Teismann, Henning

    2012-01-01

    Over the past 15 years, we have studied plasticity in the human auditory cortex by means of magnetoencephalography (MEG). Two main topics nurtured our curiosity: the effects of musical training on plasticity in the auditory system, and the effects of lateral inhibition. One of our plasticity studies found that listening to notched music for 3 h inhibited the neuronal activity in the auditory cortex that corresponded to the center-frequency of the notch, suggesting suppression of neural activity by lateral inhibition. Subsequent research on this topic found that suppression was notably dependent upon the notch width employed, that the lower notch-edge induced stronger attenuation of neural activity than the higher notch-edge, and that auditory focused attention strengthened the inhibitory networks. Crucially, the overall effects of lateral inhibition on human auditory cortical activity were stronger than the habituation effects. Based on these results we developed a novel treatment strategy for tonal tinnitus—tailor-made notched music training (TMNMT). By notching the music energy spectrum around the individual tinnitus frequency, we intended to attract lateral inhibition to auditory neurons involved in tinnitus perception. So far, the training strategy has been evaluated in two studies. The results of the initial long-term controlled study (12 months) supported the validity of the treatment concept: subjective tinnitus loudness and annoyance were significantly reduced after TMNMT but not when notching spared the tinnitus frequencies. Correspondingly, tinnitus-related auditory evoked fields (AEFs) were significantly reduced after training. The subsequent short-term (5 days) training study indicated that training was more effective in the case of tinnitus frequencies ≤ 8 kHz compared to tinnitus frequencies >8 kHz, and that training should be employed over a long-term in order to induce more persistent effects. Further development and evaluation of TMNMT therapy

  17. Resolving the neural dynamics of visual and auditory scene processing in the human brain: a methodological approach

    Science.gov (United States)

    Teng, Santani

    2017-01-01

    In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019

  18. Dual Extended Kalman Filter for the Identification of Time-Varying Human Manual Control Behavior

    Science.gov (United States)

    Popovici, Alexandru; Zaal, Peter M. T.; Pool, Daan M.

    2017-01-01

    A Dual Extended Kalman Filter was implemented for the identification of time-varying human manual control behavior. Two filters that run concurrently were used, a state filter that estimates the equalization dynamics, and a parameter filter that estimates the neuromuscular parameters and time delay. Time-varying parameters were modeled as a random walk. The filter successfully estimated time-varying human control behavior in both simulated and experimental data. Simple guidelines are proposed for the tuning of the process and measurement covariance matrices and the initial parameter estimates. The tuning was performed on simulation data, and when applied on experimental data, only an increase in measurement process noise power was required in order for the filter to converge and estimate all parameters. A sensitivity analysis to initial parameter estimates showed that the filter is more sensitive to poor initial choices of neuromuscular parameters than equalization parameters, and bad choices for initial parameters can result in divergence, slow convergence, or parameter estimates that do not have a real physical interpretation. The promising results when applied to experimental data, together with its simple tuning and low dimension of the state-space, make the use of the Dual Extended Kalman Filter a viable option for identifying time-varying human control parameters in manual tracking tasks, which could be used in real-time human state monitoring and adaptive human-vehicle haptic interfaces.
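    The two-concurrent-filter structure described in this abstract can be sketched on a toy scalar system. This is not the authors' implementation (their filters estimate equalization dynamics, neuromuscular parameters, and a time delay); in the sketch below, one Kalman filter tracks the state of a first-order system while a second filter treats the unknown coefficient as a random walk, as in the abstract. All noise variances and the system itself are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two concurrent scalar Kalman filters: a state filter for the AR(1)
    # system x_k = a*x_{k-1} + w_k observed as y_k = x_k + v_k, and a
    # parameter filter modeling the unknown coefficient a as a random walk.

    a_true = 0.8
    q, r = 0.5, 0.1        # process / measurement noise variances (invented)
    qa = 1e-6              # assumed random-walk variance of the parameter

    x_hat, p = 0.0, 1.0    # state filter: estimate and error variance
    a_hat, pa = 0.5, 1.0   # parameter filter: estimate and error variance

    x = 0.0
    for _ in range(3000):
        # Simulate the true system
        x = a_true * x + rng.normal(0.0, np.sqrt(q))
        y = x + rng.normal(0.0, np.sqrt(r))

        x_prev = x_hat  # filtered state from the previous step

        # State filter: predict with the current parameter estimate, update on y
        x_pred = a_hat * x_hat
        p_pred = a_hat ** 2 * p + q
        k = p_pred / (p_pred + r)
        x_hat = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred

        # Parameter filter: y_k ~ a * x_{k-1} + (w_k + v_k) is linear in a,
        # so h = x_prev is the measurement Jacobian and the effective
        # measurement noise variance is q + r
        pa += qa
        h = x_prev
        ka = pa * h / (h * h * pa + q + r)
        a_hat += ka * (y - h * a_hat)
        pa *= 1.0 - ka * h
    ```

    The same pattern scales up to the vector case in the abstract: the parameter filter's random-walk model is what gives the scheme its ability to track time-varying behavior, at the cost of the tuning sensitivity the authors report.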

  19. Correlates of perceptual awareness in human primary auditory cortex revealed by an informational masking experiment.

    Science.gov (United States)

    Wiegand, Katrin; Gutschalk, Alexander

    2012-05-15

    The presence of an auditory event may remain undetected in crowded environments, even when it is well above the sensory threshold. This effect, commonly known as informational masking, allows for isolating neural activity related to perceptual awareness, by comparing repetitions of the same physical stimulus where the target is either detected or not. Evidence from magnetoencephalography (MEG) suggests that auditory-cortex activity in the latency range 50-250 ms is closely coupled with perceptual awareness. Here, BOLD fMRI and MEG were combined to investigate at which stage in the auditory cortex neural correlates of conscious auditory perception can be observed. Participants were asked to indicate the perception of a regularly repeating target tone, embedded within a random multi-tone masking background. Results revealed widespread activation within the auditory cortex for detected target tones, which was delayed but otherwise similar to the activation of an unmasked control stimulus. The contrast of detected versus undetected targets revealed activity confined to medial Heschl's gyrus, where the primary auditory cortex is located. These results suggest that activity related to conscious perception involves the primary auditory cortex and is not restricted to activity in secondary areas.

  20. Particle Filter with Gaussian Weighting for Human Tracking

    Directory of Open Access Journals (Sweden)

    T. Basaruddin

    2012-12-01

    Full Text Available Particle filter for object tracking could achieve high tracking accuracy. To track the object, this method generates a number of particles, which are representations of the candidate target object. The location of the target object is determined by the particles and their weights. The disadvantage of the conventional particle filter is the computational time, especially the computation of the particles' weights. A particle filter with Gaussian weighting is proposed to address this computational problem. There are two main stages in this method, i.e. prediction and update. The difference between the conventional particle filter and the particle filter with Gaussian weighting is in the update stage. In the conventional particle filter method, the weight is calculated for each particle, whereas in the proposed method only certain particles' weights are calculated, and the remaining particles' weights are approximated using Gaussian weighting. Experiments were done using an artificial dataset. The average accuracy is 80.862%. The high accuracy achieved by this method makes it suitable for real-time tracking systems.
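    The prediction/update cycle described in this record can be illustrated with a generic bootstrap particle filter. The sketch below is a minimal 1-D toy showing the conventional baseline the paper improves on: it computes every particle's Gaussian likelihood explicitly rather than applying the authors' Gaussian-weighting approximation, and all model parameters are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, observation,
                             motion_std=0.3, obs_std=1.0):
        """One predict/update cycle of a 1-D bootstrap particle filter."""
        # Prediction: propagate particles under a random-walk motion model
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Update: weight each particle by the Gaussian likelihood of the observation
        likelihood = np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
        weights = weights * likelihood
        weights = weights / weights.sum()
        # Systematic resampling to avoid weight degeneracy
        cum = np.cumsum(weights)
        cum[-1] = 1.0  # guard against floating-point round-off
        positions = (rng.random() + np.arange(len(particles))) / len(particles)
        particles = particles[np.searchsorted(cum, positions)]
        weights = np.full(len(particles), 1.0 / len(particles))
        return particles, weights

    # Track a stationary target at x = 5 from noisy observations
    particles = rng.uniform(0.0, 10.0, size=500)
    weights = np.full(500, 1.0 / 500)
    for _ in range(50):
        observation = 5.0 + rng.normal(0.0, 1.0)
        particles, weights = particle_filter_step(particles, weights, observation)
    estimate = float(np.sum(particles * weights))
    ```

    The proposed method would replace the per-particle likelihood computation with an explicit evaluation for a subset of particles only, interpolating the remaining weights with a Gaussian; that variant is not reproduced here.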

  1. Prediction of Human's Ability in Sound Localization Based on the Statistical Properties of Spike Trains along the Brainstem Auditory Pathway

    Directory of Open Access Journals (Sweden)

    Ram Krips

    2014-01-01

    Full Text Available The minimum audible angle test which is commonly used for evaluating human localization ability depends on interaural time delay, interaural level differences, and spectral information about the acoustic stimulus. These physical properties are estimated at different stages along the brainstem auditory pathway. The interaural time delay is ambiguous at certain frequencies, thus confusion arises as to the source of these frequencies. It is assumed that in a typical minimum audible angle experiment, the brain acts as an unbiased optimal estimator, and thus human performance can be obtained by deriving optimal lower bounds. Two types of lower bounds are tested: the Cramer-Rao and the Barankin. The Cramer-Rao bound only takes into account the approximation of the true direction of the stimulus; the Barankin bound considers other possible directions that arise from the ambiguous phase information. These lower bounds are derived at the output of the auditory nerve and of the superior olivary complex where binaural cues are estimated. Agreement with human experimental data was obtained only when the superior olivary complex was considered and the Barankin lower bound was used. This result suggests that sound localization is estimated by the auditory nuclei using ambiguous binaural information.
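    For reference, the two bound families compared in this record have the following standard forms (textbook definitions, not taken from the paper itself; the Chapman-Robbins inequality shown is the simplest member of the Barankin family):

    ```latex
    % Cramer-Rao bound: for an unbiased estimator \hat\theta of the source
    % direction \theta given observations r (here, auditory spike trains):
    \operatorname{var}(\hat\theta) \ge \frac{1}{I(\theta)}, \qquad
    I(\theta) = \mathbb{E}\!\left[\left(\frac{\partial}{\partial\theta}
      \ln p(r;\theta)\right)^{2}\right].

    % Barankin-type (Chapman-Robbins) bound: the supremum over finite test
    % offsets h lets directions far from \theta that yield similar
    % likelihoods (the phase-ambiguous directions) raise the bound:
    \operatorname{var}(\hat\theta) \ge \sup_{h \neq 0}
      \frac{h^{2}}{\mathbb{E}\!\left[\left(\frac{p(r;\theta+h)}{p(r;\theta)}
      - 1\right)^{2}\right]}.
    ```

    The Cramer-Rao bound depends only on the local curvature of the likelihood at the true direction, which is why, as the abstract notes, only a Barankin-type bound can capture the performance cost of phase-ambiguous interaural cues.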

  2. Functional organization for musical consonance and tonal pitch hierarchy in human auditory cortex.

    Science.gov (United States)

    Bidelman, Gavin M; Grall, Jeremy

    2014-11-01

    Pitch relationships in music are characterized by their degree of consonance, a hierarchical perceptual quality that distinguishes how pleasant musical chords/intervals sound to the ear. The origins of consonance have been debated since the ancient Greeks. To elucidate the neurobiological mechanisms underlying these musical fundamentals, we recorded neuroelectric brain activity while participants listened passively to various chromatic musical intervals (simultaneously sounding pitches) varying in their perceptual pleasantness (i.e., consonance/dissonance). Dichotic presentation eliminated acoustic and peripheral contributions that often confound explanations of consonance. We found that neural representations for pitch in early human auditory cortex code perceptual features of musical consonance and follow a hierarchical organization according to music-theoretic principles. These neural correlates emerge pre-attentively within ~ 150 ms after the onset of pitch, are segregated topographically in superior temporal gyrus with a rightward hemispheric bias, and closely mirror listeners' behavioral valence preferences for the chromatic tone combinations inherent to music. A perceptual-based organization implies that parallel to the phonetic code for speech, elements of music are mapped within early cerebral structures according to higher-order, perceptual principles and the rules of Western harmony rather than simple acoustic attributes.

  3. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  4. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, Fatemeh; Kinnaird, Catherine; Wood, Scott; Bloomberg, Jacob; Mulavara, Ajitkumar; Seidler, Rachael

    2014-01-01

    The aim of the current study was to characterize the brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit saccular Vestibular Evoked Myogenic Potentials (VEMP) (Colebatch & Halmagyi 1992; Colebatch et al. 1994). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle VEMPs, while being faster and less irritating for the subjects (Curthoys et al. 2009, Wackym et al., 2012). However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from semicircular canals. This is of high importance for studying the vestibular disorders related to otolith deficits. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, pre and post central gyri, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation (Bottini et al., 1994; Dieterich et al., 2003; Emri et al., 2003; Schlindwein et al., 2008; Janzen et al., 2008). Here we hypothesized that the skull tap elicits the same pattern of cortical activity as the auditory tone burst. Subjects put on a set of MR-compatible skull tappers and headphones inside the 3T GE scanner, while lying in supine position, with eyes closed. All subjects received both forms of the stimulation; however, the order of stimulation with auditory tone burst and air-conducted skull tap was counterbalanced across subjects. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular cortex, resulting in vestibular response (Halmagyi et al., 1995). Auditory tone bursts were also delivered for comparison. To validate

  5. Chemical UV Filters Mimic the Effect of Progesterone on Ca(2+) Signaling in Human Sperm Cells

    DEFF Research Database (Denmark)

    Rehfeld, A; Dissing, S; Skakkebæk, N E

    2016-01-01

    Progesterone released by cumulus cells surrounding the egg induces a Ca(2+) influx into human sperm cells via the cationic channel of sperm (CatSper) Ca(2+) channel and controls multiple Ca(2+)-dependent responses essential for fertilization. We hypothesized that chemical UV filters may mimic...... competitively inhibited progesterone-induced Ca(2+) signals. In vivo exposure studies are needed to investigate whether UV filter exposure affects human fertility....

  6. The neurochemical basis of human cortical auditory processing: combining proton magnetic resonance spectroscopy and magnetoencephalography

    Directory of Open Access Journals (Sweden)

    Tollkötter Melanie

    2006-08-01

    Full Text Available Background: A combination of magnetoencephalography and proton magnetic resonance spectroscopy was used to correlate the electrophysiology of rapid auditory processing and the neurochemistry of the auditory cortex in 15 healthy adults. To assess rapid auditory processing in the left auditory cortex, the amplitude and decrement of the N1m peak, the major component of the late auditory evoked response, were measured during rapidly successive presentation of acoustic stimuli. We tested the hypothesis that: (i) the amplitude of the N1m response and (ii) its decrement during rapid stimulation are associated with the cortical neurochemistry as determined by proton magnetic resonance spectroscopy. Results: Our results demonstrated a significant association between the concentrations of N-acetylaspartate, a marker of neuronal integrity, and the amplitudes of individual N1m responses. In addition, the concentrations of choline-containing compounds, representing the functional integrity of membranes, were significantly associated with N1m amplitudes. No significant association was found between the concentrations of the glutamate/glutamine pool and the amplitudes of the first N1m. No significant associations were seen between the decrement of the N1m (the relative amplitude of the second N1m peak) and the concentrations of N-acetylaspartate, choline-containing compounds, or the glutamate/glutamine pool. However, there was a trend for higher glutamate/glutamine concentrations in individuals with higher relative N1m amplitude. Conclusion: These results suggest that neuronal and membrane functions are important for rapid auditory processing. This investigation provides a first link between the electrophysiology, as recorded by magnetoencephalography, and the neurochemistry, as assessed by proton magnetic resonance spectroscopy, of the auditory cortex.

  7. Parcellation of Human and Monkey Core Auditory Cortex with fMRI Pattern Classification and Objective Detection of Tonotopic Gradient Reversals.

    Science.gov (United States)

    Schönwiesner, Marc; Dechent, Peter; Voit, Dirk; Petkov, Christopher I; Krumbholz, Katrin

    2015-10-01

    Auditory cortex (AC) contains several primary-like, or "core," fields, which receive thalamic input and project to non-primary "belt" fields. In humans, the organization and layout of core and belt auditory fields are still poorly understood, and most auditory neuroimaging studies rely on macroanatomical criteria, rather than functional localization of distinct fields. A myeloarchitectonic method has been suggested recently for distinguishing between core and belt fields in humans (Dick F, Tierney AT, Lutti A, Josephs O, Sereno MI, Weiskopf N. 2012. In vivo functional and myeloarchitectonic mapping of human primary auditory areas. J Neurosci. 32:16095-16105). We propose a marker for core AC based directly on functional magnetic resonance imaging (fMRI) data and pattern classification. We show that a portion of AC in Heschl's gyrus classifies sound frequency more accurately than other regions in AC. Using fMRI data from macaques, we validate that the region where frequency classification performance is significantly above chance overlaps core auditory fields, predominantly A1. Within this region, we measure tonotopic gradients and estimate the locations of the human homologues of the core auditory subfields A1 and R. Our results provide a functional rather than anatomical localizer for core AC. We posit that inter-individual variability in the layout of core AC might explain disagreements between results from previous neuroimaging and cytological studies.

  8. Acute stress alters auditory selective attention in humans independent of HPA: a study of evoked potentials.

    Directory of Open Access Journals (Sweden)

    Ludger Elling

    Full Text Available BACKGROUND: Acute stress is a stereotypical, but multimodal response to a present or imminent challenge overcharging an organism. Among the different branches of this multimodal response, the consequences of glucocorticoid secretion have been extensively investigated, mostly in connection with long-term memory (LTM). However, stress responses comprise other endocrine signaling and altered neuronal activity wholly independent of pituitary regulation. To date, knowledge of the impact of such "paracorticoidal" stress responses on higher cognitive functions is scarce. We investigated the impact of an ecological stressor on the ability to direct selective attention using event-related potentials in humans. Based on research in rodents, we assumed that a stress-induced imbalance of catecholaminergic transmission would impair this ability. METHODOLOGY/PRINCIPAL FINDINGS: The stressor consisted of a single cold pressor test. Auditory negative difference (Nd) and mismatch negativity (MMN) were recorded in a tonal dichotic listening task. A time series of such tasks confirmed an increased distractibility occurring 4-7 minutes after onset of the stressor, as reflected by an attenuated Nd. Salivary cortisol began to rise 8-11 minutes after onset, when no further modulations in the event-related potentials (ERP) occurred, thus precluding a causal relationship. This effect may be attributed to a stress-induced activation of mesofrontal dopaminergic projections. It may also be attributed to an activation of noradrenergic projections. Known characteristics of the modulation of ERP by different stress-related ligands were used for further disambiguation of causality. The conjuncture of an attenuated Nd and an increased MMN might be interpreted as indicating a dopaminergic influence. The selective effect on the late portion of the Nd provides another tentative clue for this. CONCLUSIONS/SIGNIFICANCE: Prior studies have deliberately tracked the adrenocortical influence

  9. DEVELOPING ‘STANDARD NOVEL ‘VAD’ TECHNIQUE’ AND ‘NOISE FREE SIGNALS’ FOR SPEECH AUDITORY BRAINSTEM RESPONSES FOR HUMAN SUBJECTS

    OpenAIRE

    Ranganadh Narayanam*

    2016-01-01

    In this research as a first step we have concentrated on collecting non-intra cortical EEG data of Brainstem Speech Evoked Potentials from human subjects in an Audiology Lab in University of Ottawa. The problems we have considered are the most advanced and most essential problems of interest in Auditory Neural Signal Processing area in the world: The first problem is the Voice Activity Detection (VAD) in Speech Auditory Brainstem Responses (ABR); The second problem is to identify the best De-...
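
    The voice activity detection (VAD) problem named above can be illustrated with a minimal frame-energy detector. This is a generic sketch, not the author's technique; the frame length, the dB threshold, and the normalization relative to the loudest frame are all assumptions for illustration:

```python
import math

def frame_energy_vad(signal, frame_len=160, threshold_db=-30.0):
    """Label each frame True (voice) or False (silence) by comparing
    its energy, in dB relative to the loudest frame, to a threshold."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    energies = [sum(s * s for s in f) / frame_len for f in frames]
    peak = max(energies) or 1e-12          # guard against all-zero input
    return [10.0 * math.log10(e / peak + 1e-12) > threshold_db
            for e in energies]
```

    On clean recordings this flags stimulus-bearing frames; a practical ABR pipeline would add noise-floor tracking and hangover smoothing on top of such a detector.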

  10. Mapping the after-effects of theta burst stimulation on the human auditory cortex with functional imaging.

    Science.gov (United States)

    Andoh, Jamila; Zatorre, Robert J

    2012-09-12

    Auditory cortex pertains to the processing of sound, which is at the basis of speech or music-related processing. However, despite considerable recent progress, the functional properties and lateralization of the human auditory cortex are far from being fully understood. Transcranial Magnetic Stimulation (TMS) is a non-invasive technique that can transiently or lastingly modulate cortical excitability via the application of localized magnetic field pulses, and represents a unique method of exploring plasticity and connectivity. It has only recently begun to be applied to understand auditory cortical function. An important issue in using TMS is that the physiological consequences of the stimulation are difficult to establish. Although many TMS studies make the implicit assumption that the area targeted by the coil is the area affected, this need not be the case, particularly for complex cognitive functions which depend on interactions across many brain regions. One solution to this problem is to combine TMS with functional magnetic resonance imaging (fMRI). The idea here is that fMRI will provide an index of changes in brain activity associated with TMS. Thus, fMRI would give an independent means of assessing which areas are affected by TMS and how they are modulated. In addition, fMRI allows the assessment of functional connectivity, which represents a measure of the temporal coupling between distant regions. It can thus be useful not only to measure the net activity modulation induced by TMS in given locations, but also the degree to which the network properties are affected by TMS, via any observed changes in functional connectivity. Different approaches exist to combine TMS and functional imaging according to the temporal order of the methods. Functional MRI can be applied before, during, after, or both before and after TMS. Recently, some studies interleaved TMS and fMRI in order to provide online mapping of the functional changes induced by TMS.
However, this

  11. Discrimination of timbre in early auditory responses of the human brain.

    Directory of Open Access Journals (Sweden)

    Jaeho Seol

    Full Text Available BACKGROUND: The issue of how differences in timbre are represented in the neural response still has not been well addressed, particularly with regard to the relevant brain mechanisms. Here we employ phasing and clipping of tones to produce auditory stimuli differing in timbre, capturing its multidimensional nature. We investigated the auditory response and sensory gating as well, using magnetoencephalography (MEG). METHODOLOGY/PRINCIPAL FINDINGS: Thirty-five healthy subjects without hearing deficit participated in the experiments. Two tones of the same or different timbre were presented as a pair in a conditioning (S1)-testing (S2) paradigm with an interval of 500 ms. The magnitudes of the auditory M50 and M100 responses differed with timbre in both hemispheres. This result supports the view that timbre, at least as varied by phasing and clipping, is discriminated in early auditory processing. The effect of S1 on the second response in a pair occurred in the M100 of the left hemisphere, whereas both the M50 and M100 responses to S2 in the right hemisphere reflected whether the two stimuli in a pair were the same or not. Both M50 and M100 magnitudes differed with presentation order (S1 vs. S2) for both same and different conditions in both hemispheres. CONCLUSIONS/SIGNIFICANCE: Our results demonstrate that the auditory response depends on timbre characteristics. Moreover, it was revealed that auditory sensory gating is determined not by the stimulus that directly evokes the response, but rather by whether or not the two stimuli are identical in timbre.

  12. Atypical Bilateral Brain Synchronization in the Early Stage of Human Voice Auditory Processing in Young Children with Autism

    Science.gov (United States)

    Kurita, Toshiharu; Kikuchi, Mitsuru; Yoshimura, Yuko; Hiraishi, Hirotoshi; Hasegawa, Chiaki; Takahashi, Tetsuya; Hirosawa, Tetsu; Furutani, Naoki; Higashida, Haruhiro; Ikeda, Takashi; Mutou, Kouhei; Asada, Minoru; Minabe, Yoshio

    2016-01-01

    Autism spectrum disorder (ASD) has been postulated to involve impaired neuronal cooperation in large-scale neural networks, including cortico-cortical interhemispheric circuitry. In the context of ASD, alterations in both peripheral and central auditory processes have also attracted a great deal of interest because these changes appear to represent pathophysiological processes; therefore, many prior studies have focused on atypical auditory responses in ASD. The auditory evoked field (AEF), recorded by magnetoencephalography, and the synchronization of these processes between right and left hemispheres was recently suggested to reflect various cognitive abilities in children. However, to date, no previous study has focused on AEF synchronization in ASD subjects. To assess global coordination across spatially distributed brain regions, the analysis of Omega complexity from multichannel neurophysiological data was proposed. Using Omega complexity analysis, we investigated the global coordination of AEFs in 3–8-year-old typically developing (TD) children (n = 50) and children with ASD (n = 50) in 50-ms time-windows. Children with ASD displayed significantly higher Omega complexities compared with TD children in the time-window of 0–50 ms, suggesting lower whole brain synchronization in the early stage of the P1m component. When we analyzed the left and right hemispheres separately, no significant differences in any time-windows were observed. These results suggest lower right-left hemispheric synchronization in children with ASD compared with TD children. Our study provides new evidence of aberrant neural synchronization in young children with ASD by investigating auditory evoked neural responses to the human voice. PMID:27074011

  13. Robust recovery of human motion from video using Kalman filters and virtual humans.

    Science.gov (United States)

    Cerveri, P; Pedotti, A; Ferrigno, G

    2003-08-01

    In sport science, as in clinical gait analysis, optoelectronic motion capture systems based on passive markers are widely used to recover human movement. By processing the corresponding image points, as recorded by multiple cameras, the human kinematics is resolved through multistage processing involving spatial reconstruction, trajectory tracking, joint angle determination, and derivative computation. Key problems with this approach are that marker data can be indistinct, occluded or missing from certain cameras, that phantom markers may be present, and that both 3D reconstruction and tracking may fail. In this paper, we present a novel technique, based on state space filters, that directly estimates the kinematical variables of a virtual mannequin (biomechanical model) from 2D measurements, that is, without requiring 3D reconstruction and tracking. Using Kalman filters, the configuration of the model in terms of joint angles, first and second order derivatives is automatically updated in order to minimize the distances, as measured on TV-cameras, between the 2D measured markers placed on the subject and the corresponding back-projected virtual markers located on the model. The Jacobian and Hessian matrices of the nonlinear observation function are computed through a multidimensional extension of Stirling's interpolation formula. Extensive experiments on simulated and real data confirmed the reliability of the developed system that is robust against false matching and severe marker occlusions. In addition, we show how the proposed technique can be extended to account for skin artifacts and model inaccuracy.
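
    The Jacobians of the nonlinear observation function are computed above via a multidimensional extension of Stirling's interpolation formula; its first-order term reduces to a central difference, which can be sketched as follows (a generic illustration, not the authors' code):

```python
def numerical_jacobian(h, x, step=1e-5):
    """Central-difference approximation of the Jacobian of h at x
    (the first-order term of Stirling's interpolation formula)."""
    fx = h(x)
    m, n = len(fx), len(x)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += step
        xm[j] -= step
        fp, fm = h(xp), h(xm)
        for i in range(m):
            J[i][j] = (fp[i] - fm[i]) / (2 * step)
    return J
```

    The appeal of this derivative-free scheme in a tracking loop is that the observation function (here, camera back-projection of virtual markers) only needs to be evaluated, never differentiated analytically.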

  14. Amplitude and phase equalization of stimuli for click evoked auditory brainstem responses.

    Science.gov (United States)

    Beutelmann, Rainer; Laumen, Geneviève; Tollin, Daniel; Klump, Georg M

    2015-01-01

    Although auditory brainstem responses (ABRs), the sound-evoked brain activity in response to transient sounds, are routinely measured in humans and animals there are often differences in ABR waveform morphology across studies. One possible reason may be the method of stimulus calibration. To explore this hypothesis, click-evoked ABRs were measured from seven ears in four Mongolian gerbils (Meriones unguiculatus) using three common spectrum calibration strategies: Minimum phase filter, linear phase filter, and no filter. The results show significantly higher ABR amplitude and signal-to-noise ratio, and better waveform resolution with the minimum phase filtered click than with the other strategies.
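
    A minimum-phase version of a stimulus can be constructed from its magnitude spectrum via the real cepstrum. The sketch below shows the textbook cepstral-folding method with numpy; it is a generic construction, not the study's calibration procedure, which was built from measured transducer responses:

```python
import numpy as np

def minimum_phase(x, n=None, eps=1e-12):
    """Return a minimum-phase signal with (approximately) the same
    magnitude spectrum as x, via real-cepstrum folding."""
    n = n or len(x)
    logmag = np.log(np.abs(np.fft.fft(x, n)) + eps)   # log magnitude spectrum
    cep = np.fft.ifft(logmag).real                     # real cepstrum (even)
    fold = np.zeros(n)                                 # fold onto causal part
    fold[0] = cep[0]
    fold[1:(n + 1) // 2] = 2 * cep[1:(n + 1) // 2]
    if n % 2 == 0:
        fold[n // 2] = cep[n // 2]
    return np.fft.ifft(np.exp(np.fft.fft(fold))).real
```

    The output concentrates its energy as early in time as the magnitude spectrum allows, which is why minimum-phase clicks can yield sharper evoked-response morphology than unfiltered ones.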

  15. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both the intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. In later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integrations between sensory, perceptual, attentional, motor, emotional, and executive processes.
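
    Granger causality of the kind used here asks whether the past of one signal improves prediction of another beyond that signal's own past. A minimal bivariate sketch with plain least squares follows; the study's analysis is more elaborate, so treat the lag order and the variance-ratio statistic as illustrative assumptions:

```python
import numpy as np

def granger_gain(x, y, lags=2):
    """Variance-reduction form of Granger causality y -> x: compare
    residuals of predicting x[t] from its own past vs. its own past
    plus the past of y. Returns log(var_restricted / var_full)."""
    T = len(x)
    rows_own, rows_full, targets = [], [], []
    for t in range(lags, T):
        own = [x[t - k] for k in range(1, lags + 1)]
        cross = [y[t - k] for k in range(1, lags + 1)]
        rows_own.append(own)
        rows_full.append(own + cross)
        targets.append(x[t])
    b = np.array(targets)
    A1, A2 = np.array(rows_own), np.array(rows_full)
    r1 = b - A1 @ np.linalg.lstsq(A1, b, rcond=None)[0]
    r2 = b - A2 @ np.linalg.lstsq(A2, b, rcond=None)[0]
    return float(np.log(np.var(r1) / np.var(r2)))  # > 0: y helps predict x
```

    A strongly positive value in one direction and a near-zero value in the other is the asymmetry that directional analyses like the one above exploit.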

  16. Modeling human auditory evoked brainstem responses based on nonlinear cochlear processing

    DEFF Research Database (Denmark)

    Harte, James; Rønne, Filip Munch; Dau, Torsten

    2010-01-01

    (ABR) to transient sounds and frequency following responses (FFR) to tones. The model includes important cochlear processing stages (Zilany and Bruce, 2006) such as basilar-membrane (BM) tuning and compression, inner hair-cell (IHC) transduction, and IHC auditory-nerve (AN) synapse adaptation...

  17. Auditory Pattern Memory: Mechanisms of Tonal Sequence Discrimination by Human Observers

    Science.gov (United States)

    1988-10-30

    and Creelman (1977) in a study of categorical perception. Tanner’s model included a short-term decaying memory for the acoustic input to the system plus...auditory pattern components, J. Acoust. Soc. Am., 76, 1037-1044. Macmillan, N. A., Kaplan, H. L., & Creelman, C. D. (1977). The psychophysics of

  18. Remarks on human body posture estimation from silhouette image based on heuristic rules and Kalman filter

    Science.gov (United States)

    Takahashi, Kazuhiko; Naemura, Masahide

    2005-12-01

    This paper proposes a human body posture estimation method based on analysis of the human silhouette and a Kalman filter. The method combines a heuristic technique for extracting the significant points of the human body with a contour analysis of the silhouette. The 2D coordinates of significant points such as the top of the head and the tips of the feet are located by applying the heuristic extraction to the silhouette; those of the tips of the hands are obtained from the contour analysis; and the joints of the elbows and knees are estimated by applying heuristic rules to the contour image of the silhouette. The estimates are then optimized and tracked with a Kalman filter. The proposed method is implemented on a personal computer and runs in real time. Experimental results show both the feasibility and the effectiveness of the method for estimating human body postures.
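
    The optimize-and-track step can be illustrated with the standard Kalman filter recursion; below is a minimal constant-velocity tracker for one coordinate of a significant point. The state model and noise settings are toy assumptions, not the paper's:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over noisy 1-D positions.
    State = [position, velocity]; returns the filtered positions."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    xhat = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        xhat = F @ xhat
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        xhat = xhat + K @ (np.array([z]) - H @ xhat)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(xhat[0]))
    return out
```

    The same recursion, with a larger state vector of joint angles and their derivatives, is what a model-based tracker iterates at each frame.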

  19. Functional Imaging of Human Vestibular Cortex Activity Elicited by Skull Tap and Auditory Tone Burst

    Science.gov (United States)

    Noohi, F.; Kinnaird, C.; Wood, S.; Bloomberg, J.; Mulavara, A.; Seidler, R.

    2016-01-01

    The current study characterizes brain activation in response to two modes of vestibular stimulation: skull tap and auditory tone burst. The auditory tone burst has been used in previous studies to elicit either the vestibulo-spinal reflex (saccular-mediated cervical Vestibular Evoked Myogenic Potentials (cVEMP)), or the ocular muscle response (utricle-mediated ocular VEMP (oVEMP)). Some researchers have reported that air-conducted skull tap elicits both saccular and utricle-mediated VEMPs, while being faster and less irritating for the subjects. However, it is not clear whether the skull tap and auditory tone burst elicit the same pattern of cortical activity. Both forms of stimulation target the otolith response, which provides a measurement of vestibular function independent from the semicircular canals. This is of high importance for studying otolith-specific deficits, including the gait and balance problems that astronauts experience upon returning to Earth. Previous imaging studies have documented activity in the anterior and posterior insula, superior temporal gyrus, inferior parietal lobule, inferior frontal gyrus, and the anterior cingulate cortex in response to different modes of vestibular stimulation. Here we hypothesized that skull taps elicit patterns of cortical activity similar to those of auditory tone bursts and those reported in previous vestibular imaging studies. Subjects wore bilateral MR compatible skull tappers and headphones inside the 3T GE scanner, while lying in the supine position, with eyes closed. Subjects received both forms of the stimulation in a counterbalanced fashion. Pneumatically powered skull tappers were placed bilaterally on the cheekbones. The vibration of the cheekbone was transmitted to the vestibular system, resulting in the vestibular cortical response. Auditory tone bursts were also delivered for comparison. To validate our stimulation method, we measured the ocular VEMP outside of the scanner.
This measurement showed that both skull tap and auditory

  20. Exploiting independent filter bandwidth of human factor cepstral coefficients in automatic speech recognition

    Science.gov (United States)

    Skowronski, Mark D.; Harris, John G.

    2004-09-01

    Mel frequency cepstral coefficients (MFCC) are the most widely used speech features in automatic speech recognition systems, primarily because the coefficients fit well with the assumptions used in hidden Markov models and because of the superior noise robustness of MFCC over alternative feature sets such as linear prediction-based coefficients. The authors have recently introduced human factor cepstral coefficients (HFCC), a modification of MFCC that uses the known relationship between center frequency and critical bandwidth from human psychoacoustics to decouple filter bandwidth from filter spacing. In this work, the authors introduce a variation of HFCC called HFCC-E in which filter bandwidth is linearly scaled in order to investigate the effects of wider filter bandwidth on noise robustness. Experimental results show an increase in signal-to-noise ratio of 7 dB over traditional MFCC algorithms when filter bandwidth increases in HFCC-E. An important attribute of both HFCC and HFCC-E is that the algorithms only differ from MFCC in the filter bank coefficients: increased noise robustness using wider filters is achieved with no additional computational cost.
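
    The decoupling of filter bandwidth from filter spacing can be sketched with triangular filters whose widths follow a critical-band rule scaled by a linear factor. The triangular shape and the Glasberg-Moore ERB formula below are illustrative assumptions; HFCC's exact bandwidth rule differs in detail:

```python
def triangular_filter(center_hz, bandwidth_hz, freqs_hz):
    """One triangular filter: unit gain at the center, zero at
    center +/- bandwidth (a simplified stand-in for an HFCC band)."""
    return [max(0.0, 1.0 - abs(f - center_hz) / bandwidth_hz)
            for f in freqs_hz]

def scaled_filter_bank(centers_hz, freqs_hz, e_factor=1.0):
    """Filter bank whose bandwidths follow an ERB-like critical-band
    rule, scaled linearly by e_factor while the centers stay fixed --
    the HFCC-E idea of widening filters without changing spacing."""
    bank = []
    for fc in centers_hz:
        erb = 24.7 * (4.37 * fc / 1000.0 + 1.0)   # ERB in Hz, fc in Hz
        bank.append(triangular_filter(fc, e_factor * erb, freqs_hz))
    return bank
```

    Because only the filter coefficients change, swapping such a bank into an MFCC front end adds no computational cost, which is the attribute the abstract highlights.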

  1. Efficient Recognition of Human Faces from Video in Particle Filter

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Face recognition from video requires dealing with uncertainty both in tracking and recognition. This paper proposes an effective method for face recognition from video. In order to realize simultaneous tracking and recognition, fisherface-based recognition is combined with tracking into one model. This model is then embedded into a particle filter to perform face recognition from video. In order to improve the robustness of tracking, an expectation maximization (EM) algorithm is adopted to update the appearance model. The experimental results show that the proposed method performs well in tracking and recognition even in poor conditions such as occlusion and remarkable changes in lighting.

  2. Somatosensory stimulation activates human auditory cortex

    Institute of Scientific and Technical Information of China (English)

    蒋宇钢; 周倩; 张明铭

    2011-01-01

    Objective: To explore whether somatosensory stimulation can activate the human auditory cortex and to provide new evidence for the auditory cortex as a multisensory area. Methods: Intrinsic optical signals from the superior temporal gyrus were measured intraoperatively in five anesthetized patients with temporal lobe tumors. After the superior temporal gyrus was exposed, each patient received sound (100 dB) and somatosensory stimulation, and changes in the intrinsic optical signal of the primary and secondary auditory cortex (BA 41, 42) were observed under red illumination at (610 ± 10) nm. Results: Under red light we clearly detected hemodynamic responses in the primary and secondary auditory cortex (BA 41, 42) evoked by 100 dB clicks (n = 5), and somatosensory stimulation activated a similar area with a response pattern not clearly different from that of the auditory stimulation (n = 4). Conclusion: Somatosensory stimulation can activate the auditory cortex, which may be new evidence for the auditory cortex as a multisensory area.

  3. Triple-Quantum Filtered NMR Imaging of Sodium -23 in the Human Brain

    Science.gov (United States)

    Keltner, John Robinson

    In the past, multiple-quantum filtered imaging of biexponential relaxation sodium-23 nuclei in the human brain has been limited by low signal-to-noise ratios; this thesis demonstrates that such imaging is feasible when using a modified gradient-selected triple-quantum filter at a repetition time which maximizes the signal-to-noise ratio. Nuclear magnetic resonance imaging of biexponential relaxation sodium-23 (^{23}Na) nuclei in the human brain may be useful for detecting ischemia, cancer, and pathophysiology related to manic-depression. Multiple-quantum filters may be used to selectively image biexponential relaxation ^{23}Na signals since these filters suppress single-exponential relaxation ^{23}Na signals. In this thesis, the typical repetition times (200-300 ms) used for in vivo multiple-quantum filtered ^{23}Na experiments are shown to be approximately 5 times greater than the optimal repetition time which maximizes multiple-quantum filtered SNR. Calculations and experimental verification show that the gradient-selected triple-quantum (GS3Q) filtered SNR for ^{23}Na in a 4% agarose gel increases by a factor of two as the repetition time decreases from 300 ms to 55 ms. It is observed that a simple reduction of repetition time also increases spurious single-quantum signals from GS3Q filtered experiments. Irreducible superoperator calculations have been used to design a modified GS3Q filter which more effectively suppresses the spurious single-quantum signals. The modified GS3Q filter includes a preparatory crusher gradient and two-step phase cycling. Using the modified GS3Q filter and a repetition time of 70 ms, a three-dimensional triple-quantum filtered image of a phantom modelling ^{23}Na in the brain was obtained. The phantom consisted of two 4 cm diameter spheres inside of an 8.5 cm x 7 cm ellipsoid. The two spheres contained 0.012 and 0.024 M ^{23}Na in 4% agarose gel. Surrounding the spheres and inside the ellipsoid was 0.03 M aqueous ^{23}Na.
The image

  4. Sparse Spectro-Temporal Receptive Fields Based on Multi-Unit and High-Gamma Responses in Human Auditory Cortex.

    Directory of Open Access Journals (Sweden)

    Rick L Jenison

    Full Text Available Spectro-Temporal Receptive Fields (STRFs) were estimated from both multi-unit sorted clusters and high-gamma power responses in human auditory cortex. Intracranial electrophysiological recordings were used to measure responses to a random chord sequence of Gammatone stimuli. Traditional methods for estimating STRFs from single-unit recordings, such as spike-triggered-averages, tend to be noisy and are less robust to other response signals such as local field potentials. We present an extension to recently advanced methods for estimating STRFs from generalized linear models (GLMs). A new variant of regression using regularization that penalizes non-zero coefficients is described, which results in a sparse solution. The frequency-time structure of the STRF tends toward grouping in different areas of frequency-time, and we demonstrate that group sparsity-inducing penalties applied to GLM estimates of STRFs reduce the background noise while preserving the complex internal structure. The contribution of local spiking activity to the high-gamma power signal was factored out of the STRF using the GLM method, and this contribution was significant in 85 percent of the cases. Although the GLM methods have been used to estimate STRFs in animals, this study examines the detailed structure directly from auditory cortex in the awake human brain. We used this approach to identify an abrupt change in the best frequency of estimated STRFs along posteromedial-to-anterolateral recording locations along the long axis of Heschl's gyrus. This change correlates well with a proposed transition from core to non-core auditory fields previously identified using the temporal response properties of Heschl's gyrus recordings elicited by click-train stimuli.
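
    Group sparsity-inducing penalties of this kind are typically handled with a proximal ("group soft-threshold") step: each group of coefficients is shrunk jointly, and a whole group is zeroed when its norm falls below the penalty. A minimal sketch of that one step (the paper's full GLM fitting is more involved):

```python
import math

def group_soft_threshold(beta, groups, lam):
    """Proximal step for a group-lasso penalty: shrink each group of
    coefficients toward zero jointly; zero the entire group when its
    L2 norm is below lam."""
    out = [0.0] * len(beta)
    for g in groups:
        norm = math.sqrt(sum(beta[i] ** 2 for i in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * beta[i]
    return out
```

    Applied to STRF coefficients grouped into frequency-time patches, this is what suppresses background noise while keeping coherent internal structure intact.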

  5. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  6. Induction of plasticity in the human motor cortex by pairing an auditory stimulus with TMS.

    Science.gov (United States)

    Sowman, Paul F; Dueholm, Søren S; Rasmussen, Jesper H; Mrachacz-Kersting, Natalie

    2014-01-01

    Acoustic stimuli can cause a transient increase in the excitability of the motor cortex. The current study leverages this phenomenon to develop a method for testing the integrity of auditorimotor integration and the capacity for auditorimotor plasticity. We demonstrate that appropriately timed transcranial magnetic stimulation (TMS) of the hand area, paired with auditorily mediated excitation of the motor cortex, induces an enhancement of motor cortex excitability that lasts beyond the time of stimulation. This result demonstrates for the first time that paired associative stimulation (PAS)-induced plasticity within the motor cortex is applicable with auditory stimuli. We propose that the method developed here might provide a useful tool for future studies that measure auditory-motor connectivity in communication disorders.

  7. Spectro-temporal analysis of complex sounds in the human auditory system

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2009-01-01

    Most sounds encountered in our everyday life carry information in terms of temporal variations of their envelopes. These envelope variations, or amplitude modulations, shape the basic building blocks for speech, music, and other complex sounds. Often a mixture of such sounds occurs in natural...... acoustic scenes, with each of the sounds having its own characteristic pattern of amplitude modulations. Complex sounds, such as speech, share the same amplitude modulations across a wide range of frequencies. This "comodulation" is an important characteristic of these sounds since it can enhance...... in conditions which are sensitive to cochlear suppression. The fourth chapter examines the role of cognitive processing in different stimulus paradigms: CMR, binaural masking level differences and modulation detection interference are investigated in contexts of auditory grouping. It is shown that auditory...

  8. Differential maturation of brain signal complexity in the human auditory and visual system

    Directory of Open Access Journals (Sweden)

    Sarah Lippe

    2009-11-01

    Full Text Available Brain development carries with it a large number of structural changes at the local level which impact on the functional interactions of distributed neuronal networks for perceptual processing. Such changes enhance information processing capacity, which can be indexed by estimation of neural signal complexity. Here, we show that during development, EEG signal complexity increases from one month to 5 years of age in response to auditory and visual stimulation. However, the rates of change in complexity were not equivalent for the two responses. Infants’ signal complexity for the visual condition was greater than auditory signal complexity, whereas adults showed the same level of complexity to both types of stimuli. The differential rates of complexity change may reflect a combination of innate and experiential factors on the structure and function of the two sensory systems.
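
    Neural signal complexity of this kind is commonly indexed with entropy measures; the sketch below implements sample entropy as one stand-in (an assumption for illustration; the study's actual complexity estimator may differ):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -log of the conditional probability that
    subsequences matching for m points (Chebyshev distance < r)
    also match for m + 1 points. Higher = more irregular."""
    def match_count(mm):
        templates = [x[i:i + mm] for i in range(len(x) - mm + 1)]
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    c += 1
        return c
    b, a = match_count(m), match_count(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

    A perfectly periodic signal scores near zero, while an irregular one scores high, which is the sense in which developmental increases in complexity are quantified.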

  9. The Effect of Temporal Context on the Sustained Pitch Response in Human Auditory Cortex

    OpenAIRE

    Gutschalk, Alexander; Patterson, Roy D.; Scherg, Michael; Uppenkamp, Stefan; Rupp, André

    2006-01-01

    Recent neuroimaging studies have shown that activity in lateral Heschl’s gyrus covaries specifically with the strength of musical pitch. Pitch strength is important for the perceptual distinctiveness of an acoustic event, but in complex auditory scenes, the distinctiveness of an event also depends on its context. In this magnetoencephalography study, we evaluate how temporal context influences the sustained pitch response (SPR) in lateral Heschl’s gyrus. In 2 sequences of continuously alterna...

  10. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS).

    Science.gov (United States)

    Issa, Mohamad; Bisconti, Silvia; Kovelman, Ioulia; Kileny, Paul; Basura, Gregory J

    2016-01-01

    Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  11. Human Auditory and Adjacent Nonauditory Cerebral Cortices Are Hypermetabolic in Tinnitus as Measured by Functional Near-Infrared Spectroscopy (fNIRS

    Directory of Open Access Journals (Sweden)

    Mohamad Issa

    2016-01-01

    Full Text Available Tinnitus is the phantom perception of sound in the absence of an acoustic stimulus. To date, the purported neural correlates of tinnitus from animal models have not been adequately characterized with translational technology in the human brain. The aim of the present study was to measure changes in oxy-hemoglobin concentration from regions of interest (ROI; auditory cortex) and non-ROI (adjacent nonauditory cortices) during auditory stimulation and silence in participants with subjective tinnitus appreciated equally in both ears and in nontinnitus controls using functional near-infrared spectroscopy (fNIRS). Control and tinnitus participants with normal/near-normal hearing were tested during a passive auditory task. Hemodynamic activity was monitored over ROI and non-ROI under episodic periods of auditory stimulation with 750 or 8000 Hz tones, broadband noise, and silence. During periods of silence, tinnitus participants maintained increased hemodynamic responses in ROI, while a significant deactivation was seen in controls. Interestingly, non-ROI activity was also increased in the tinnitus group as compared to controls during silence. The present results demonstrate that both auditory and select nonauditory cortices have elevated hemodynamic activity in participants with tinnitus in the absence of an external auditory stimulus, a finding that may reflect basic science neural correlates of tinnitus that ultimately contribute to phantom sound perception.

  12. Perceptual demand modulates activation of human auditory cortex in response to task-irrelevant sounds.

    Science.gov (United States)

    Sabri, Merav; Humphries, Colin; Verber, Matthew; Mangalathu, Jain; Desai, Anjali; Binder, Jeffrey R; Liebenthal, Einat

    2013-09-01

    In the visual modality, perceptual demand on a goal-directed task has been shown to modulate the extent to which irrelevant information can be disregarded at a sensory-perceptual stage of processing. In the auditory modality, the effect of perceptual demand on neural representations of task-irrelevant sounds is unclear. We compared simultaneous ERPs and fMRI responses associated with task-irrelevant sounds across parametrically modulated perceptual task demands in a dichotic-listening paradigm. Participants performed a signal detection task in one ear (Attend ear) while ignoring task-irrelevant syllable sounds in the other ear (Ignore ear). Results revealed modulation of syllable processing by auditory perceptual demand in an ROI in middle left superior temporal gyrus and in negative ERP activity 130-230 msec post stimulus onset. Increasing the perceptual demand in the Attend ear was associated with a reduced neural response in both fMRI and ERP to task-irrelevant sounds. These findings are in support of a selection model whereby ongoing perceptual demands modulate task-irrelevant sound processing in auditory cortex.

  13. Particle Filter with Binary Gaussian Weighting and Support Vector Machine for Human Pose Interpretation

    Directory of Open Access Journals (Sweden)

    Indah Agustien

    2010-10-01

    Full Text Available Human pose interpretation using a particle filter with binary Gaussian weighting and a support vector machine is proposed. In the proposed system, the particle filter is used to track the human object; this object is then skeletonized using a thinning algorithm and classified with a support vector machine. The classification identifies the human pose as either normal or abnormal behavior. Here the particle filter is modified through weight calculation using a Gaussian distribution to reduce the computational time. The modified particle filter consists of four main phases. First, particles are generated to predict the target's location. Second, the weights of certain particles are calculated and these particles are used to build a Gaussian distribution. Third, the weights of all particles are calculated based on the Gaussian distribution. Fourth, particles are updated based on each weight. The modified particle filter reduces the computational time of object tracking because it does not have to calculate each particle's weight one by one; instead, it builds a Gaussian distribution and calculates particle weights from this distribution. In experiments using video data taken in front of the cashier of a convenience store, the proposed method reduced the computational time of the tracking process by 68.34% on average compared with the conventional method, while its tracking accuracy remained comparable to the standard particle filter at 90.3%. The combination of a particle filter with binary Gaussian weighting and a support vector machine is promising for advanced early crime scene investigation.
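The four-phase weighting scheme described in this abstract can be illustrated with a toy 2D tracker. The sketch below is a minimal interpretation, not the authors' implementation: the observation model (`likelihood`), the probe-subset size, and all noise parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def likelihood(p, target):
    # Hypothetical observation model: closeness to the (unknown) target location.
    return np.exp(-0.5 * np.sum((p - target) ** 2) / 25.0)

def modified_particle_filter_step(particles, target, n_probe=10):
    """One step of the binary-Gaussian-weighting idea: only a few probe
    particles are evaluated against the observation; the remaining weights
    are read off a Gaussian fitted to those probes."""
    # Phase 1: predict -- diffuse particles with a random-walk motion model.
    particles = particles + rng.normal(0.0, 2.0, particles.shape)
    # Phase 2: evaluate the likelihood of a small probe subset only.
    probe_idx = rng.choice(len(particles), n_probe, replace=False)
    probe_w = np.array([likelihood(particles[i], target) for i in probe_idx])
    # Fit a Gaussian to the probe locations, weighted by their likelihoods.
    mean = np.average(particles[probe_idx], axis=0, weights=probe_w + 1e-12)
    var = np.average((particles[probe_idx] - mean) ** 2, axis=0,
                     weights=probe_w + 1e-12) + 1e-6
    # Phase 3: weight *all* particles from the fitted Gaussian
    # (no per-particle image evaluation).
    w = np.exp(-0.5 * np.sum((particles - mean) ** 2 / var, axis=1)) + 1e-12
    w /= w.sum()
    # Phase 4: resample (update) particles according to these weights.
    idx = rng.choice(len(particles), len(particles), p=w)
    return particles[idx], w

particles = rng.normal(0.0, 10.0, (200, 2))
target = np.array([5.0, -3.0])
for _ in range(20):
    particles, w = modified_particle_filter_step(particles, target)
print(particles.mean(axis=0))  # estimate drifts toward the target
```

The saving is exactly the one the abstract claims: the expensive observation model runs only `n_probe` times per frame instead of once per particle.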

  14. Histological Basis of Laminar MRI Patterns in High Resolution Images of Fixed Human Auditory Cortex

    Science.gov (United States)

    Wallace, Mark N.; Cronin, Matthew J.; Bowtell, Richard W.; Scott, Ian S.; Palmer, Alan R.; Gowland, Penny A.

    2016-01-01

    Functional magnetic resonance imaging (fMRI) studies of the auditory region of the temporal lobe would benefit from the availability of image contrast that allowed direct identification of the primary auditory cortex, as this region cannot be accurately located using gyral landmarks alone. Previous work has suggested that the primary area can be identified in magnetic resonance (MR) images because of its relatively high myelin content. However, MR images are also affected by the iron content of the tissue and in this study we sought to confirm that different MR image contrasts did correlate with the myelin content in the gray matter and were not primarily affected by iron content as is the case in the primary visual and somatosensory areas. By imaging blocks of fixed post-mortem cortex in a 7 T scanner and then sectioning them for histological staining we sought to assess the relative contribution of myelin and iron to the gray matter contrast in the auditory region. Evaluating the image contrast in T2*-weighted images and quantitative R2* maps showed a reasonably high correlation between the myelin density of the gray matter and the intensity of the MR images. The correlation with T1-weighted phase sensitive inversion recovery (PSIR) images was better than with the previous two image types, and there were clearly differentiated borders between adjacent cortical areas in these images. A significant amount of iron was present in the auditory region, but did not seem to contribute to the laminar pattern of the cortical gray matter in MR images. Similar levels of iron were present in the gray and white matter and although iron was present in fibers within the gray matter, these fibers were fairly uniformly distributed across the cortex. 
Thus, we conclude that T1- and T2*-weighted imaging sequences do demonstrate the relatively high myelin levels that are characteristic of the deep layers in primary auditory cortex and allow it and some of the surrounding areas to be identified.

  15. Sound identification in human auditory cortex: Differential contribution of local field potentials and high gamma power as revealed by direct intracranial recordings.

    Science.gov (United States)

    Nourski, Kirill V; Steinschneider, Mitchell; Rhone, Ariane E; Oya, Hiroyuki; Kawasaki, Hiroto; Howard, Matthew A; McMurray, Bob

    2015-09-01

    High gamma power has become the principal means of assessing auditory cortical activation in human intracranial studies, albeit at the expense of low frequency local field potentials (LFPs). It is unclear whether limiting analyses to high gamma impedes the ability to clarify auditory cortical organization. We compared the two measures obtained from posterolateral superior temporal gyrus (PLST) and evaluated their relative utility in sound categorization. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli (consonant-vowel syllables varying in voicing and place of articulation, and control tones) elicited robust evoked potentials and high gamma activity on PLST. LFPs had greater across-subject variability, yet yielded higher classification accuracy, relative to high gamma power. Classification was enhanced by including temporal detail of LFPs and combining LFP and high gamma. We conclude that future studies should consider utilizing both LFP and high gamma when investigating the functional organization of human auditory cortex. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Auditory streaming by phase relations between components of harmonic complexes: a comparative study of human subjects and bird forebrain neurons.

    Science.gov (United States)

    Dolležal, Lena-Vanessa; Itatani, Naoya; Günther, Stefanie; Klump, Georg M

    2012-12-01

    Auditory streaming describes a percept in which a sequential series of sounds either is segregated into different streams or is integrated into one stream based on differences in their spectral or temporal characteristics. This phenomenon has been analyzed in human subjects (psychophysics) and European starlings (neurophysiology), presenting harmonic complex (HC) stimuli with different phase relations between their frequency components. Such stimuli allow evaluating streaming by temporal cues, as these stimuli only vary in the temporal waveform but have identical amplitude spectra. The present study applied the commonly used ABA- paradigm (van Noorden, 1975) and matched stimulus sets in psychophysics and neurophysiology to evaluate the effects of fundamental frequency (f₀), frequency range (f(LowCutoff)), tone duration (TD), and tone repetition time (TRT) on streaming by phase relations of the HC stimuli. By comparing the percept of humans with rate or temporal responses of avian forebrain neurons, a neuronal correlate of perceptual streaming of HC stimuli is described. The differences in the pattern of the neurons' spike rate responses provide for a better explanation for the percept observed in humans than the differences in the temporal responses (i.e., the representation of the periodicity in the timing of the action potentials). Especially for HC stimuli with a short 40-ms duration, the differences in the pattern of the neurons' temporal responses failed to represent the patterns of human perception, whereas the neurons' rate responses showed a good match. These results suggest that differential rate responses are a better predictor for auditory streaming by phase relations than temporal responses.
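The stimulus property this record exploits, identical amplitude spectra with different temporal waveforms, can be demonstrated numerically. The sketch below builds two harmonic complexes whose components differ only in starting phase; the sample rate, f0, and harmonic count are illustrative choices, not the study's parameters.

```python
import numpy as np

fs = 48000        # sample rate (Hz); illustrative, not the study's value
f0 = 400.0        # fundamental frequency of the harmonic complex
n_harm = 10
t = np.arange(int(fs * 0.04)) / fs   # a 40-ms tone, as in the short-duration condition

def harmonic_complex(phases):
    """Sum of equal-amplitude harmonics of f0 with the given starting phases."""
    return sum(np.sin(2 * np.pi * f0 * (k + 1) * t + phases[k])
               for k in range(n_harm))

rng = np.random.default_rng(1)
hc_sine = harmonic_complex(np.zeros(n_harm))                   # all components in sine phase
hc_rand = harmonic_complex(rng.uniform(0, 2 * np.pi, n_harm))  # random component phases

# The two waveforms differ in shape ...
print(np.allclose(hc_sine, hc_rand))
# ... but their amplitude spectra are identical: only temporal cues distinguish them.
mag_sine = np.abs(np.fft.rfft(hc_sine))
mag_rand = np.abs(np.fft.rfft(hc_rand))
print(np.max(np.abs(mag_sine - mag_rand)))
```

Because the harmonics fall exactly on FFT bins here (400 Hz is a multiple of the 25 Hz bin spacing), the two magnitude spectra agree to numerical precision.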

  17. Bayer Filter Snapshot Hyperspectral Fundus Camera for Human Retinal Imaging.

    Science.gov (United States)

    Kaluzny, Joel; Li, Hao; Liu, Wenzhong; Nesper, Peter; Park, Justin; Zhang, Hao F; Fawzi, Amani A

    2017-04-01

    To demonstrate the versatility and performance of a compact Bayer filter snapshot hyperspectral fundus camera for in vivo clinical applications including retinal oximetry and macular pigment optical density measurements. Twelve healthy volunteers were recruited under an Institutional Review Board (IRB) approved protocol. Fundus images were taken with a custom hyperspectral camera with a spectral range of 460-630 nm. We determined retinal vascular oxygen saturation (sO2) for the healthy population using the captured spectra by least squares curve fitting. Additionally, macular pigment optical density was localized and visualized using multispectral reflectometry from selected wavelengths. We successfully determined the mean sO2 of arteries and veins of each subject (ages 21-80) with excellent intrasubject repeatability (1.4% standard deviation). The mean arterial sO2 for all subjects was 90.9% ± 2.5%, whereas the mean venous sO2 for all subjects was 64.5% ± 3.5%. The mean artery-vein (A-V) difference in sO2 varied between 20.5% and 31.9%. In addition, we were able to reveal and quantify macular pigment optical density. We demonstrated a single imaging tool capable of oxygen saturation and macular pigment density measurements in vivo. The unique combination of broad spectral range, high spectral-spatial resolution, rapid and robust imaging capability, and compact design make this system a valuable tool for multifunction spectral imaging that can be easily performed in a clinic setting.
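The least-squares determination of sO2 from a measured spectrum can be sketched as a two-component linear unmixing problem. The extinction values below are made-up placeholders, NOT tabulated hemoglobin data, and the fit is a generic interpretation of the abstract's "least squares curve fitting".

```python
import numpy as np

# Illustrative extinction spectra for oxy- and deoxy-hemoglobin at a few
# wavelengths (arbitrary invented values, not physiological tables).
wavelengths = np.array([460, 500, 540, 570, 600, 630])
eps_hbo2 = np.array([0.33, 0.21, 0.93, 0.75, 0.10, 0.05])
eps_hb   = np.array([0.55, 0.43, 0.81, 0.65, 0.45, 0.30])

def fit_so2(absorbance):
    """Least-squares fit of a measured absorbance spectrum as a mixture of the
    two hemoglobin species; sO2 is the oxygenated fraction of the total."""
    A = np.column_stack([eps_hbo2, eps_hb])
    c, *_ = np.linalg.lstsq(A, absorbance, rcond=None)
    return c[0] / (c[0] + c[1])

# Synthetic vessel spectrum at 90% saturation, plus a little measurement noise.
rng = np.random.default_rng(0)
true = 0.9 * eps_hbo2 + 0.1 * eps_hb
print(round(fit_so2(true + rng.normal(0, 0.005, len(wavelengths))), 2))  # near 0.9
```

With six wavelengths and two unknowns the system is overdetermined, which is what makes the fit robust to per-band noise.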

  18. Matched filtering determines human visual search in natural images

    NARCIS (Netherlands)

    Toet, A.

    2011-01-01

    The structural image similarity index (SSIM), introduced by Wang and Bovik (IEEE Signal Processing Letters 9-3, pp. 81-84, 2002) measures the similarity between images in terms of luminance, contrast and structure. It has successfully been deployed to model human visual perception of image

  19. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Directory of Open Access Journals (Sweden)

    Vincent Isnard

    Full Text Available Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.

  20. Auditory Sketches: Very Sparse Representations of Sounds Are Still Recognizable.

    Science.gov (United States)

    Isnard, Vincent; Taffou, Marine; Viaud-Delmon, Isabelle; Suied, Clara

    2016-01-01

    Sounds in our environment like voices, animal calls or musical instruments are easily recognized by human listeners. Understanding the key features underlying this robust sound recognition is an important question in auditory science. Here, we studied the recognition by human listeners of new classes of sounds: acoustic and auditory sketches, sounds that are severely impoverished but still recognizable. Starting from a time-frequency representation, a sketch is obtained by keeping only sparse elements of the original signal, here, by means of a simple peak-picking algorithm. Two time-frequency representations were compared: a biologically grounded one, the auditory spectrogram, which simulates peripheral auditory filtering, and a simple acoustic spectrogram, based on a Fourier transform. Three degrees of sparsity were also investigated. Listeners were asked to recognize the category to which a sketch sound belongs: singing voices, bird calls, musical instruments, and vehicle engine noises. Results showed that, with the exception of voice sounds, very sparse representations of sounds (10 features, or energy peaks, per second) could be recognized above chance. No clear differences could be observed between the acoustic and the auditory sketches. For the voice sounds, however, a completely different pattern of results emerged, with at-chance or even below-chance recognition performances, suggesting that the important features of the voice, whatever they are, were removed by the sketch process. Overall, these perceptual results were well correlated with a model of auditory distances, based on spectro-temporal excitation patterns (STEPs). This study confirms the potential of these new classes of sounds, acoustic and auditory sketches, to study sound recognition.
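A minimal version of the peak-picking step described above: keep only the largest time-frequency elements of a magnitude spectrogram, at a fixed number of features per second. The function and its parameters are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def sketch(spectrogram, frame_rate, feats_per_sec=10):
    """Keep only the largest time-frequency peaks (a fixed budget of
    'features per second') of a magnitude spectrogram and zero out the
    rest -- a simple stand-in for the sparsification described above."""
    n_frames = spectrogram.shape[1]
    n_keep = max(1, int(feats_per_sec * n_frames / frame_rate))
    thresh = np.sort(spectrogram.ravel())[-n_keep]   # n_keep-th largest value
    return np.where(spectrogram >= thresh, spectrogram, 0.0)

rng = np.random.default_rng(0)
spec = rng.random((64, 100))                          # 64 bins x 100 frames
sparse = sketch(spec, frame_rate=100, feats_per_sec=10)  # a 1-second excerpt
print(np.count_nonzero(sparse))                       # 10 surviving peaks
```

The sparsest condition in the study (10 features per second) corresponds to keeping 10 nonzero cells out of 6,400 here, which is what makes above-chance recognition of such sketches striking.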

  1. Distributed neural signatures of natural audiovisual speech and music in the human auditory cortex.

    Science.gov (United States)

    Salmi, Juha; Koistinen, Olli-Pekka; Glerean, Enrico; Jylänki, Pasi; Vehtari, Aki; Jääskeläinen, Iiro P; Mäkelä, Sasu; Nummenmaa, Lauri; Nummi-Kuisma, Katarina; Nummi, Ilari; Sams, Mikko

    2016-12-06

    During a conversation or when listening to music, auditory and visual information are combined automatically into audiovisual objects. However, it is still poorly understood how specific types of visual information shape neural processing of sounds in lifelike stimulus environments. Here we applied multi-voxel pattern analysis to investigate how naturally matching visual input modulates supratemporal cortex activity during processing of naturalistic acoustic speech, singing and instrumental music. Bayesian logistic regression classifiers with sparsity-promoting priors were trained to predict whether the stimulus was audiovisual or auditory, and whether it contained piano playing, speech, or singing. The predictive performances of the classifiers were tested by leaving out one participant at a time for testing and training the model on the remaining 15 participants. The signature patterns associated with unimodal auditory stimuli encompassed distributed locations mostly in the middle and superior temporal gyrus (STG/MTG). A pattern regression analysis, based on a continuous acoustic model, revealed that activity in some of these MTG and STG areas was associated with acoustic features present in speech and music stimuli. Concurrent visual stimulus modulated activity in bilateral MTG (speech), lateral aspect of right anterior STG (singing), and bilateral parietal opercular cortex (piano). Our results suggest that specific supratemporal brain areas are involved in processing complex natural speech, singing, and piano playing, and other brain areas located in anterior (facial speech) and posterior (music-related hand actions) supratemporal cortex are influenced by related visual information. Those anterior and posterior supratemporal areas have been linked to stimulus identification and sensory-motor integration, respectively. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Attentional influences on functional mapping of speech sounds in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Elbert Thomas

    2004-07-01

    Full Text Available Abstract Background The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker that are activated in synchrony but within different regions, suggesting that the 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.

  3. Animal models for auditory streaming.

    Science.gov (United States)

    Itatani, Naoya; Klump, Georg M

    2017-02-19

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons' response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'.

  4. Using Gaussian Process Annealing Particle Filter for 3D Human Tracking

    Directory of Open Access Journals (Sweden)

    Michael Rudzsky

    2008-01-01

    Full Text Available We present an approach for human body parts tracking in 3D with prelearned motion models using multiple cameras. A Gaussian process annealing particle filter is proposed for tracking in order to reduce the dimensionality of the problem and to increase the tracker's stability and robustness. Compared with a regular annealed particle filter-based tracker, our algorithm tracks better on low-frame-rate videos. We also show that our algorithm is capable of recovering after a temporary target loss.

  5. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  6. Disentangling the effects of phonation and articulation: Hemispheric asymmetries in the auditory N1m response of the human brain

    Directory of Open Access Journals (Sweden)

    Mäkinen Ville

    2005-10-01

    Full Text Available Abstract Background The cortical activity underlying the perception of vowel identity has typically been addressed by manipulating the first and second formant frequency (F1 & F2) of the speech stimuli. These two values, originating from articulation, are already sufficient for the phonetic characterization of vowel category. In the present study, we investigated how the spectral cues caused by articulation are reflected in cortical speech processing when combined with phonation, the other major part of speech production manifested as the fundamental frequency (F0) and its harmonic integer multiples. To study the combined effects of articulation and phonation we presented vowels with either high (/a/) or low (/u/) formant frequencies which were driven by three different types of excitation: a natural periodic pulseform reflecting the vibration of the vocal folds, an aperiodic noise excitation, or a tonal waveform. The auditory N1m response was recorded with whole-head magnetoencephalography (MEG) from ten human subjects in order to resolve whether brain events reflecting articulation and phonation are specific to the left or right hemisphere of the human brain. Results The N1m responses for the six stimulus types displayed a considerable dynamic range of 115–135 ms, and were elicited faster (~10 ms) by the high-formant /a/ than by the low-formant /u/, indicating an effect of articulation. While excitation type had no effect on the latency of the right-hemispheric N1m, the left-hemispheric N1m elicited by the tonally excited /a/ was some 10 ms earlier than that elicited by the periodic and the aperiodic excitation. The amplitude of the N1m in both hemispheres was systematically stronger to stimulation with natural periodic excitation. Also, stimulus type had a marked (up to 7 mm) effect on the source location of the N1m, with periodic excitation resulting in more anterior sources than aperiodic and tonal excitation. 
Conclusion The auditory brain areas

  7. Effects of Auditory Input in Individuation Tasks

    Science.gov (United States)

    Robinson, Christopher W.; Sloutsky, Vladimir M.

    2008-01-01

    Under many conditions auditory input interferes with visual processing, especially early in development. These interference effects are often more pronounced when the auditory input is unfamiliar than when the auditory input is familiar (e.g. human speech, pre-familiarized sounds, etc.). The current study extends this research by examining how…

  8. Deriving cochlear delays in humans using otoacoustic emissions and auditory evoked potentials

    DEFF Research Database (Denmark)

    Pigasse, Gilles

    A great deal of the processing of incoming sounds to the auditory system occurs within the cochlea. The organ of Corti within the cochlea has differing mechanical properties along its length that broadly gives rise to frequency selectivity. Its stiffness is at maximum at the base and decreases...... relation between frequency and travel time in the cochlea defines the cochlear delay. This delay is directly associated with the signal analysis occurring in the inner ear and is therefore of primary interest to get a better knowledge of this organ. It is possible to estimate the cochlear delay by direct...... and ASSR latency estimates demonstrated similar rates of latency decrease as a function of frequency. It was further concluded, in this thesis, that OAE measurements are the most appropriate to estimate cochlear delays, since they had the best repeatability and the shortest recording time. Preliminary...

  9. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe eTerrier

    2013-09-01

    Full Text Available It has been observed that time series of gait parameters (stride length (SL), stride time (ST), and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the responses of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) to rhythmic auditory cueing (RAC) were compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the reverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%), which is likely the indication of a more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
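The detrended fluctuation analysis (DFA) used here to assess statistical persistence can be sketched as follows. This is a textbook first-order DFA, not the authors' exact pipeline, and the window scales are arbitrary choices.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis: returns the scaling
    exponent alpha (slope of log F(n) vs log n). alpha ~ 0.5 for white
    noise, > 0.5 for persistent series, < 0.5 for anti-persistent series."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        f2 = []
        for i in range(n_seg):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)     # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))       # RMS fluctuation at scale n
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(0)
white = rng.normal(size=4096)                # stand-in for a stride-time series
alpha = dfa(white, [16, 32, 64, 128, 256])
print(round(alpha, 2))                       # close to 0.5 for white noise
```

Applied to stride-time series, alpha above 0.5 would indicate the persistent pattern of free walking, and alpha below 0.5 the anti-persistence reported under auditory cueing.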

  10. A Method for Designing FIR Filters with Arbitrary Magnitude Characteristic Used for Modeling Human Audiogram

    Directory of Open Access Journals (Sweden)

    SZOPOS, E.

    2012-05-01

    Full Text Available This paper presents an iterative method for designing FIR filters that implement arbitrary magnitude characteristics, defined by the user through a set of frequency-magnitude points (frequency samples). The proposed method is based on the non-uniform frequency sampling algorithm. For each iteration a new set of frequency samples is generated, by processing the set used in the previous run; this implies changing the samples' location around the previous frequency values and adjusting their magnitude through interpolation. If necessary, additional samples can be introduced as well. After each iteration the magnitude characteristic of the resulting filter is determined by using the non-uniform DFT and compared with the required one; if the errors are larger than the acceptable levels (set by the user), a new iteration is run; the length of the resulting filter and the values of its coefficients are also taken into consideration when deciding a re-run. To demonstrate the efficiency of the proposed method, a tool for designing FIR filters that match human audiograms was implemented in LabVIEW. It was shown that the resulting filters have smaller coefficients than the standard ones, and can also have lower order, while the errors remain relatively small.
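A non-iterative baseline for the frequency-sampling idea can be sketched as follows: interpolate the user's frequency-magnitude points onto a uniform grid, impose linear phase, and inverse-transform. This is a plain windowed frequency-sampling design for illustration only, without the paper's iterative relocation of samples; the example target and all parameters are invented.

```python
import numpy as np

def fir_from_samples(freqs, mags, numtaps, fs=2.0):
    """Design a linear-phase FIR filter from arbitrary (frequency, magnitude)
    points: interpolate onto a uniform grid, impose linear phase so the
    impulse response is real and symmetric, then inverse-DFT and window."""
    n_fft = 512
    grid = np.linspace(0, fs / 2, n_fft // 2 + 1)
    target = np.interp(grid, freqs, mags)            # desired magnitude on the grid
    delay = (numtaps - 1) / 2
    H = target * np.exp(-1j * 2 * np.pi * grid / fs * delay)
    h = np.fft.irfft(H, n_fft)[:numtaps]             # truncate to numtaps coefficients
    return h * np.hamming(numtaps)                   # window to tame truncation ripple

# Example: an audiogram-like target with a notch around half-Nyquist.
freqs = [0.0, 0.3, 0.5, 0.7, 1.0]   # normalized frequencies (Nyquist = 1)
mags = [1.0, 1.0, 0.1, 1.0, 1.0]
h = fir_from_samples(freqs, mags, numtaps=101)
resp = np.abs(np.fft.rfft(h, 1024))
print(resp[0])                       # passband gain near 1 at DC
```

The iterative method in the abstract would now compare `resp` against the target and regenerate the frequency samples; the sketch stops after the first pass.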

  11. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments.

  12. Human Auditory Communication Disturbances Due To Road Traffic Noise Pollution in Calabar City, Nigeria

    Directory of Open Access Journals (Sweden)

    E. O. Obisung

    2016-10-01

    Full Text Available A study on auditory communication disturbances due to road transportation noise in Calabar Urban City, Nigeria was carried out. Both subjective (psycho-social) and objective (acoustical) measurements were made over a period of twelve months. Questionnaire/interview schedules containing pertinent questions were administered randomly to 500 respondents aged 15 years and above, who also had a good level of literacy skills (reading and writing) and had been living in houses sited along or parallel to busy roads with heavy traffic volume for at least three (3) years. The questionnaires provided the psycho-social responses of respondents used in this study: their reactions to the effect of road traffic noise on communication activities (listening to radio, listening to and watching television, verbal communication between individuals, speech communication, and telephone/GSM communication). Acoustical measurements were made at the facades of respondents' houses facing the road using a precision digital sound level meter, Bruel and Kjaer (B & K) type 732, following ISO 1996 standards. The meter read the road traffic noise levels at the measurement sites (facades of respondents' houses). From the results obtained in this study, residents of Calabar City suffer serious communication interference as a result of excessive road traffic noise levels. The noise indices used for this study were LAeq and Ldn. Noise levels obtained were over 93 dB(A) (daytime) and 60 dB(A) (nighttime) for LAeq, and 80 dB(A) for Ldn. These far exceeded the recommended theoretical values of 45-55 and 70 dB(A) for LAeq and Ldn, respectively. A-weighted sound pressure levels (SPLs) ranged between 87.0 and 100.0 dB(A). In this study it was also observed that over 98% of the respondents reported their television watching/radio listening disturbed, 99% reported telephone/GSM communication disturbed, 98% reported face-to-face verbal conversation disturbed, and 98% reported speech communication disturbed. 
The background noise levels (BNLs) of

  13. Relevance of UV filter/sunscreen product photostability to human safety.

    Science.gov (United States)

    Nash, J Frank; Tanner, Paul R

    2014-01-01

    Photostability or photo-instability of sunscreen products is most often discussed in undesirable terms with respect to human safety. The health risks, specifically associated with sunscreens, photostable or photo-unstable, include phototoxic/photoirritation or photoallergic responses and, longer-term, an increased risk of skin cancers or photoageing. The aims of this paper are to define photostability/photo-instability and objectively assess the acute and chronic toxicological consequences from the human exposure to UV filter/sunscreens and any probable photo-degradation products. The reported prevalence of photoirritation and photoallergic responses to sunscreens is rare compared with adverse events, for example, skin irritation or sensitization, produced by cosmetics or topically applied drugs and do not directly implicate potential photo-degradation products of UV filters. Moreover, for at least one photo-unstable combination, octyl methoxycinnamate and avobenzone, the long-term benefits to humans, i.e., reduction in skin cancers, seem to outweigh any potential adverse consequences attributed to photo-degradation. Sunscreen products are formulated to achieve maximum efficacy which, by necessity and design, incorporate measures to support and promote photostability since all organic UV filters have the potential to photo-degrade. Current performance measures, in vivo SPF and in vitro UVA, conducted under standardized conditions, in part account for photostability. The concerns expressed when considering human exposure to potential photo-unstable UV filters or sunscreen products may not manifest as health risks under conditions of use. Still, improvement in sunscreen product photostability continues to be a key strategic objective for manufacturers. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. Neural adaptation to silence in the human auditory cortex: a magnetoencephalographic study.

    Science.gov (United States)

    Okamoto, Hidehiko; Kakigi, Ryusuke

    2014-01-01

    Previous studies demonstrated that a decrement in the N1m response, a major deflection in the auditory evoked response, with sound repetition was mainly caused by bottom-up driven neural refractory periods following brain activation due to sound stimulations. However, it currently remains unknown whether this decrement occurs with a repetition of silences, which do not induce refractoriness. In the present study, we investigated decrements in N1m responses elicited by five repetitive silences in a continuous pure tone and by five repetitive pure tones in silence using magnetoencephalography. Repetitive sound stimulation differentially affected the N1m decrement in a sound type-dependent manner; while the N1m amplitude decreased from the 1st to the 2nd pure tone and remained constant from the 2nd to the 5th pure tone in silence, a gradual decrement was observed in the N1m amplitude from the 1st to the 5th silence embedded in a continuous pure tone. Our results suggest that neural refractoriness may mainly cause decrements in N1m responses elicited by trains of pure tones in silence, while habituation, which is a form of the implicit learning process, may play an important role in the N1m source strength decrements elicited by successive silences in a continuous pure tone.

  15. GF-GC Theory of Human Cognition: Differentiation of Short-Term Auditory and Visual Memory Factors.

    Science.gov (United States)

    McGhee, Ron; Lieberman, Lewis

    1994-01-01

    Study sought to determine whether separate short-term auditory and visual memory factors would emerge given a sufficient number of markers in a factor matrix. A principal component factor analysis with varimax rotation was performed. Short-term visual and short-term auditory memory factors emerged as expected. (RJM)

  16. Evidence against global attention filters selective for absolute bar-orientation in human vision.

    Science.gov (United States)

    Inverso, Matthew; Sun, Peng; Chubb, Charles; Wright, Charles E; Sperling, George

    2016-01-01

    The finding that an item of type A pops out from an array of distractors of type B typically is taken to support the inference that human vision contains a neural mechanism that is activated by items of type A but not by items of type B. Such a mechanism might be expected to yield a neural image in which items of type A produce high activation and items of type B low (or zero) activation. Access to such a neural image might further be expected to enable accurate estimation of the centroid of an ensemble of items of type A intermixed with to-be-ignored items of type B. Here, it is shown that as the number of items in stimulus displays is increased, performance in estimating the centroids of horizontal (vertical) items amid vertical (horizontal) distractors degrades much more quickly and dramatically than does performance in estimating the centroids of white (black) items among black (white) distractors. Together with previous findings, these results suggest that, although human vision does possess bottom-up neural mechanisms sensitive to abrupt local changes in bar-orientation, and although human vision does possess and utilize top-down global attention filters capable of selecting multiple items of one brightness or of one color from among others, it cannot use a top-down global attention filter capable of selecting multiple bars of a given absolute orientation and filtering bars of the opposite orientation in a centroid task.

  17. Effects of pre- and postnatal exposure to the UV-filter Octyl Methoxycinnamate (OMC) on the reproductive, auditory and neurological development of rat offspring

    DEFF Research Database (Denmark)

    Petersen, Marta Axelstad; Boberg, Julie; Hougaard, Karin Sørig

    2011-01-01

    Octyl Methoxycinnamate (OMC) is a frequently used UV-filter in sunscreens and other cosmetics. The aim of the present study was to address the potential endocrine disrupting properties of OMC, and to investigate how OMC induced changes in thyroid hormone levels would be related to the neurologica...

  18. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  19. Non-linear laws of echoic memory and auditory change detection in humans

    Directory of Open Access Journals (Sweden)

    Takeshima Yasuyuki

    2010-07-01

    Full Text Available Abstract Background The detection of any abrupt change in the environment is important to survival. Since memory of preceding sensory conditions is necessary for detecting changes, such a change-detection system relates closely to the memory system. Here we used an auditory change-related N1 subcomponent (change-N1) of event-related brain potentials to investigate cortical mechanisms underlying change detection and echoic memory. Results Change-N1 was elicited by a simple paradigm with two tones, a standard followed by a deviant, while subjects watched a silent movie. The amplitude of change-N1 elicited by a fixed sound pressure deviance (70 dB vs. 75 dB) was negatively correlated with the logarithm of the interval between the standard sound and deviant sound (1, 10, 100, or 1000 ms), while positively correlated with the logarithm of the duration of the standard sound (25, 100, 500, or 1000 ms). The amplitude of change-N1 elicited by a deviance in sound pressure, sound frequency, and sound location was correlated with the logarithm of the magnitude of physical differences between the standard and deviant sounds. Conclusions The present findings suggest that temporal representation of echoic memory is non-linear and the Weber-Fechner law holds for the automatic cortical response to sound changes within a suprathreshold range. Since the present results show that the behavior of echoic memory can be understood through change-N1, change-N1 would be a useful tool to investigate memory systems.

  20. IMM Filter Based Human Tracking Using a Distributed Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2014-01-01

    Full Text Available This paper proposes a human tracking approach for a distributed wireless sensor network. Most efforts on human tracking focus on vision techniques; however, most vision-based approaches to moving-object detection involve intensive real-time computation. In this paper, we present an algorithm for human tracking using low-cost range wireless sensor nodes, which impose a lower computational burden in a distributed computing system, whereas a centralized computing system often delays information from the sensors. Because a human target often moves with high maneuverability, the proposed algorithm applies interacting multiple model (IMM) filter techniques together with a novel sensor node selection scheme that considers both tracking accuracy and energy cost, based on the tracking results of the IMM filter at each time step. This paper also proposes a novel sensor management scheme that manages the sensor nodes effectively during sensor node selection and the tracking process. Simulation results show that the proposed approach achieves superior tracking accuracy compared to the most recent human motion tracking schemes.
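
    The IMM filter named above runs a bank of Kalman filters, one per motion model, and blends their estimates through continuously updated mode probabilities, which is what lets it follow maneuvering targets. The sketch below shows one IMM cycle for a 1-D track; the two constant-velocity models differing only in process noise, the transition matrix `Pi`, and all parameter values are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def kf_step(x, P, z, F, Q, H, R):
    """One Kalman predict/update; returns state, covariance, and likelihood."""
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    y = z - H @ x_pred                          # innovation
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    lik = np.exp(-0.5 * y @ np.linalg.inv(S) @ y) / np.sqrt(np.linalg.det(2 * np.pi * S))
    return x_new, P_new, float(lik)

def imm_step(xs, Ps, mu, z, models, Pi):
    """One IMM cycle: mix estimates, filter per model, update mode probabilities."""
    M = len(models)
    # 1. Mixing: predicted mode probabilities and mixed initial conditions.
    c = Pi.T @ mu
    x_mix, P_mix = [], []
    for j in range(M):
        w = Pi[:, j] * mu / c[j]
        xm = sum(w[i] * xs[i] for i in range(M))
        Pm = sum(w[i] * (Ps[i] + np.outer(xs[i] - xm, xs[i] - xm)) for i in range(M))
        x_mix.append(xm)
        P_mix.append(Pm)
    # 2. Model-conditioned Kalman filtering.
    liks = np.empty(M)
    for j, (F, Q, H, R) in enumerate(models):
        xs[j], Ps[j], liks[j] = kf_step(x_mix[j], P_mix[j], z, F, Q, H, R)
    # 3. Mode-probability update and combined estimate.
    mu_new = liks * c
    mu_new /= mu_new.sum()
    x_hat = sum(mu_new[j] * xs[j] for j in range(M))
    return xs, Ps, mu_new, x_hat
```

    In a typical use, one model carries small process noise (steady walking) and the other large process noise (sudden maneuvers); the mode probabilities `mu` then shift toward whichever model currently explains the range measurements best.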

  1. The encoding of auditory objects in auditory cortex: insights from magnetoencephalography.

    Science.gov (United States)

    Simon, Jonathan Z

    2015-02-01

    Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Using frequency analysis to improve the precision of human body posture algorithms based on Kalman filters.

    Science.gov (United States)

    Olivares, Alberto; Górriz, J M; Ramírez, J; Olivares, G

    2016-05-01

    With the advent of miniaturized inertial sensors, many systems have been developed within the last decade to study and analyze human motion and posture, especially in the medical field. Data measured by the sensors are usually processed by algorithms based on Kalman filters in order to estimate the orientation of the body parts under study. These filters traditionally include fixed parameters, such as the process and observation noise variances, whose values have a large influence on overall performance. It has been demonstrated that the optimal values of these parameters differ considerably for different motion intensities. Therefore, in this work, we show that by applying frequency analysis to determine motion intensity, and varying the formerly fixed parameters accordingly, the overall precision of orientation estimation algorithms can be improved, thereby providing physicians with reliable objective data they can use in their daily practice.
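
    The idea described above, picking the noise variances from a frequency-domain estimate of motion intensity, can be sketched roughly as follows. The windowed-FFT intensity measure, the 2 Hz cutoff, and the variance range are all hypothetical stand-ins; the abstract does not specify the authors' actual mapping.

```python
import numpy as np

def motion_intensity(window, fs, cutoff_hz=2.0):
    """Fraction of spectral energy above cutoff_hz (0 = static, near 1 = vigorous)."""
    spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    total = spec.sum()
    return spec[freqs >= cutoff_hz].sum() / total if total > 0 else 0.0

def adaptive_kalman(zs, fs, q_low=1e-4, q_high=1e-1, r=0.05, win=32):
    """Scalar Kalman filter whose process noise Q is blended by motion intensity."""
    x, p, out = 0.0, 1.0, []
    for k, z in enumerate(zs):
        w = np.asarray(zs[max(0, k - win):k + 1])
        q = q_low + (q_high - q_low) * motion_intensity(w, fs)
        p += q                      # predict (identity dynamics)
        kgain = p / (p + r)         # update
        x += kgain * (z - x)
        p *= (1 - kgain)
        out.append(x)
    return np.array(out)
```

    During quiet standing the filter trusts its prediction (small Q, heavy smoothing); during fast movement Q grows so the estimate tracks the measurements, which is the trade-off the abstract argues fixed parameters cannot make.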

  3. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). Several methods for analyzing human error are in use, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), and new methods for human reliability analysis (HRA) are currently under development. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed with the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities, and it can be applied to any kind of operator action, including the severe accident management strategy.
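
    The time-based failure criterion above (the operator fails if the time required to complete the venting action exceeds the time available) reduces to a simple sampling estimate once both distributions are in hand. The sketch below substitutes plain Monte Carlo for the paper's MAAP/LHS-derived distributions; the lognormal and normal parameters are entirely hypothetical.

```python
import numpy as np

def hep_time_race(n=200_000, seed=0):
    """Estimate HEP = P(required time > available time) by sampling.

    Hypothetical distributions stand in for the MAAP/LHS results:
    required time ~ lognormal (median 20 min), available ~ normal (40 +/- 8 min).
    """
    rng = np.random.default_rng(seed)
    t_required = rng.lognormal(mean=np.log(20.0), sigma=0.4, size=n)   # minutes
    t_available = rng.normal(loc=40.0, scale=8.0, size=n)              # minutes
    return float(np.mean(t_required > t_available))
```

    With the dynamic method, tightening the available-time distribution (e.g., faster containment pressurization in the MAAP run) directly raises the estimated error probability, which is the coupling between plant dynamics and HRA that the paper emphasizes.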

  4. The specificity of stimulus-specific adaptation in human auditory cortex increases with repeated exposure to the adapting stimulus.

    Science.gov (United States)

    Briley, Paul M; Krumbholz, Katrin

    2013-12-01

    The neural response to a sensory stimulus tends to be more strongly reduced when the stimulus is preceded by the same, rather than a different, stimulus. This stimulus-specific adaptation (SSA) is ubiquitous across the senses. In hearing, SSA has been suggested to play a role in change detection as indexed by the mismatch negativity. This study sought to test whether SSA, measured in human auditory cortex, is caused by neural fatigue (reduction in neural responsiveness) or by sharpening of neural tuning to the adapting stimulus. For that, we measured event-related cortical potentials to pairs of pure tones with varying frequency separation and stimulus onset asynchrony (SOA). This enabled us to examine the relationship between the degree of specificity of adaptation as a function of frequency separation and the rate of decay of adaptation with increasing SOA. Using simulations of tonotopic neuron populations, we demonstrate that the fatigue model predicts independence of adaptation specificity and decay rate, whereas the sharpening model predicts interdependence. The data showed independence and thus supported the fatigue model. In a second experiment, we measured adaptation specificity after multiple presentations of the adapting stimulus. The multiple adapters produced more adaptation overall, but the effect was more specific to the adapting frequency. Within the context of the fatigue model, the observed increase in adaptation specificity could be explained by assuming a 2.5-fold increase in neural frequency selectivity. We discuss possible bottom-up and top-down mechanisms of this effect.

  5. Context-Dependent Encoding in the Human Auditory Brainstem Relates to Hearing Speech in Noise: Implications for Developmental Dyslexia

    National Research Council Canada - National Science Library

    Chandrasekaran, Bharath; Hornickel, Jane; Skoe, Erika; Nicol, Trent; Kraus, Nina

    2009-01-01

    We examined context-dependent encoding of speech in children with and without developmental dyslexia by measuring auditory brainstem responses to a speech syllable presented in a repetitive or variable context...

  6. Mutism and auditory agnosia due to bilateral insular damage--role of the insula in human communication.

    Science.gov (United States)

    Habib, M; Daquin, G; Milandre, L; Royere, M L; Rey, M; Lanteri, A; Salamon, G; Khalil, R

    1995-03-01

    We report a case of transient mutism and persistent auditory agnosia due to two successive ischemic infarcts mainly involving the insular cortex in both hemispheres. During the 'mutic' period, which lasted about 1 month, the patient did not respond to any auditory stimuli and made no effort to communicate. On follow-up examinations, language competences had re-appeared almost intact, but a massive auditory agnosia for non-verbal sounds was observed. From close inspection of the lesion site, as determined with magnetic resonance imaging, and from a study of auditory evoked potentials, it is concluded that bilateral insular damage was crucial to both the expressive and receptive components of the syndrome. The role of the insula in verbal and non-verbal communication is discussed in the light of anatomical descriptions of the pattern of connectivity of the insular cortex.

  7. Speech distortion measure based on auditory properties

    Institute of Scientific and Technical Information of China (English)

    CHEN Guo; HU Xiulin; ZHANG Yunyu; ZHU Yaoting

    2000-01-01

    The Perceptual Spectrum Distortion (PSD) measure, based on the auditory properties of human hearing, is presented for measuring speech distortion. The PSD measure calculates the speech distortion distance by simulating the auditory properties of human hearing and converting the short-time speech power spectrum to an auditory perceptual spectrum. Preliminary simulation experiments comparing it with the Itakura measure have been done. The results show that the PSD measure is a preferable speech distortion measure and is more consistent with subjective assessment of speech quality.
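
    The abstract does not spell out the PSD formula, so the following is only a generic sketch of the pipeline it describes: take the short-time power spectrum, warp it onto an auditory (Bark) frequency scale, apply a log (loudness-like) compression, and compare the resulting perceptual spectra by Euclidean distance. The band count and the Traunmüller Bark approximation are assumptions, not the paper's definition.

```python
import numpy as np

def bark(f_hz):
    """Traunmüller's approximation of the Bark scale."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def perceptual_spectrum(frame, fs, n_bands=18):
    """Power spectrum pooled into Bark-spaced bands, then log-compressed."""
    power = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    edges = np.linspace(bark(20.0), bark(fs / 2), n_bands + 1)
    bands = np.digitize(bark(freqs), edges) - 1
    spec = np.array([power[bands == b].sum() for b in range(n_bands)])
    return np.log(spec + 1e-12)          # crude loudness compression

def psd_distance(x, y, fs):
    """Hypothetical perceptual-spectrum distance between two frames."""
    px, py = perceptual_spectrum(x, fs), perceptual_spectrum(y, fs)
    return float(np.sqrt(np.mean((px - py) ** 2)))
```

    Identical frames score zero, and spectrally distant sounds score high, which is the qualitative behavior any perceptual distortion distance of this family must show.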

  8. Real-time Implementation of Auditory Gammatone Filter Bank with FPGA

    Institute of Scientific and Technical Information of China (English)

    贾瑞; 李冬梅

    2015-01-01

    The Gammatone Filter Bank (GTFB) is well suited to speech processing systems such as digital hearing aids, speech enhancement, and speech recognition. However, its high computational complexity and the non-real-time character of typical hardware syntheses restrict its practical application. In this paper, an efficient core computation unit structure is proposed. In addition, a low-complexity synthesis algorithm is proposed, and real-time operation is achieved by compensating the filter delay. Measurement results verify that a 128-band, high-efficiency, real-time, reconfigurable Gammatone Filter Bank has been realized on an FPGA. The design has only 20 ms of delay and improves the performance of a dual-microphone CASA speech enhancement system.
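
    For reference, the gammatone channels such a filter bank realizes can be sketched in a few lines of floating-point code. The 4th-order gammatone impulse response and the Glasberg-Moore ERB bandwidth formula are standard; the channel count, impulse-response length, and peak normalization below are illustrative choices, not the FPGA design's fixed-point parameters.

```python
import numpy as np

def erb(f_hz):
    """Equivalent rectangular bandwidth in Hz (Glasberg & Moore, 1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.032, order=4):
    """4th-order gammatone impulse response at center frequency fc."""
    t = np.arange(int(duration * fs)) / fs
    b = 1.019 * erb(fc)                          # bandwidth parameter
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))                 # crude peak normalization

def gammatone_bank(x, fs, fmin=100.0, fmax=4000.0, n_ch=16):
    """Filter x through n_ch gammatone channels spaced on an ERB-rate scale."""
    erb_rate = lambda f: 21.4 * np.log10(4.37e-3 * f + 1.0)
    inv_erb = lambda e: (10 ** (e / 21.4) - 1.0) / 4.37e-3
    fcs = inv_erb(np.linspace(erb_rate(fmin), erb_rate(fmax), n_ch))
    out = np.stack([np.convolve(x, gammatone_ir(fc, fs))[:len(x)] for fc in fcs])
    return out, fcs
```

    Feeding a pure tone through the bank concentrates energy in the channel whose center frequency is nearest the tone, which is the channel-selectivity property the hardware design must preserve after fixed-point quantization and delay compensation.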

  9. Dynamic movement of N100m current sources in auditory evoked fields: comparison of ipsilateral versus contralateral responses in human auditory cortex.

    Science.gov (United States)

    Jin, Chun Yu; Ozaki, Isamu; Suzuki, Yasumi; Baba, Masayuki; Hashimoto, Isao

    2008-04-01

    We recorded auditory evoked magnetic fields (AEFs) to monaural 400 Hz tone bursts and investigated spatio-temporal features of the N100m current sources in both hemispheres during the time before the N100m reaches its peak strength and 5 ms after the peak. Hemispheric asymmetry was evaluated with an asymmetry index based on the ratio of N100m peak dipole strength between the right and left hemispheres for stimulation of either ear. The asymmetry indices showed right-hemispheric dominance for left ear stimulation but no hemispheric dominance for right ear stimulation. The N100m current sources in both hemispheres in response to monaural 400 Hz stimulation moved in an anterolateral direction along the long axis of Heschl's gyrus before reaching peak strength; the ipsilateral N100m sources were located slightly posterior to the contralateral ones. The onset and peak latencies of the right-hemispheric N100m in response to right ear stimulation were shorter than those of the left-hemispheric N100m to left ear stimulation, and the traveling distance of the right-hemispheric N100m sources following right ear stimulation was longer than that of the left-hemispheric sources following left ear stimulation. These results suggest a right-dominant hemispheric asymmetry in pure-tone processing.

  10. Classification of Underwater Target Echoes Based on Auditory Perception Characteristics

    Institute of Scientific and Technical Information of China (English)

    Xiukun Li; Xiangxia Meng; Hang Liu; Mingye Liu

    2014-01-01

    In underwater target detection, the bottom reverberation shares some properties with the target echo, which greatly degrades detection performance, so it is essential to study the differences between target echo and reverberation. In this paper, motivated by the unique ability of human listening to distinguish objects, the Gammatone filter is taken as the auditory model, and time-frequency perception features and auditory spectral features are extracted to separate active sonar target echoes from bottom reverberation. On the experimental data, the features cluster tightly within each class and differ substantially between classes, showing that this method can effectively distinguish the target echo from reverberation.

  11. Mismatch responses in the awake rat: evidence from epidural recordings of auditory cortical fields.

    Directory of Open Access Journals (Sweden)

    Fabienne Jung

    Full Text Available Detecting sudden environmental changes is crucial for the survival of humans and animals. In the human auditory system the mismatch negativity (MMN), a component of auditory evoked potentials (AEPs), reflects the violation of predictable stimulus regularities established by the previous auditory sequence. Given the considerable potential of the MMN for clinical applications, establishing valid animal models that allow for detailed investigation of its neurophysiological mechanisms is important. Rodent studies, so far almost exclusively under anesthesia, have not provided decisive evidence on whether an MMN analogue exists in rats. This may be due to several factors, including the effect of anesthesia. We therefore used epidural recordings in awake black hooded rats, from two auditory cortical areas in both hemispheres, with bandpass-filtered noise stimuli that were optimized in frequency and duration for eliciting the MMN in rats. Using a classical oddball paradigm with frequency deviants, we detected mismatch responses at all four electrodes in primary and secondary auditory cortex, with morphological and functional properties similar to those known in humans, i.e., large-amplitude biphasic differences that increased in amplitude with decreasing deviant probability. These mismatch responses diminished significantly in a control condition that removed the predictive context while controlling for the presentation rate of the deviants. While our present study does not allow for disambiguating precisely the relative contribution of adaptation and prediction error processing to the observed mismatch responses, it demonstrates that MMN-like potentials can be obtained in awake and unrestrained rats.

  12. Spatiotemporal Filter for Visual Motion Integration from Pursuit Eye Movements in Humans and Monkeys.

    Science.gov (United States)

    Mukherjee, Trishna; Liu, Bing; Simoncini, Claudio; Osborne, Leslie C

    2017-02-08

    Despite the enduring interest in motion integration, a direct measure of the space-time filter that the brain imposes on a visual scene has been elusive. This is perhaps because of the challenge of estimating a 3D function from perceptual reports in psychophysical tasks. We take a different approach. We exploit the close connection between visual motion estimates and smooth pursuit eye movements to measure stimulus-response correlations across space and time, computing the linear space-time filter for global motion direction in humans and monkeys. Although derived from eye movements, we find that the filter predicts perceptual motion estimates quite well. To distinguish visual from motor contributions to the temporal duration of the pursuit motion filter, we recorded single-unit responses in the monkey middle temporal cortical area (MT). We find that pursuit response delays are consistent with the distribution of cortical neuron latencies and that temporal motion integration for pursuit is consistent with a short integration MT subpopulation. Remarkably, the visual system appears to preferentially weight motion signals across a narrow range of foveal eccentricities rather than uniformly over the whole visual field, with a transiently enhanced contribution from locations along the direction of motion. We find that the visual system is most sensitive to motion falling at approximately one-third the radius of the stimulus aperture. Hypothesizing that the visual drive for pursuit is related to the filtered motion energy in a motion stimulus, we compare measured and predicted eye acceleration across several other target forms.SIGNIFICANCE STATEMENT A compact model of the spatial and temporal processing underlying global motion perception has been elusive. We used visually driven smooth eye movements to find the 3D space-time function that best predicts both eye movements and perception of translating dot patterns. 
We found that the visual system does not appear to use

  13. A general auditory bias for handling speaker variability in speech? Evidence in humans and songbirds

    NARCIS (Netherlands)

    Kriengwatana, B.; Escudero, P.; Kerkhoven, A.H.; ten Cate, C.

    2015-01-01

    Different speakers produce the same speech sound differently, yet listeners are still able to reliably identify the speech sound. How listeners can adjust their perception to compensate for speaker differences in speech, and whether these compensatory processes are unique only to humans, is still

  14. BDNF Increases Survival and Neuronal Differentiation of Human Neural Precursor Cells Cotransplanted with a Nanofiber Gel to the Auditory Nerve in a Rat Model of Neuronal Damage

    Directory of Open Access Journals (Sweden)

    Yu Jiao

    2014-01-01

    Full Text Available Objectives. To study possible nerve regeneration of a damaged auditory nerve by the use of stem cell transplantation. Methods. We transplanted HNPCs to the rat AN trunk by the internal auditory meatus (IAM). Furthermore, we studied whether the addition of BDNF affects survival and phenotypic differentiation of the grafted HNPCs. A bioactive nanofiber gel (PA gel), in selected groups mixed with BDNF, was applied close to the implanted cells. Before transplantation, all rats had been deafened by a round window niche application of β-bungarotoxin. This neurotoxin causes a selective toxic destruction of the AN while keeping the hair cells intact. Results. Overall, HNPCs survived well for up to six weeks in all groups. However, transplants receiving the BDNF-containing PA gel demonstrated significantly higher numbers of HNPCs and more neuronal differentiation. At six weeks, a majority of the HNPCs had migrated into the brain stem and differentiated. Differentiated human cells as well as neurites were observed in the vicinity of the cochlear nucleus. Conclusion. Our results indicate that human neural precursor cell (HNPC) integration with host tissue benefits from additional brain-derived neurotrophic factor (BDNF) treatment and that these cells appear to be good candidates for further regenerative studies on the auditory nerve (AN).

  15. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex.

    Science.gov (United States)

    Scott, Gregory D; Karns, Christina M; Dow, Mark W; Stevens, Courtney; Neville, Helen J

    2014-01-01

    Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral vs. perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral vs. perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory, and multisensory and/or supramodal regions, such as posterior parietal cortex (PPC), frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal, and multisensory regions, to altered visual processing in congenitally deaf adults.

  16. Enhanced peripheral visual processing in congenitally deaf humans is supported by multiple brain regions, including primary auditory cortex

    Directory of Open Access Journals (Sweden)

    Gregory D. Scott

    2014-03-01

    Full Text Available Brain reorganization associated with altered sensory experience clarifies the critical role of neuroplasticity in development. An example is enhanced peripheral visual processing associated with congenital deafness, but the neural systems supporting this have not been fully characterized. A gap in our understanding of deafness-enhanced peripheral vision is the contribution of primary auditory cortex. Previous studies of auditory cortex that use anatomical normalization across participants were limited by inter-subject variability of Heschl's gyrus. In addition to reorganized auditory cortex (cross-modal plasticity), a second gap in our understanding is the contribution of altered modality-specific cortices (visual intramodal plasticity in this case), as well as supramodal and multisensory cortices, especially when target detection is required across contrasts. Here we address these gaps by comparing fMRI signal change for peripheral versus perifoveal visual stimulation (11-15° vs. 2-7°) in congenitally deaf and hearing participants in a blocked experimental design with two analytical approaches: a Heschl's gyrus region of interest analysis and a whole brain analysis. Our results using individually-defined primary auditory cortex (Heschl's gyrus) indicate that fMRI signal change for more peripheral stimuli was greater than perifoveal in deaf but not in hearing participants. Whole-brain analyses revealed differences between deaf and hearing participants for peripheral versus perifoveal visual processing in extrastriate visual cortex including primary auditory cortex, MT+/V5, superior-temporal auditory and multisensory and/or supramodal regions, such as posterior parietal cortex, frontal eye fields, anterior cingulate, and supplementary eye fields. Overall, these data demonstrate the contribution of neuroplasticity in multiple systems including primary auditory cortex, supramodal and multisensory regions, to altered visual processing in congenitally deaf adults.

  17. Techniques and applications for binaural sound manipulation in human-machine interfaces

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
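    The HRTF filtering technique described above can be sketched in miniature: a mono signal is convolved with a left-ear and a right-ear impulse response. The toy HRIRs below (a unit impulse and a delayed, attenuated impulse) are hypothetical stand-ins that model only interaural time and level differences; real systems convolve with measured head-related impulse responses.

```python
import numpy as np

# Toy binaural rendering: convolve a mono tone with hypothetical HRIRs
# modeling a source on the listener's left (near ear direct, far ear
# delayed and attenuated). These are NOT measured HRTFs.
fs = 44100
t = np.arange(0, 0.01, 1 / fs)
mono = np.sin(2 * np.pi * 1000 * t)            # 1 kHz tone burst

itd_samples = 30                               # ~0.68 ms interaural time delay
hrir_left = np.zeros(64)
hrir_left[0] = 1.0                             # near ear: direct, full level
hrir_right = np.zeros(64)
hrir_right[itd_samples] = 0.5                  # far ear: delayed, attenuated

left = np.convolve(mono, hrir_left)
right = np.convolve(mono, hrir_right)
binaural = np.stack([left, right], axis=1)     # (samples, 2) stereo buffer
```

    Played over headphones, such a buffer lateralizes the tone toward the left ear; replacing the toy impulses with measured HRIRs yields full three-dimensional placement.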

  18. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory.

  19. Osteocyte apoptosis and absence of bone remodeling in human auditory ossicles and scleral ossicles of lower vertebrates: a mere coincidence or linked processes?

    Science.gov (United States)

    Palumbo, Carla; Cavani, Francesco; Sena, Paola; Benincasa, Marta; Ferretti, Marzia

    2012-03-01

    Considering the pivotal role ascribed to osteocytes as bone mechanosensors in bone adaptation to mechanical strains, the present study analyzed whether a correlation exists between osteocyte apoptosis and bone remodeling in peculiar bones, such as human auditory ossicles and scleral ossicles of lower vertebrates, which have been shown to undergo substantial osteocyte death and trivial or no bone turnover after cessation of growth. The investigation was performed with a morphological approach under LM (by means of an in situ end-labeling technique) and TEM. The results show that a large amount of osteocyte apoptosis takes place in both auditory and scleral ossicles after they reach their final size. Additionally, no morphological signs of bone remodeling were observed. These facts suggest that (1) bone remodeling is not necessarily triggered by osteocyte death, at least in these ossicles, and (2) bone remodeling is not needed to mechanically adapt auditory and scleral ossicles, since they appear to be continuously subjected to stereotyped stresses and strains; on the contrary, during the resorption phase, bone remodeling might severely impair the mechanical resistance of extremely small bony segments. Thus, osteocyte apoptosis could represent a programmed process devoted to stabilizing bone structure and mechanical resistance when needed.

  20. Survival of human embryonic stem cells implanted in the guinea pig auditory epithelium

    Science.gov (United States)

    Young Lee, Min; Hackelberg, Sandra; Green, Kari L.; Lunghamer, Kelly G.; Kurioka, Takaomi; Loomis, Benjamin R.; Swiderski, Donald L.; Duncan, R. Keith; Raphael, Yehoash

    2017-01-01

    Hair cells in the mature cochlea cannot spontaneously regenerate. One potential approach for restoring hair cells is stem cell therapy. However, when cells are transplanted into scala media (SM) of the cochlea, they promptly die due to the high potassium concentration. We previously described a method for conditioning the SM to make it more hospitable to implanted cells and showed that HeLa cells could survive for up to a week using this method. Here, we evaluated the survival of human embryonic stem cells (hESC) constitutively expressing GFP (H9 Cre-LoxP) in deaf guinea pig cochleae that were pre-conditioned to reduce potassium levels. GFP-positive cells could be detected in the cochlea for at least 7 days after the injection. The cells appeared spherical or irregularly shaped, and some were aggregated. Flushing SM with sodium caprate prior to transplantation resulted in a lower proportion of stem cells expressing the pluripotency marker Oct3/4 and increased cell survival. The data demonstrate that conditioning procedures aimed at transiently reducing the concentration of potassium in the SM facilitate survival of hESCs for at least one week. During this time window, additional procedures can be applied to initiate the differentiation of the implanted hESCs into new hair cells. PMID:28387239

  1. Tracking human position and lower body parts using Kalman and particle filters constrained by human biomechanics.

    Science.gov (United States)

    Martinez del Rincon, Jesús; Makris, Dimitrios; Orrite Urunuela, Carlos; Nebel, Jean-Christophe

    2011-02-01

    In this paper, a novel framework for visual tracking of human body parts is introduced. The approach presented demonstrates the feasibility of recovering human poses with data from a single uncalibrated camera by using a limb-tracking system based on a 2-D articulated model and a double-tracking strategy. Its key contribution is that the 2-D model is only constrained by biomechanical knowledge about human bipedal motion, instead of relying on constraints that are linked to a specific activity or camera view. These characteristics make our approach suitable for real visual surveillance applications. Experiments on a set of indoor and outdoor sequences demonstrate the effectiveness of our method on tracking human lower body parts. Moreover, a detailed comparison with current tracking methods is presented.

  2. A Wearable-based and Markerless Human-manipulator Interface with Feedback Mechanism and Kalman Filters

    Directory of Open Access Journals (Sweden)

    Ping Zhang

    2015-11-01

    Full Text Available The objective of this paper is to develop a novel human-manipulator interface which incorporates wearable-based and markerless tracking to interact with the continuous movements of a human operator’s hand. Unlike traditional approaches, which usually include contacting devices or physical markers to track the human-limb movements, this interface enables registration of natural movement through a wireless wearable watch and a leap motion sensor. Due to sensor error and tracking failure, the measurements are not made with sufficient accuracy. Two Kalman filters are employed to compensate for the noisy and incomplete measurements in real time. Furthermore, because perceptive limitations and abnormal state signals prevent the operator from achieving high precision and efficiency in robot manipulation, an adaptive multispace transformation method (AMT) is introduced as a secondary treatment. In addition, in order to allow two-way human-robot interaction, the proposed method provides a vibration feedback mechanism triggered by the wearable watch to call the operator’s attention to robot collision incidents or moments when the operator’s hand is in a transboundary state. This improves teleoperation.

  3. Relationships between observer and Kalman Filter models for human dynamic spatial orientation.

    Science.gov (United States)

    Selva, Pierre; Oman, Charles M

    2012-01-01

    How does the central nervous system (CNS) combine sensory information from semicircular canal, otolith, and visual systems into perceptions of rotation, translation and tilt? Over the past four decades, a variety of input-output ("black box") mathematical models have been proposed to predict human dynamic spatial orientation perception and eye movements. The models have proved useful in vestibular diagnosis, aircraft accident investigation, and flight simulator design. Experimental refinement continues. This paper briefly reviews the history of two widely known model families, the linear "Kalman Filter" and the nonlinear "Observer". Recent physiologic data supports the internal model assumptions common to both. We derive simple 1-D and 3-D examples of each model for vestibular inputs, and show why - despite apparently different structure and assumptions - the linearized model predictions are dynamically equivalent when the four free model parameters are adjusted to fit the same empirical data, and perceived head orientation remains near upright. We argue that the motion disturbance and sensor noise spectra employed in the Kalman Filter formulation may reflect normal movements in daily life and perceptual thresholds, and thus justify the interpretation that the CNS cue blending scheme may well minimize least squares angular velocity perceptual errors.
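    The cue-blending idea reviewed above can be illustrated with a minimal scalar Kalman filter; this is a generic sketch under assumed noise variances Q and R, not the 1-D or 3-D vestibular models derived in the paper.

```python
import numpy as np

# Minimal 1-D Kalman filter sketch (illustrative only): estimate a
# constant angular velocity from noisy canal-like measurements.
# Q (process) and R (measurement) noise variances are assumptions.
rng = np.random.default_rng(0)
true_omega = 5.0                                 # deg/s, constant
z = true_omega + rng.normal(0.0, 2.0, 200)       # noisy sensor samples

x, P = 0.0, 1e3                                  # initial estimate and variance
Q, R = 1e-5, 4.0
for zk in z:
    P += Q                                       # predict (random-walk state)
    K = P / (P + R)                              # Kalman gain
    x += K * (zk - x)                            # update with the innovation
    P *= (1.0 - K)                               # posterior variance
```

    The gain K weights the innovation by the relative confidence in prediction versus measurement; tuning Q and R to motion-disturbance and sensor-noise spectra is exactly the least-squares cue blending the abstract describes.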

  4. Functional Magnetic Resonance Imaging Measures of Blood Flow Patterns in the Human Auditory Cortex in Response to Sound.

    Science.gov (United States)

    Huckins, Sean C.; Turner, Christopher W.; Doherty, Karen A.; Fonte, Michael M.; Szeverenyi, Nikolaus M.

    1998-01-01

    This study examined the feasibility of using functional magnetic resonance imaging (fMRI) in auditory research by testing the reliability of scanning parameters using high resolution and high signal-to-noise ratios. Findings indicated reproducibility within and across listeners for consonant-vowel speech stimuli and reproducible results within and…

  5. Evaluation of Evidence for Altered Behavior and Auditory Deficits in Fishes Due to Human-Generated Noise Sources

    Science.gov (United States)

    2006-04-01

    Rutilus rutilus). Some of the roach were exposed to cobalt, which reversibly blocks the responsiveness of lateral line receptors (Karlsen and Sand...cartilaginous fishes, such as pelagic and benthic sharks, skates, and rays, since their auditory systems have potentially important variations in

  6. Physiological activation of the human cerebral cortex during auditory perception and speech revealed by regional increases in cerebral blood flow

    DEFF Research Database (Denmark)

    Lassen, N A; Friberg, L

    1988-01-01

    by measuring regional cerebral blood flow (CBF) after intracarotid Xenon-133 injection are reviewed, with emphasis on tests involving auditory perception and speech, an approach allowing visualization of Wernicke's and Broca's areas and their contralateral homologues in vivo. The completely atraumatic tomographic CBF

  7. Collecting Protein Biomarkers in Breath Using Electret Filters: A Preliminary Method on New Technical Model and Human Study.

    Directory of Open Access Journals (Sweden)

    Wang Li

    Full Text Available Biomarkers in exhaled breath are useful for respiratory disease diagnosis in human volunteers. Conventional methods for collecting non-volatile biomarkers, however, require extensive dilution and sanitation processes that lower collection efficiency and convenience of use. Electret filters have emerged in the last decade as a simple and effective means of collecting virus biomarkers in exhaled breath. To investigate the capability of electret filters to collect protein biomarkers, a model was developed consisting of an atomizer that produces protein aerosol and an electret filter that collects albumin and carcinoembryonic antigen (a typical biomarker in lung cancer development) from the atomizer. A device using an electret filter as the collecting medium was designed to collect human albumin from the exhaled breath of 6 volunteers. The collecting ability of the electret filter method was finally compared with that of 2 other reported methods, based on the amounts of albumin collected from human exhaled breath. A decreasing collection efficiency ranging from 17.6% to 2.3% for atomized albumin aerosol and from 42% to 12.5% for atomized carcinoembryonic antigen particles was found; moreover, an optimum sampling volume of human exhaled breath ranging from 100 L to 200 L was observed; finally, the self-designed collecting device showed significantly better performance in collecting albumin from human exhaled breath than the exhaled breath condensate method (p < 0.05). In summary, electret filters show potential for collecting non-volatile biomarkers in human exhaled breath, not only because they are simpler, cheaper, and easier to use than traditional methods but also because of their better collection performance.

  8. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, and to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of the cochlear microphonic response, plus a function generator that provided stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60 dB pure-tone acoustic stimulation, we obtained a microphonic response at 20 dB. We then caused fatigue with 100 dB at the same frequency, reaching a loss of approximately 11 dB after 15 minutes; after that, the deterioration slowed and did not exceed 15 dB. With complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100 dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical desensitisation changes caused by exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  9. A cascaded two-step Kalman filter for estimation of human body segment orientation using MEMS-IMU.

    Science.gov (United States)

    Zihajehzadeh, S; Loh, D; Lee, M; Hoskinson, R; Park, E J

    2014-01-01

    Orientation of human body segments is an important quantity in many biomechanical analyses. To get robust and drift-free 3-D orientation, raw data from miniature body worn MEMS-based inertial measurement units (IMU) should be blended in a Kalman filter. Aiming at less computational cost, this work presents a novel cascaded two-step Kalman filter orientation estimation algorithm. Tilt angles are estimated in the first step of the proposed cascaded Kalman filter. The estimated tilt angles are passed to the second step of the filter for yaw angle calculation. The orientation results are benchmarked against the ones from a highly accurate tactical grade IMU. Experimental results reveal that the proposed algorithm provides robust orientation estimation in both kinematically and magnetically disturbed conditions.
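    A minimal sketch of the cascaded two-step idea, under simplifying assumptions (static pose, zero roll, scalar states, a level magnetic field): step 1 fuses gyroscope and accelerometer data in a Kalman filter to estimate tilt; step 2 tilt-compensates a magnetometer reading to compute yaw. Sensor models and noise values here are illustrative, not the paper's.

```python
import numpy as np

# Step 1: scalar Kalman filter for pitch (tilt) fusing gyro + accelerometer.
# Step 2: yaw from a tilt-compensated magnetometer. Assumed noise levels.
dt = 0.01
rng = np.random.default_rng(1)
true_pitch = np.deg2rad(10.0)                 # static pose, roll assumed zero
true_yaw = np.deg2rad(30.0)

gyro = rng.normal(0.0, 0.01, 500)             # rad/s; true pitch rate is zero
acc_pitch = true_pitch + rng.normal(0.0, 0.05, 500)  # pitch implied by gravity

x, P = 0.0, 1.0                               # pitch estimate and variance
Q, R = (0.01 * dt) ** 2, 0.05 ** 2
for w, z in zip(gyro, acc_pitch):
    x, P = x + w * dt, P + Q                  # predict by integrating the gyro
    K = P / (P + R)                           # Kalman gain
    x, P = x + K * (z - x), (1 - K) * P       # correct with the accelerometer

# Body-frame field for a horizontal "north" reference, given pitch and yaw.
m = np.array([np.cos(true_pitch) * np.cos(true_yaw),
              -np.sin(true_yaw),
              np.sin(true_pitch) * np.cos(true_yaw)])
mx_h = m[0] * np.cos(x) + m[2] * np.sin(x)    # rotate back through estimated tilt
yaw = np.arctan2(-m[1], mx_h)                 # step 2: heading from magnetometer
```

    Passing the tilt estimate forward rather than estimating all three angles jointly is what keeps the cascaded structure cheaper than a full-state filter.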

  10. Adaptation of Gabor filters for simulation of human preattentive mechanism for a mobile robot

    Science.gov (United States)

    Kulkarni, Naren; Naghdy, Golshah A.

    1993-08-01

    Vision-guided mobile robot navigation is complex and requires analysis of tremendous amounts of information in real time. In order to simplify the task and reduce the amount of information, the human preattentive mechanism can be adapted [Nag90]. During the preattentive search the scene is analyzed rapidly but in sufficient detail for attention to be focused on the 'area of interest.' The 'area of interest' can then be scrutinized in more detail for recognition purposes; it can be, for example, a text message that facilitates navigation. Gabor filters and an automated turning mechanism are used to isolate the 'area of interest.' These regions are subsequently processed with optimal spatial resolution for perception tasks. This method has clear advantages over global operators in that, after an initial search, it scans each region of interest with optimum resolution. This reduces the volume of information passed to the recognition stages and ensures that no region is over- or underestimated.
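    A Gabor filter of the kind used for such preattentive search is an oriented sinusoid under a Gaussian envelope. The sketch below (illustrative parameters, not those of the paper) shows how a filter tuned to vertical structure responds far more strongly to vertical stripes, the property that lets a bank of such filters flag a candidate 'area of interest'.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real 2-D Gabor: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

# Synthetic scene: vertical stripes (text-like high-frequency detail).
xx = np.arange(64)
img = np.sin(2 * np.pi * xx / 8)[None, :].repeat(64, axis=0)

g_vert = gabor_kernel(15, 8, 0.0, 4.0)               # tuned to vertical stripes
g_horz = gabor_kernel(15, 8, np.pi / 2, 4.0)         # tuned to horizontal

# Response energy at the image centre (inner product with one patch).
patch = img[24:39, 24:39]
r_vert = abs((patch * g_vert).sum())
r_horz = abs((patch * g_horz).sum())
```

    Sliding a bank of such kernels over the image at several orientations and scales, and thresholding the response energy, yields the coarse saliency map that drives the preattentive stage.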

  11. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Full Text Available Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is the most common example of perceiving a wrong stimulus or, more precisely, perceiving an absent stimulus. Here we discuss four definitions of hallucinations: (1) perception of a stimulus without the presence of any object; (2) hallucination proper: wrong perceptions that are not falsifications of a real perception, although they manifest as a new object and occur along with, and synchronously with, a real perception; (3) hallucination as an out-of-body perception with no correspondence to a real object; and (4) in the strictest sense, perceptions in a conscious and awake state, in the absence of external stimuli, that have the qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  12. The discovery of human auditory-motor entrainment and its role in the development of neurologic music therapy.

    Science.gov (United States)

    Thaut, Michael H

    2015-01-01

    The discovery of rhythmic auditory-motor entrainment in clinical populations was a historical breakthrough in demonstrating for the first time a neurological mechanism linking music to retraining brain and behavioral functions. Early pilot studies from this research center were followed up by a systematic line of research studying rhythmic auditory stimulation on motor therapies for stroke, Parkinson's disease, traumatic brain injury, cerebral palsy, and other movement disorders. The comprehensive effects on improving multiple aspects of motor control established the first neuroscience-based clinical method in music, which became the bedrock for the later development of neurologic music therapy. The discovery of entrainment fundamentally shifted and extended the view of the therapeutic properties of music from a psychosocially dominated view to a view using the structural elements of music to retrain motor control, speech and language function, and cognitive functions such as attention and memory. © 2015 Elsevier B.V. All rights reserved.

  13. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    Directory of Open Access Journals (Sweden)

    R. Chellappa

    2008-03-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The “shape filter” has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For a complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.
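    The particle-filter machinery underlying this approach can be sketched generically: predict each particle with a motion model, weight it by a measurement likelihood, and resample. Here a simple Gaussian likelihood stands in for the paper's shape-encoded image weights, so this is an illustration of the bootstrap filter, not of the shape filter itself.

```python
import numpy as np

# Generic bootstrap particle filter for a 1-D position. The Gaussian
# measurement likelihood below replaces the paper's shape-encoded
# image weights; motion and noise parameters are assumptions.
rng = np.random.default_rng(2)
N = 1000
particles = rng.normal(0.0, 1.0, N)              # initial belief around x = 0
true_x = 0.0

for _ in range(50):
    true_x += 0.5                                # object moves 0.5 units/frame
    z = true_x + rng.normal(0.0, 0.3)            # noisy observation
    particles += 0.5 + rng.normal(0.0, 0.2, N)   # predict: motion + diffusion
    w = np.exp(-0.5 * ((z - particles) / 0.3) ** 2)
    w /= w.sum()                                 # normalized importance weights
    particles = particles[rng.choice(N, N, p=w)] # multinomial resampling

estimate = particles.mean()                      # posterior-mean state estimate
```

    In the paper, the per-particle weight would instead come from evaluating the shape-encoded filter on the image at each particle's projected 2D pose, and the diffusion magnitude would be adapted by the Kalman covariance update.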

  14. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    Directory of Open Access Journals (Sweden)

    Chellappa R

    2008-01-01

    Full Text Available We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the design of smoothing filters for feature extraction combined with the adoption of unnormalized conditional density weights. The "shape filter" has the overall form of the predicted 2D projection of the 3D model, while the cross-section of the filter is designed to collect the gradient responses along the shape. The 3D-model-based representation is designed to emphasize the changes in 2D object shape due to motion, while de-emphasizing the variations due to lighting and other imaging conditions. We have found that the set of sparse measurements using a relatively small number of particles is able to approximate the high-dimensional state distribution very effectively. As a measure to stabilize the tracking, the amount of random diffusion is effectively adjusted using a Kalman updating of the covariance matrix. For a complex problem of human body tracking, we have successfully employed constraints derived from joint angles and walking motion.

  15. Using the Auditory Hazard Assessment Algorithm for Humans (AHAAH) With Hearing Protection Software, Release MIL-STD-1474E

    Science.gov (United States)

    2013-12-01

    Conservation Conference, Albuquerque, NM, 10 pp (invited paper). Price, G. R. (1998) “Modeling impulse noise susceptibility in marine mammals ...Invited presentation to USNRL workshop on Noise and Marine Mammals, Washington, DC. Price, G. R. (1998) “Engineering issues in reducing auditory hazard...

  16. Nicotine, Auditory Sensory Memory, and sustained Attention in a Human Ketamine Model of Schizophrenia: Moderating Influence of a Hallucinatory Trait

    Science.gov (United States)

    Knott, Verner; Shah, Dhrasti; Millar, Anne; McIntosh, Judy; Fisher, Derek; Blais, Crystal; Ilivitsky, Vadim

    2012-01-01

    Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-d-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN, and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed [reaction time (RT)] and accuracy (increased % hits and d′ and reduced false alarms) on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as RT, were poorer in H-HD (vs. L-HD), and while hit rate and d′ were increased by nicotine in H-HD, RT was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairment and improved attention, particularly in individuals prone to HD. PMID:23060793

  17. Nicotine, auditory sensory memory and attention in a human ketamine model of schizophrenia: moderating influence of a hallucinatory trait

    Directory of Open Access Journals (Sweden)

    Verner eKnott

    2012-09-01

    Full Text Available Background: The procognitive actions of the nicotinic acetylcholine receptor (nAChR) agonist nicotine are believed, in part, to motivate the excessive cigarette smoking in schizophrenia, a disorder associated with deficits in multiple cognitive domains, including low-level auditory sensory processes and higher-order attention-dependent operations. Objectives: As N-methyl-D-aspartate receptor (NMDAR) hypofunction has been shown to contribute to these cognitive impairments, the primary aims of this healthy volunteer study were: (a) to shed light on the separate and interactive roles of nAChR and NMDAR systems in the modulation of auditory sensory memory (and sustained attention), as indexed by the auditory event-related brain potential (ERP) – mismatch negativity (MMN), and (b) to examine how these effects are moderated by a predisposition to auditory hallucinations/delusions (HD). Methods: In a randomized, double-blind, placebo-controlled design involving a low intravenous dose of ketamine (0.04 mg/kg) and a 4 mg dose of nicotine gum, MMN and performance on a rapid visual information processing (RVIP) task of sustained attention were examined in 24 healthy controls psychometrically stratified as being lower (L-HD, n = 12) or higher (H-HD) for HD propensity. Results: Ketamine significantly slowed MMN, and reduced MMN in H-HD, with amplitude attenuation being blocked by the co-administration of nicotine. Nicotine significantly enhanced response speed (reaction time) and accuracy (increased % hits and d′ and reduced false alarms) on the RVIP, with improved performance accuracy being prevented when nicotine was administered with ketamine. Both % hits and d′, as well as reaction time, were poorer in H-HD (vs. L-HD), and while hit rate and d′ were increased by nicotine in H-HD, reaction time was slowed by ketamine in L-HD. Conclusions: Nicotine alleviated ketamine-induced sensory memory impairments and improved attention, particularly in individuals prone to HD.

  18. Short wavelength light filtering by the natural human lens and IOLs -- implications for entrainment of circadian rhythm

    DEFF Research Database (Denmark)

    Brøndsted, Adam Elias; Lundeman, Jesper Holm; Kessel, Line

    2013-01-01

    Photoentrainment of circadian rhythm begins with the stimulation of melanopsin containing retinal ganglion cells that respond directly to blue light. With age, the human lens becomes a strong colour filter attenuating transmission of short wavelengths. The purpose of the study was to examine...

  19. Adaptive filtering methods for identifying cross-frequency couplings in human EEG.

    Directory of Open Access Journals (Sweden)

    Jérôme Van Zaen

    Full Text Available Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations.

  20. Adaptive filtering methods for identifying cross-frequency couplings in human EEG.

    Science.gov (United States)

    Van Zaen, Jérôme; Murray, Micah M; Meuli, Reto A; Vesin, Jean-Marc

    2013-01-01

    Oscillations have been increasingly recognized as a core property of neural responses that contribute to spontaneous, induced, and evoked activities within and between individual neurons and neural ensembles. They are considered as a prominent mechanism for information processing within and communication between brain areas. More recently, it has been proposed that interactions between periodic components at different frequencies, known as cross-frequency couplings, may support the integration of neuronal oscillations at different temporal and spatial scales. The present study details methods based on an adaptive frequency tracking approach that improve the quantification and statistical analysis of oscillatory components and cross-frequency couplings. This approach allows for time-varying instantaneous frequency, which is particularly important when measuring phase interactions between components. We compared this adaptive approach to traditional band-pass filters in their measurement of phase-amplitude and phase-phase cross-frequency couplings. Evaluations were performed with synthetic signals and EEG data recorded from healthy humans performing an illusory contour discrimination task. First, the synthetic signals in conjunction with Monte Carlo simulations highlighted two desirable features of the proposed algorithm vs. classical filter-bank approaches: resilience to broad-band noise and oscillatory interference. Second, the analyses with real EEG signals revealed statistically more robust effects (i.e. improved sensitivity) when using an adaptive frequency tracking framework, particularly when identifying phase-amplitude couplings. This was further confirmed after generating surrogate signals from the real EEG data. Adaptive frequency tracking appears to improve the measurements of cross-frequency couplings through precise extraction of neuronal oscillations.
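    The classical filter-bank baseline that this record's adaptive method is compared against can be sketched as follows. This is an illustrative reconstruction, not the authors' pipeline: the brick-wall FFT filters, the 6 Hz / 40 Hz synthetic bands, and the Tort-style modulation index are all assumptions chosen to give a self-contained demonstration of phase-amplitude coupling.

```python
import numpy as np

FS = 500.0  # sampling rate in Hz (illustrative)

def bandpass(sig, lo, hi, fs=FS):
    """Ideal (brick-wall) FFT band-pass filter, standing in for the
    Butterworth/FIR filters of a classical filter bank."""
    spec = np.fft.rfft(sig)
    freqs = np.fft.rfftfreq(sig.size, 1.0 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, sig.size)

def analytic(sig):
    """Analytic signal via the frequency-domain Hilbert transform."""
    n = sig.size
    spec = np.fft.fft(sig)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spec * h)

def modulation_index(sig, phase_band, amp_band, n_bins=18):
    """Tort-style phase-amplitude coupling: bin the high-band amplitude
    by the low-band phase and measure the deviation of that distribution
    from uniformity (KL divergence normalized by log(n_bins))."""
    phase = np.angle(analytic(bandpass(sig, *phase_band)))
    amp = np.abs(analytic(bandpass(sig, *amp_band)))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    mean_amp = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])
    p = mean_amp / mean_amp.sum()
    return float(np.sum(p * np.log(p * n_bins)) / np.log(n_bins))

# Synthetic check: a 40 Hz rhythm whose amplitude follows a 6 Hz phase
# should score higher than an uncoupled superposition of the same parts.
rng = np.random.default_rng(0)
t = np.arange(0, 20, 1.0 / FS)
slow = np.sin(2 * np.pi * 6 * t)
coupled = (1 + 0.8 * slow) * np.sin(2 * np.pi * 40 * t) + slow \
    + 0.1 * rng.standard_normal(t.size)
uncoupled = np.sin(2 * np.pi * 40 * t) + slow \
    + 0.1 * rng.standard_normal(t.size)
mi_coupled = modulation_index(coupled, (4, 8), (30, 50))
mi_uncoupled = modulation_index(uncoupled, (4, 8), (30, 50))
```

    On this synthetic pair the coupled signal yields a clearly larger modulation index than the uncoupled one, which is the kind of contrast the adaptive frequency tracking framework aims to sharpen under broad-band noise.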

  1. Age-Associated Reduction of Asymmetry in Human Central Auditory Function: A 1H-Magnetic Resonance Spectroscopy Study

    Directory of Open Access Journals (Sweden)

    Xianming Chen

    2013-01-01

    The aim of this study was to investigate the effects of age on hemispheric asymmetry in the auditory cortex after pure tone stimulation. Ten young and 8 older healthy volunteers took part in this study. Two-dimensional multivoxel 1H-magnetic resonance spectroscopy scans were performed before and after stimulation. The ratios of N-acetylaspartate (NAA), glutamate/glutamine (Glx), and γ-amino butyric acid (GABA) to creatine (Cr) were determined and compared between the two groups. The distribution of metabolites between the left and right auditory cortex was also determined. Before stimulation, left and right side NAA/Cr and right side GABA/Cr were significantly lower, whereas right side Glx/Cr was significantly higher in the older group compared with the young group. After stimulation, left and right side NAA/Cr and GABA/Cr were significantly lower, whereas left side Glx/Cr was significantly higher in the older group compared with the young group. There was obvious asymmetry in right side Glx/Cr and left side GABA/Cr after stimulation in the young group, but not in the older group. In summary, there is marked hemispheric asymmetry in auditory cortical metabolites following pure tone stimulation in young, but not older, adults. This reduced asymmetry in older adults may at least in part underlie the speech perception difficulties/presbycusis experienced by aging adults.

  2. LANGUAGE EXPERIENCE SHAPES PROCESSING OF PITCH RELEVANT INFORMATION IN THE HUMAN BRAINSTEM AND AUDITORY CORTEX: ELECTROPHYSIOLOGICAL EVIDENCE.

    Science.gov (United States)

    Krishnan, Ananthanarayan; Gandour, Jackson T

    2014-12-01

    Pitch is a robust perceptual attribute that plays an important role in speech, language, and music. As such, it provides an analytic window to evaluate how neural activity relevant to pitch undergoes transformation from early sensory to later cognitive stages of processing in a well-coordinated hierarchical network that is subject to experience-dependent plasticity. We review recent evidence of language experience-dependent effects in pitch processing based on comparisons of native vs. nonnative speakers of a tonal language from electrophysiological recordings in the auditory brainstem and auditory cortex. We present evidence that shows enhanced representation of linguistically relevant pitch dimensions or features at both the brainstem and cortical levels, with a stimulus-dependent preferential activation of the right hemisphere in native speakers of a tone language. We argue that neural representation of pitch-relevant information in the brainstem and early sensory-level processing in the auditory cortex is shaped by the perceptual salience of domain-specific features. While both stages of processing are shaped by language experience, neural representations are transformed and fundamentally different at each biological level of abstraction. The representation of pitch-relevant information in the brainstem is more fine-grained spectrotemporally, as it reflects sustained neural phase-locking to pitch-relevant periodicities contained in the stimulus. In contrast, the cortical pitch-relevant neural activity reflects primarily a series of transient temporal neural events synchronized to certain temporal attributes of the pitch contour. We argue that experience-dependent enhancement of pitch representation for Chinese listeners most likely reflects an interaction between higher-level cognitive processes and early sensory-level processing to improve representations of behaviorally relevant features that contribute optimally to perception. It is our view that long…

  3. Towards label-free evaluation of oxidative stress in human skin exposed to sun filters (Conference Presentation)

    Science.gov (United States)

    Osseiran, Sam; Wang, Hequn; Suita, Yusuke; Roider, Elisabeth; Fisher, David E.; Evans, Conor L.

    2016-02-01

    Skin cancer, including basal cell carcinoma, squamous cell carcinoma, and melanoma, is the most common form of cancer in North America. Paradoxically, skin cancer incidence is steadily rising despite the growing use of sunscreens over the past decades. One potential explanation for this discrepancy involves the sun filters in sunscreen, which are responsible for blocking harmful ultraviolet radiation. It is proposed that these agents may produce reactive oxygen species (ROS) at the site of application, thereby generating oxidative stress in skin that gives rise to genetic mutations, which may explain the rising incidence of skin cancer. To test this hypothesis, ex vivo human skin was treated with five common chemical sun filters (avobenzone, octocrylene, homosalate, octisalate, and oxybenzone) as well as two physical sun filters (zinc oxide compounds), both with and without UV irradiation. To non-invasively evaluate oxidative stress, two-photon excitation fluorescence (2PEF) and fluorescence lifetime imaging microscopy (FLIM) of the skin samples were used to monitor levels of NADH and FAD, two key cofactors in cellular redox metabolism. The relative redox state of the skin was assessed based on the fluorescence intensities and lifetimes of these endogenous cofactors. While the sun filters were indeed shown to have a protective effect against UV radiation, it was observed that they also generate oxidative stress in skin, even in the absence of UV light. These results suggest that sun-filter-induced ROS production requires more careful study, especially in how these reactive species impact the rise of skin cancer.

  4. The impact of air pollution from used ventilation filters on human comfort and health

    DEFF Research Database (Denmark)

    Clausen, Geo; Alm, O.; Fanger, Povl Ole

    2002-01-01

    The comfort and health of 30 women were studied during 4 hours' exposure in an experimental room with either a used or a new filter present in the ventilation system. All other environmental parameters were kept constant. The presence of the used filter in the ventilation system had a significant...

  5. Interaction of speech and script in human auditory cortex: insights from neuro-imaging and effective connectivity.

    Science.gov (United States)

    van Atteveldt, Nienke; Roebroeck, Alard; Goebel, Rainer

    2009-12-01

    In addition to visual information from the face of the speaker, a less natural, but nowadays extremely important, visual component of speech is its representation in script. This review examines neuro-imaging studies aimed at understanding how speech and script are associated in the adult "literate" brain. The reviewed studies focused on the role of different stimulus and task factors and effective connectivity between different brain regions. The studies are summarized in a neural mechanism for the integration of speech and script that can serve as a basis for future studies addressing (the failure of) literacy acquisition. In this proposed mechanism, speech sound processing in auditory cortex is modulated by co-presented visual letters, depending on the congruency of the letter-sound pairs. Other factors of influence are temporal correspondence, input quality, and task instruction. We present results showing that the modulation of auditory cortex is most likely mediated by feedback from heteromodal areas in the superior temporal cortex, but direct influences from visual cortex are not excluded. The influence of script on speech sound processing occurs automatically and shows extended development during reading acquisition. This review concludes with suggestions to answer currently open questions and get closer to understanding the neural basis of normal and impaired literacy.

  6. Sexual dimorphism of the lateral angle of the internal auditory canal and its potential for sex estimation of burned human skeletal remains.

    Science.gov (United States)

    Gonçalves, David; Thompson, Tim J U; Cunha, Eugénia

    2015-09-01

    The potential of the petrous bone for sex estimation has been recurrently investigated in the past because it is very resilient and therefore tends to preserve rather well. The sexual dimorphism of the lateral angle of the internal auditory canal was investigated in two samples of cremated Portuguese individuals in order to assess its usefulness for sex estimation in burned remains. These comprised the cremated petrous bones from fleshed cadavers (N = 54) and from dry and disarticulated bones (N = 36). Although differences between males and females were more patent in the sample of skeletons, none presented very significant sexual dimorphism, thus precluding any attempt at sex estimation. This may have been the result of difficulties in applying the method and of a differential impact of heat-induced warping, which is known to be less frequent in cremains from dry skeletons. Results suggest that the lateral angle method cannot be applied to burned human skeletal remains.

  7. Auditory object salience: human cortical processing of non-biological action sounds and their acoustic signal attributes.

    Science.gov (United States)

    Lewis, James W; Talkington, William J; Tallaksen, Katherine C; Frum, Chris A

    2012-01-01

    Whether viewed or heard, an object in action can be segmented as a distinct salient event based on a number of different sensory cues. In the visual system, several low-level attributes of an image are processed along parallel hierarchies, involving intermediate stages wherein gross-level object form and/or motion features are extracted prior to stages that show greater specificity for different object categories (e.g., people, buildings, or tools). In the auditory system, though relying on a rather different set of low-level signal attributes, meaningful real-world acoustic events and "auditory objects" can also be readily distinguished from background scenes. However, the nature of the acoustic signal attributes or gross-level perceptual features that may be explicitly processed along intermediate cortical processing stages remain poorly understood. Examining mechanical and environmental action sounds, representing two distinct non-biological categories of action sources, we had participants assess the degree to which each sound was perceived as object-like versus scene-like. We re-analyzed data from two of our earlier functional magnetic resonance imaging (fMRI) task paradigms (Engel et al., 2009) and found that scene-like action sounds preferentially led to activation along several midline cortical structures, but with strong dependence on listening task demands. In contrast, bilateral foci along the superior temporal gyri (STG) showed parametrically increasing activation to action sounds rated as more "object-like," independent of sound category or task demands. Moreover, these STG regions also showed parametric sensitivity to spectral structure variations (SSVs) of the action sounds-a quantitative measure of change in entropy of the acoustic signals over time-and the right STG additionally showed parametric sensitivity to measures of mean entropy and harmonic content of the environmental sounds. Analogous to the visual system, intermediate stages of the

  8. Multiple Human Tracking Using Particle Filter with Gaussian Process Dynamical Model

    Directory of Open Access Journals (Sweden)

    Wang Jing

    2008-01-01

    We present a particle filter-based multitarget tracking method incorporating a Gaussian process dynamical model (GPDM) to improve robustness in multitarget tracking. With the particle filter Gaussian process dynamical model (PFGPDM), a high-dimensional target trajectory dataset of the observation space is projected to a low-dimensional latent space in a nonlinear probabilistic manner, which is then used to classify object trajectories, predict the next motion state, and provide Gaussian process dynamical samples for the particle filter. In addition, Histogram-Bhattacharyya, GMM Kullback-Leibler, and rotation-invariant appearance models are employed, respectively, and compared in the particle filter as complementary features to the coordinate data used in GPDM. The simulation results demonstrate that the approach can track more than four targets with reasonable runtime overhead and performance. In addition, it can successfully deal with occasional missing frames and temporary occlusion.
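    For readers unfamiliar with the particle-filter backbone of the PFGPDM, a generic bootstrap (SIR) filter on a toy 1-D tracking problem is sketched below. It omits the GPDM latent-space dynamics and the appearance models entirely; the random-walk dynamics, particle count, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_particle_filter(observations, n_particles=500, q=0.1, r=1.0):
    """Generic bootstrap (SIR) particle filter for a 1-D random-walk
    state observed in Gaussian noise: propagate particles through the
    dynamics, weight them by the measurement likelihood, form the
    posterior-mean estimate, and resample."""
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = np.empty(len(observations))
    for k, z in enumerate(observations):
        particles = particles + rng.normal(0.0, q, n_particles)  # dynamics
        weights = np.exp(-0.5 * ((z - particles) / r) ** 2)      # likelihood
        weights /= weights.sum()
        estimates[k] = np.sum(weights * particles)               # MMSE estimate
        particles = particles[rng.choice(n_particles, n_particles, p=weights)]
    return estimates

# Recover a slow sinusoidal trajectory from heavily noisy measurements.
t = np.linspace(0.0, 4 * np.pi, 200)
truth = np.sin(t)
observed = truth + rng.normal(0.0, 1.0, t.size)
estimate = bootstrap_particle_filter(observed)
rmse_raw = np.sqrt(np.mean((observed - truth) ** 2))
rmse_pf = np.sqrt(np.mean((estimate - truth) ** 2))
```

    The PFGPDM replaces the random-walk propagation step with samples drawn from the learned GPDM, which is what lets it predict plausible human trajectories through missing frames and occlusion.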

  9. Auditory model inversion and its application

    Institute of Scientific and Technical Information of China (English)

    ZHAO Heming; WANG Yongqi; CHEN Xueqin

    2005-01-01

    Auditory models have been applied to several aspects of the speech signal processing field and appear to be effective in performance. This paper presents the inverse transform of each stage of one widely used auditory model. First, it is necessary to invert the correlogram and reconstruct phase information by repeated iterations in order to obtain the auditory-nerve firing rate. The next step is to obtain the negative parts of the signal via the reverse process of the HWR (half-wave rectification). Finally, the functions of the inner hair cell/synapse model and the Gammatone filters have to be inverted. Thus the whole auditory model inversion has been achieved. An application of noisy speech enhancement based on the auditory model inversion algorithm is proposed. Many experiments show that this method is effective in reducing noise, especially when the SNR of the noisy speech is low, where it outperforms other methods. The auditory model inversion method given in this paper is therefore applicable to the speech enhancement field.

  10. DETECTION OF VESSELS IN HUMAN FOREARMS USING 2D MATCHED FILTERING

    DEFF Research Database (Denmark)

    Savarimuthu, Thiusius Rajeeth; Sørensen, Anders Stengaard

    2008-01-01

    Detection of blood vessels using 2D matched filtering in images of the forearm. The images were produced using near-infrared light.
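    A minimal sketch of 2D matched filtering in the spirit of this record: a zero-mean vessel-like template is correlated with a synthetic near-infrared frame, and the response peak localizes the dark vessel column. The template shape, image size, and noise level are assumptions for the demonstration, not the authors' settings.

```python
import numpy as np

def matched_filter_2d(image, template):
    """Slide a zero-mean template over the image and return the
    cross-correlation response ('valid' mode, brute force)."""
    t = template - template.mean()
    th, tw = t.shape
    H, W = image.shape
    out = np.empty((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + th, j:j + tw] * t)
    return out

# Synthetic near-infrared-like frame: bright tissue with one dark
# vertical vessel at column 25, plus sensor noise.
rng = np.random.default_rng(2)
img = np.full((40, 40), 0.8) + 0.05 * rng.standard_normal((40, 40))
img[:, 25] -= 0.4  # vessels absorb more NIR light, so they image darker

# Template of a dark line on a bright background (vessel cross-section).
template = np.full((15, 5), 0.8)
template[:, 2] = 0.4

response = matched_filter_2d(img, template)
vessel_col = np.unravel_index(response.argmax(), response.shape)[1] \
    + template.shape[1] // 2  # shift to the template's center column
```

    Subtracting the template mean makes flat tissue regions score near zero, so the response peaks only where the image locally matches the vessel profile.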

  11. Auditory Imagery: Empirical Findings

    Science.gov (United States)

    Hubbard, Timothy L.

    2010-01-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d)…

  12. Implicit learning of predictable sound sequences modulates human brain responses at different levels of the auditory hierarchy

    Directory of Open Access Journals (Sweden)

    Françoise eLecaignard

    2015-09-01

    Deviant stimuli, violating regularities in a sensory environment, elicit the Mismatch Negativity (MMN), largely described in the event-related potential literature. While it is widely accepted that the MMN reflects more than basic change detection, a comprehensive description of the mental processes modulating this response is still lacking. Within the framework of predictive coding, deviance processing is part of an inference process in which prediction errors (the mismatch between incoming sensations and predictions established through experience) are minimized. In this view, the MMN is a measure of prediction error, which yields specific expectations regarding its modulation by various experimental factors. In particular, it predicts that the MMN should decrease as the occurrence of a deviance becomes more predictable. We conducted a passive oddball EEG study and manipulated the predictability of sound sequences by means of different temporal structures. Importantly, our design allows comparing mismatch responses elicited by predictable and unpredictable violations of a simple repetition rule and therefore departs from previous studies that investigate violations of regularities at different time scales. We observed a decrease of the MMN with predictability and, interestingly, a similar effect at earlier latencies, within 70 ms after deviance onset. Following these pre-attentive responses, a reduced P3a was measured in the case of predictable deviants. We conclude that early and late deviance responses reflect prediction errors, triggering belief updating within the auditory hierarchy. Besides, in this passive study, such perceptual inference appears to be modulated by higher-level implicit learning of sequence statistical structures. Our findings argue for a hierarchical model of auditory processing in which predictive coding enables implicit extraction of environmental regularities.

  13. Gene expression profiling in human peripheral blood mononuclear cells using high-density filter-based cDNA microarrays.

    Science.gov (United States)

    Walker, J; Rigley, K

    2000-05-26

    Microarray technology has provided the ability to analyse the expression profiles for thousands of genes in parallel. The need for highly specialised equipment to use certain types of microarrays has restricted the application of this technology to a small number of dedicated laboratories. High-density filter-based cDNA microarrays provide a low-cost option for performing high-throughput gene expression analysis. We have used a model system in which filter-based cDNA microarrays representing over 4000 known human genes were used to monitor the kinetics of gene expression in human peripheral blood mononuclear cells (PBMCs) stimulated with phytohaemagluttinin (PHA). Using software-based cluster analysis, we identified 104 genes that altered in expression levels in response to PHA stimulation of PBMCs and showed that there was a considerable overlap between genes with similar temporal expression profiles and similar functional roles. Comparison of microarray quantitation with quantitative PCR showed almost identical expression profiles for a number of genes. Coupled with the fact that our findings are in agreement with a large number of independent observations, we conclude that the use of filter-based cDNA microarrays is a valid and accurate method for high-throughput gene expression profiling.

  14. Embolic capture with updated intra-aortic filter during coronary artery bypass grafting and transaortic transcatheter aortic valve implantation: first-in-human experience.

    Science.gov (United States)

    Ye, Jian; Webb, John G

    2014-12-01

    We report our first-in-human clinical experience in the use of the new version of the EMBOL-X intra-aortic filter (Edwards Lifesciences Corporation, Irvine, Calif) to capture embolic material during transaortic transcatheter aortic valve implantation and cardiac surgery. Five patients were enrolled into the first-in-human clinical assessment of the new version of the EMBOL-X intra-aortic filter. Three patients underwent coronary artery bypass grafting, and 2 patients underwent transaortic transcatheter aortic valve implantation. During coronary artery bypass grafting, the filter was deployed before clamping of the aorta and removal of the aortic clamp. In contrast, the filter was deployed before aortic puncture for transaortic transcatheter aortic valve implantation and kept in the aorta throughout the entire procedure. The filter introducer sheath and filter were easily placed and removed without difficulty. There were no complications related to the use of the filter. Postoperative examination of the retrieved filters revealed the presence of multiple microemboli in the filters from all 5 cases. Histologic study revealed various kinds of tissue and thrombus. This first-in-human clinical experience has demonstrated the safety and feasibility of using the new version of the EMBOL-X intra-aortic filter during either cardiac surgery or transaortic transcatheter aortic valve implantation. We believe that the combination of the transaortic approach without aortic arch manipulation and the use of the EMBOL-X filter with a high capture rate is a promising strategy to reduce the incidence of embolic complications during transcatheter aortic valve implantation. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  15. The Nano-filters as the tools for the management of the water imbalance in the human society

    Science.gov (United States)

    Singh, R. P.; Kontar, V.

    2011-12-01

    ultra-thin nanoscale fibers, which filter out contaminants, plus active carbon granules, which kill bacteria. Carbon nano-tube filters exhibit chemical-species selectivity, higher physical strength, higher temperature tolerance, a more rugged process, more rapid filtration, regeneration via thermal means rather than physical removal, and lower costs. The nano-filters remove toxic or unwanted bivalent ions (ions with 2 or more charges), such as lead, iron, nickel, and mercury. Nano-materials and nano-filters will help solve the problems of water imbalance management in human society. We therefore present some nano-applications in session H138, "Imbalance of Water in Nature".

  16. Sentence Syntax and Content in the Human Temporal Lobe: An fMRI Adaptation Study in Auditory and Visual Modalities

    Energy Technology Data Exchange (ETDEWEB)

    Devauchelle, A.D.; Dehaene, S.; Pallier, C. [INSERM, Gif sur Yvette (France); Devauchelle, A.D.; Dehaene, S.; Pallier, C. [CEA, DSV, I2BM, NeuroSpin, F-91191 Gif Sur Yvette (France); Devauchelle, A.D.; Pallier, C. [Univ. Paris 11, Orsay (France); Oppenheim, C. [Univ Paris 05, Ctr Hosp St Anne, Paris (France); Rizzi, L. [Univ Siena, CISCL, I-53100 Siena (Italy); Dehaene, S. [Coll France, F-75231 Paris (France)

    2009-07-01

    Priming effects have been well documented in behavioral psycholinguistics experiments: The processing of a word or a sentence is typically facilitated when it shares lexico-semantic or syntactic features with a previously encountered stimulus. Here, we used fMRI priming to investigate which brain areas show adaptation to the repetition of a sentence's content or syntax. Participants read or listened to sentences organized in series which could or could not share similar syntactic constructions and/or lexico-semantic content. The repetition of lexico-semantic content yielded adaptation in most of the temporal and frontal sentence processing network, both in the visual and the auditory modalities, even when the same lexico-semantic content was expressed using variable syntactic constructions. No fMRI adaptation effect was observed when the same syntactic construction was repeated. Yet behavioral priming was observed at both syntactic and semantic levels in a separate experiment where participants detected sentence endings. We discuss a number of possible explanations for the absence of syntactic priming in the fMRI experiments, including the possibility that the conglomerate of syntactic properties defining 'a construction' is not an actual object assembled during parsing. (authors)

  17. Auditory imagery: empirical findings.

    Science.gov (United States)

    Hubbard, Timothy L

    2010-03-01

    The empirical literature on auditory imagery is reviewed. Data on (a) imagery for auditory features (pitch, timbre, loudness), (b) imagery for complex nonverbal auditory stimuli (musical contour, melody, harmony, tempo, notational audiation, environmental sounds), (c) imagery for verbal stimuli (speech, text, in dreams, interior monologue), (d) auditory imagery's relationship to perception and memory (detection, encoding, recall, mnemonic properties, phonological loop), and (e) individual differences in auditory imagery (in vividness, musical ability and experience, synesthesia, musical hallucinosis, schizophrenia, amusia) are considered. It is concluded that auditory imagery (a) preserves many structural and temporal properties of auditory stimuli, (b) can facilitate auditory discrimination but interfere with auditory detection, (c) involves many of the same brain areas as auditory perception, (d) is often but not necessarily influenced by subvocalization, (e) involves semantically interpreted information and expectancies, (f) involves depictive components and descriptive components, (g) can function as a mnemonic but is distinct from rehearsal, and (h) is related to musical ability and experience (although the mechanisms of that relationship are not clear).

  18. Auditory processing in fragile x syndrome.

    Science.gov (United States)

    Rotschafer, Sarah E; Razak, Khaleel A

    2014-01-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle response is seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, and circuit-level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  19. Auditory Processing in Fragile X Syndrome

    Directory of Open Access Journals (Sweden)

    Sarah E Rotschafer

    2014-02-01

    Fragile X syndrome (FXS) is an inherited form of intellectual disability and autism. Among other symptoms, FXS patients demonstrate abnormalities in sensory processing and communication. Clinical, behavioral, and electrophysiological studies consistently show auditory hypersensitivity in humans with FXS. Consistent with observations in humans, the Fmr1 KO mouse model of FXS also shows evidence of altered auditory processing and communication deficiencies. A well-known and commonly used phenotype in pre-clinical studies of FXS is audiogenic seizures. In addition, increased acoustic startle is also seen in the Fmr1 KO mice. In vivo electrophysiological recordings indicate hyper-excitable responses, broader frequency tuning, and abnormal spectrotemporal processing in primary auditory cortex of Fmr1 KO mice. Thus, auditory hyper-excitability is a robust, reliable, and translatable biomarker in Fmr1 KO mice. Abnormal auditory evoked responses have been used as outcome measures to test therapeutics in FXS patients. The presence of similarly abnormal responses in Fmr1 KO mice suggests that the underlying cellular mechanisms can be addressed. Sensory cortical deficits are relatively more tractable from a mechanistic perspective than more complex social behaviors that are typically studied in autism and FXS. The focus of this review is to bring together clinical, functional, and structural studies in humans with electrophysiological and behavioral studies in mice to make the case that auditory hypersensitivity provides a unique opportunity to integrate molecular, cellular, and circuit-level studies with behavioral outcomes in the search for therapeutics for FXS and other autism spectrum disorders.

  20. Non-auditory Effect of Noise Pollution and Its Risk on Human Brain Activity in Different Audio Frequency Using Electroencephalogram Complexity.

    Science.gov (United States)

    Allahverdy, Armin; Jafari, Amir Homayoun

    2016-10-01

    Noise pollution is one of the most harmful ambient disturbances. It may cause many deficits in the ability and activity of persons in urban and industrial areas, and it may also cause many kinds of psychopathology. It is therefore very important to measure the risk of this pollution in different areas. This study was conducted in the Department of Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, from June to September 2015; different frequencies of noise pollution were played for volunteers while a 16-channel EEG signal was recorded synchronously. Then, using the fractal dimension and the relative power of the beta sub-band of the EEG, the complexity of the EEG signals was measured. We observed that the average complexity of brain activity increased in the middle of the audio frequency range and that the complexity map of brain activity changed with frequency, which can show the effects of frequency changes on human brain activity. The complexity of the EEG is a good measure for ranking the annoyance and non-auditory risk of noise pollution on human brain activity.
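    The record does not state which fractal-dimension estimator was used; Higuchi's method is a common choice for EEG complexity and is sketched here on synthetic signals as an illustration of how such a measure separates a regular rhythm from noise-like activity.

```python
import numpy as np

def higuchi_fd(signal, k_max=8):
    """Higuchi fractal dimension: estimate the average curve length
    L(k) at coarse-graining scales k = 1..k_max, then take the slope
    of log L(k) against log(1/k). Smooth signals score near 1, white
    noise near 2."""
    x = np.asarray(signal, dtype=float)
    n = x.size
    ks = np.arange(1, k_max + 1)
    log_lengths = []
    for k in ks:
        lengths_m = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if idx.size < 2:
                continue
            # normalized length of the sub-sampled curve starting at m
            dist = np.abs(np.diff(x[idx])).sum()
            lengths_m.append(dist * (n - 1) / ((idx.size - 1) * k * k))
        log_lengths.append(np.log(np.mean(lengths_m)))
    slope, _ = np.polyfit(np.log(1.0 / ks), log_lengths, 1)
    return float(slope)

rng = np.random.default_rng(2)
t = np.linspace(0.0, 4 * np.pi, 1000)
fd_smooth = higuchi_fd(np.sin(t))                 # regular rhythm
fd_noise = higuchi_fd(rng.standard_normal(1000))  # irregular activity
```

    In a study like this one, such a dimension would be computed per channel and per noise-frequency condition, so that higher values flag more irregular (complex) cortical activity.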

  1. Human Arm Motion Tracking by Orientation-Based Fusion of Inertial Sensors and Kinect Using Unscented Kalman Filter.

    Science.gov (United States)

    Atrsaei, Arash; Salarieh, Hassan; Alasty, Aria

    2016-09-01

    Due to the many applications of human motion capture techniques, developing low-cost methods that are applicable in nonlaboratory environments is under consideration. MEMS inertial sensors and Kinect are two low-cost devices that can be utilized in home-based motion capture systems, e.g., home-based rehabilitation. In this work, an unscented Kalman filter approach was developed, based on the complementary properties of Kinect and inertial sensors, to fuse the orientation data of these two devices for human arm motion tracking, both with a stationary shoulder joint and during whole-body movement. A new measurement model for the fusion algorithm was obtained that compensates for inertial-sensor drift in highly dynamic motions and for joint occlusion in Kinect. The efficiency of the proposed algorithm was evaluated against an optical motion tracking system; errors were reduced by almost 50% compared to cases in which either the inertial sensors or Kinect alone were used.
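
The complementary idea behind this kind of fusion, a drift-free but noisy camera angle correcting drift-prone gyro integration, can be shown with a scalar Kalman filter. The signal model, gyro bias, and noise levels below are invented for illustration; the paper's unscented filter operates on full 3D orientations, not a single angle.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.01, 10.0
n = int(T / dt)

t = np.arange(n) * dt
truth = np.sin(0.5 * t)                      # true joint angle (rad)
rate = 0.5 * np.cos(0.5 * t)                 # true angular rate (rad/s)

gyro = rate + 0.05                           # gyro rate with a constant 0.05 rad/s bias
kinect = truth + rng.normal(0, 0.05, n)      # "Kinect" angle: noisy but drift-free

# Scalar Kalman filter: state = angle; predict with the gyro, update with Kinect.
Q, R = 1e-5, 0.05 ** 2                       # process / measurement noise (assumed)
x, P = 0.0, 1.0
x_gyro = 0.0                                 # dead-reckoned gyro-only estimate
for k in range(n):
    # predict: integrate the gyro rate
    x += gyro[k] * dt
    P += Q
    x_gyro += gyro[k] * dt
    # update: correct the prediction with the Kinect angle measurement
    K = P / (P + R)
    x += K * (kinect[k] - x)
    P *= (1.0 - K)

err_fused = abs(x - truth[-1])
err_gyro = abs(x_gyro - truth[-1])           # ~0.5 rad of accumulated bias drift
```

The gyro-only estimate drifts by roughly bias times elapsed time, while the fused estimate stays near the truth, which is the qualitative behavior the measurement model in the paper is designed to exploit.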

  2. Effectiveness of a New Lead-Shielding Device and Additional Filter for Reducing Staff and Patient Radiation Exposure During Videofluoroscopic Swallowing Study Using a Human Phantom.

    Science.gov (United States)

    Morishima, Yoshiaki; Chida, Koichi; Muroya, Yoshikazu; Utsumi, Yoshiya

    2017-09-18

    Interventional radiology procedures often involve lengthy exposure to fluoroscopy-derived radiation. We therefore devised a videofluoroscopic swallowing study (VFSS) procedure using a human phantom that proved to protect the patient and physician by reducing the radiation dose. We evaluated a new lead-shielding device and separately attached additional filters (1.0-, 2.0-, and 3.0-mm Al filters and a 0.5-mm Cu filter) during VFSS to reduce the patient's entrance skin dose (ESD). A monitor attached to the human phantom's neck measured the ESD. We also developed another lead shield (VFSS Shielding Box, 1.0-mm Pb equivalent) and tested its efficacy using the human phantom and an ionization chamber radiation survey meter with and without protection from scattered radiation at the physician's position on the phantom. We then measured the scattered radiation (at 90 and 150 cm above the floor) after combining the filters with the VFSS Shielding Box. With the additional filters, the ESD was reduced by 15.4-55.1%. With the VFSS Shielding Box alone, the scattered radiation was reduced by about 10% compared with the dose without additional shielding. With the VFSS Shielding Box and filters combined, the scattered radiation dose was reduced by a maximum of about 44% at the physician's position. Thus, the additional lead-shielding device effectively provided protection from scattered radiation during fluoroscopy. These results indicate that the combined VFSS Shielding Box and filters can effectively reduce the physician's and patient's radiation doses.

  3. Fake or fantasy: rapid dissociation between strategic content monitoring and reality filtering in human memory.

    Science.gov (United States)

    Wahlen, Aurélie; Nahum, Louis; Gabriel, Damien; Schnider, Armin

    2011-11-01

    Memory verification is crucial for meaningful behavior. Orbitofrontal damage may impair verification and induce confabulation and inappropriate acts. The strategic retrieval account explains this state by deficient monitoring of memories' precise content, whereas the reality filter hypothesis explains it by a failure of an orbitofrontal mechanism suppressing the interference of memories that do not pertain to reality. The distinctiveness of these mechanisms has recently been questioned. Here, we juxtaposed these 2 mechanisms using high-resolution evoked potentials in healthy subjects who performed 2 runs of a continuous recognition task which contained pictures that precisely matched or only resembled previous pictures. We found behavioral and electrophysiological dissociation: Strategic content monitoring was maximally challenged by stimuli resembling previous ones, whereas reality filtering was maximally challenged by identical stimuli. Evoked potentials dissociated at 200-300 ms: Strategic monitoring induced a strong frontal negativity and a distinct cortical map configuration, which were particularly weakly expressed in reality filtering. Recognition of real repetitions was expressed at 300-400 ms, associated with ventromedial prefrontal activation. Thus, verification of a memory's concordance with the past (its content) dissociates from the verification of its concordance with the present. The role of these memory control mechanisms in the generation of confabulations and disorientation is discussed.

  4. The Harmonic Organization of Auditory Cortex

    Directory of Open Access Journals (Sweden)

    Xiaoqin eWang

    2013-12-01

    Full Text Available A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given its widespread presence in many aspects of the hearing environment, it is natural to expect harmonicity to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that respond selectively to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.

  5. Perspectives on the design of musical auditory interfaces

    OpenAIRE

    Leplatre, G.; Brewster, S.A.

    1998-01-01

    This paper addresses the issue of music as a communication medium in auditory human-computer interfaces. So far, psychoacoustics has had a great influence on the development of auditory interfaces, directly and through music cognition. We suggest that a better understanding of the processes involved in the perception of actual musical excerpts should allow musical auditory interface designers to exploit the communicative potential of music. In this respect, we argue that the real advantage of...

  6. Decreases in energy and increases in phase locking of event-related oscillations to auditory stimuli occur during adolescence in human and rodent brain.

    Science.gov (United States)

    Ehlers, Cindy L; Wills, Derek N; Desikan, Anita; Phillips, Evelyn; Havstad, James

    2014-01-01

    Synchrony of phase (phase locking) of event-related oscillations (EROs) within and between different brain areas has been suggested to reflect communication exchange between neural networks and as such may be a sensitive and translational measure of changes in brain remodeling that occur during adolescence. This study sought to investigate developmental changes in EROs using a similar auditory event-related potential (ERP) paradigm in both rats and humans. Energy and phase variability of EROs collected from 38 young adult men (aged 18-25 years), 33 periadolescent boys (aged 10-14 years), 15 male periadolescent rats [at postnatal day (PD) 36] and 19 male adult rats (at PD103) were investigated. Three channels of ERP data (frontal cortex, central cortex and parietal cortex) were collected from the humans using an 'oddball plus noise' paradigm that was presented under passive (no behavioral response required) conditions in the periadolescents and under active conditions (where each subject was instructed to depress a counter each time he detected an infrequent target tone) in adults and adolescents. ERPs were recorded in rats using only the passive paradigm. In order to compare the tasks used in rats to those used in humans, we first studied whether three ERO measures [energy, phase locking index (PLI) within an electrode site and phase difference locking index (PDLI) between different electrode sites] differentiated the 'active' from 'passive' ERP tasks. Secondly, we explored our main question of whether the three ERO measures differentiated adults from periadolescents in a similar manner in both humans and rats. No significant changes were found in measures of ERO energy between the active and passive tasks in the periadolescent human participants. There was a smaller but significant increase in PLI but not PDLI as a function of active task requirements. Developmental differences were found in energy, PLI and PDLI values between the periadolescents and adults in

  7. Auditory function in the Tc1 mouse model of down syndrome suggests a limited region of human chromosome 21 involved in otitis media.

    Directory of Open Access Journals (Sweden)

    Stephanie Kuhn

    Full Text Available Down syndrome is one of the most common congenital disorders, leading to a wide range of health problems in humans, including frequent otitis media. The Tc1 mouse carries a significant part of human chromosome 21 (Hsa21) in addition to the full set of mouse chromosomes and shares many phenotypes observed in humans affected by Down syndrome with trisomy of chromosome 21. However, it is unknown whether Tc1 mice exhibit a hearing phenotype and might thus represent a good model for understanding the hearing loss that is common in Down syndrome. In this study we carried out a structural and functional assessment of hearing in Tc1 mice. Auditory brainstem response (ABR) measurements in Tc1 mice showed normal thresholds compared to littermate controls, and ABR waveform latencies and amplitudes were equivalent to controls. The gross anatomy of the middle and inner ears was also similar between Tc1 and control mice. The physiological properties of cochlear sensory receptors (inner and outer hair cells, IHCs and OHCs) were investigated using single-cell patch clamp recordings from acutely dissected cochleae. Adult Tc1 IHCs exhibited normal resting membrane potentials and expressed all K+ currents characteristic of control hair cells. However, the size of the large-conductance (BK) Ca2+-activated K+ current (IK,f), which enables rapid voltage responses essential for accurate sound encoding, was increased in Tc1 IHCs. All physiological properties investigated in OHCs were indistinguishable between the two genotypes. The normal functional hearing and gross structural anatomy of the middle and inner ears in the Tc1 mouse contrast with those observed in the Ts65Dn model of Down syndrome, which shows otitis media. Genes that are trisomic in Ts65Dn but disomic in Tc1 may predispose to otitis media when an additional copy is active.

  8. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Statistical techniques that can accommodate repeated measures and multivariate predictor variables are therefore essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for quantifying the relationships between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures, because it treats the neural responses across listening conditions as independent measures. In contrast, the LME models allow a systematic approach to incorporating both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages of, and the necessity of applying, mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
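
Why pooled Pearson correlation misleads with repeated measures can be seen in a toy data set (entirely made up for illustration): every subject shows a perfect negative within-subject relationship, yet between-subject baseline differences make the pooled correlation strongly positive.

```python
import numpy as np

# Three hypothetical subjects, three listening conditions each.
# Within every subject, y falls as x rises (within-subject r = -1),
# but subjects with larger x also have larger baseline y.
x = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=float)
y = np.array([[10, 9, 8], [13, 12, 11], [16, 15, 14]], dtype=float)

# Pooled Pearson: ignores the grouping and treats all 9 points as independent.
r_pooled = np.corrcoef(x.ravel(), y.ravel())[0, 1]      # comes out +0.8

# Within-subject centering (what a per-subject random intercept absorbs in an LME):
xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r_within = np.corrcoef(xc.ravel(), yc.ravel())[0, 1]    # comes out -1.0
```

An LME with a random intercept per subject recovers the within-subject slope; the pooled correlation reports the opposite sign, which is exactly the failure mode the abstract describes.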

  9. Binding of (3H)imipramine to human platelet membranes with compensation for saturable binding to filters and its implication for binding studies with brain membranes

    Energy Technology Data Exchange (ETDEWEB)

    Phillips, O.M.; Wood, K.M.; Williams, D.C.

    1984-08-01

    Apparent specific binding of (3H)imipramine to human platelet membranes at high concentrations of imipramine showed deviation from that expected of a single binding site, a result consistent with a low-affinity binding site. The deviation was due to displaceable, saturable binding to the glass fibre filters used in the assays. Imipramine, clomipramine, desipramine, and fluoxetine inhibited binding to filters, whereas 5-hydroxytryptamine and ethanol were ineffective. Experimental conditions were developed that eliminated filter binding, allowing assay of high- and low-affinity binding to membranes. Failure to correct for filter binding may lead to overestimation of the binding parameters Bmax and KD for high-affinity binding to membranes, and may also be misinterpreted as indicating a low-affinity binding component in both platelet and brain membranes. Low-affinity binding (KD less than 2 microM) of imipramine to human platelet membranes was demonstrated and its significance discussed.
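
The overestimation the authors warn about can be reproduced numerically. The site parameters below are invented for illustration, not the paper's values: total counts are modeled as membrane binding plus a saturable filter component, and a filter-only blank (an assay run without membranes) is subtracted to recover the membrane signal.

```python
# Hypothetical one-site parameters (illustrative only)
BMAX_MEM, KD_MEM = 100.0, 0.005     # membrane site: high affinity (KD = 5 nM)
BMAX_FIL, KD_FIL = 80.0, 1.0        # saturable filter site: much lower affinity

def site(L, bmax, kd):
    """One-site binding isotherm B = Bmax * L / (Kd + L)."""
    return bmax * L / (kd + L)

def total_binding(L):
    """What the uncorrected assay measures: membrane plus filter binding."""
    return site(L, BMAX_MEM, KD_MEM) + site(L, BMAX_FIL, KD_FIL)

L_high = 10.0                                   # near-saturating ligand (uM)
apparent_bmax = total_binding(L_high)           # uncorrected plateau estimate
blank = site(L_high, BMAX_FIL, KD_FIL)          # filter-only blank, no membranes
corrected_bmax = total_binding(L_high) - blank  # recovers ~BMAX_MEM
```

Without the blank subtraction the plateau is inflated well above the true membrane Bmax, and the extra low-affinity curvature could be mistaken for a second membrane site, as the abstract notes.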

  10. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  11. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA. For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.
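
A minimal way to see what a "vocal-tract-related filter" is computationally: linear prediction recovers an all-pole filter from a signal. This is only a sketch of the source-filter decomposition step (the paper estimates filter PDFs from mixed speech with a far more elaborate hybrid model); the AR(2) test signal and its coefficients are assumptions for illustration.

```python
import numpy as np

def lpc(x, order):
    """Estimate an all-pole (vocal-tract-like) filter by the autocorrelation
    method with the Levinson-Durbin recursion. Returns [1, a1, ..., a_order]."""
    r = np.array([x[: len(x) - k] @ x[k:] for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0], err = 1.0, r[0]
    for i in range(1, order + 1):
        acc = r[i] + a[1:i] @ r[1:i][::-1]
        k = -acc / err
        prev = a[1:i].copy()
        a[1:i] = prev + k * prev[::-1]
        a[i] = k
        err *= 1.0 - k * k
    return a

# Synthetic "speech": white-noise excitation driving a known 2-pole filter,
# y[n] = 0.9*y[n-1] - 0.5*y[n-2] + e[n], i.e. A(z) = 1 - 0.9 z^-1 + 0.5 z^-2.
rng = np.random.default_rng(1)
e = rng.normal(size=20000)
y = np.zeros_like(e)
for n in range(2, len(y)):
    y[n] = 0.9 * y[n - 1] - 0.5 * y[n - 2] + e[n]

a = lpc(y, order=2)   # expect approximately [1, -0.9, 0.5]
```

Separating the excitation (here, the white noise) from the estimated filter coefficients is the same decomposition the abstract applies per speaker before modeling the filters' log-spectral PDFs.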

  12. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer, the ideal auditory community had the shape of a village. Schafer's work with the World Soundscape Project in the 70s represents an attempt to interpret contemporary environments through musical and auditory...

  13. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes.
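
The statistical structure such listeners learn is usually formalized as sound-to-sound transition probabilities: high within an embedded pattern, low across pattern boundaries. The triplet "words" and sequence length below are invented for illustration, in the spirit of classic statistical-learning streams rather than this study's actual stimuli.

```python
from collections import Counter
import random

# Hypothetical triplet "words": within a word, each sound fully predicts the next.
words = [("A", "B", "C"), ("D", "E", "F"), ("G", "H", "I")]

random.seed(0)
stream = [s for _ in range(200) for s in random.choice(words)]

# Transition probability P(next | current) estimated from bigram counts.
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

tp_within = tp[("A", "B")]      # inside a word: always 1.0
tp_across = tp[("C", "D")]      # across a word boundary: roughly 1/3
```

The dip in transition probability at word boundaries is the cue that supports segmentation; a pseudo-random control sequence flattens these probabilities while keeping the sound inventory identical.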

  14. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  15. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. 
These results not only show the unique

  16. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina

    2014-02-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians' subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model in which to study mechanisms of experience-dependent changes in human auditory function.

  17. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for the sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  18. Assessment of intermittent UMTS electromagnetic field effects on blood circulation in the human auditory region using a near-infrared system.

    Science.gov (United States)

    Spichtig, Sonja; Scholkmann, Felix; Chin, Lydia; Lehmann, Hugo; Wolf, Martin

    2012-01-01

    The aim of the present study was to assess the potential effects of intermittent Universal Mobile Telecommunications System electromagnetic fields (UMTS-EMF) on blood circulation in the human head (auditory region) using near-infrared spectroscopy (NIRS) on two different timescales: short-term (effects occurring within 80 s) and medium-term (effects occurring within 80 s to 30 min). For the first time, we measured potential immediate effects of UMTS-EMF in real-time without any interference during exposure. Three different exposures (sham, 0.18 W/kg, and 1.8 W/kg) were applied in a controlled, randomized, crossover, and double-blind paradigm on 16 healthy volunteers. In addition to oxy-, deoxy-, and total haemoglobin concentrations ([O2Hb], [HHb], and [tHb], respectively), the heart rate (HR), subjective well-being, tiredness, and counting speed were recorded. During exposure to 0.18 W/kg, we found a significant short-term increase in Δ[O2Hb] and Δ[tHb], which is small (≈17%) compared to a functional brain activation. A significant decrease in the medium-term response of Δ[HHb] at 0.18 and 1.8 W/kg exposures was detected, which is in the range of physiological fluctuations. The medium-term ΔHR was significantly higher (+1.84 bpm) at 1.8 W/kg than for sham exposure. The other parameters showed no significant effects. Our results suggest that intermittent exposure to UMTS-EMF has small short- and medium-term effects on cerebral blood circulation and HR. Copyright © 2011 Wiley Periodicals, Inc.

  19. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Nielsen, Hanne Mørck; Rassing, M R

    1999-01-01

    The aim of the present study was to evaluate the TR146 cell culture model as an in vitro model of human buccal epithelium with respect to the permeability enhancement by different pH values, different osmolality values or bile salts. For this purpose, the increase in the apparent permeability (P...... for efficacy studies and mechanistic studies of enhancers with potential use in human buccal drug delivery....

  20. Statistical representation of sound textures in the impaired auditory system

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; Dau, Torsten

    2015-01-01

    Many challenges exist when it comes to understanding and compensating for hearing impairment. Traditional methods, such as pure tone audiometry and speech intelligibility tests, offer insight into the deficiencies of a hearingimpaired listener, but can only partially reveal the mechanisms...... that underlie the hearing loss. An alternative approach is to investigate the statistical representation of sounds for hearing-impaired listeners along the auditory pathway. Using models of the auditory periphery and sound synthesis, we aimed to probe hearing impaired perception for sound textures – temporally...... homogenous sounds such as rain, birds, or fire. It has been suggested that sound texture perception is mediated by time-averaged statistics measured from early auditory representations (McDermott et al., 2013). Changes to early auditory processing, such as broader “peripheral” filters or reduced compression...

  1. Comparison of tone burst evoked auditory brainstem responses with different filter settings for referral infants after hearing screening

    Institute of Scientific and Technical Information of China (English)

    刁文雯; 倪道凤; 李奉荣; 商莹莹

    2011-01-01

    Objective Auditory brainstem responses (ABR) evoked by tone bursts are an important method of hearing assessment in referral infants after hearing screening. The present study compared the thresholds of tone burst ABR with filter settings of 30-1500 Hz and 30-3000 Hz at each frequency, characterized the ABR thresholds and waveforms under the two filter settings and their effect on waveform judgement, in order to select more optimal frequency-specific ABR test parameters. Methods Thresholds with filter settings of 30-1500 Hz and 30-3000 Hz in children aged 2-33 months were recorded by click and tone burst ABR using a SmartEP auditory evoked potential system (Intelligent Hearing Systems, USA). A total of 18 patients (8 male, 10 female; 22 ears) were included. Results The thresholds of tone burst ABR with filter settings of 30-3000 Hz were higher than those with filter settings of 30-1500 Hz. A significant difference was detected at 0.5 kHz and 2.0 kHz (t values were 2.238 and 2.217, P < 0.05); no significant difference between the two filter settings was detected at the remaining frequencies. The ABR waveform with filter settings of 30-1500 Hz was smoother than that with filter settings of 30-3000 Hz at the same stimulus intensity; the response curve of the latter showed jagged, small interfering waves. Conclusions The filter setting of 30-1500 Hz may be a more optimal parameter for frequency-specific ABR, improving the accuracy of infants' hearing assessment.
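
The two recording bandwidths compared in this record can be reproduced with a standard zero-phase Butterworth band-pass. The sampling rate, filter order and test tones below are assumptions for illustration, not the SmartEP system's actual settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def abr_bandpass(x, fs, lo=30.0, hi=1500.0, order=4):
    """Zero-phase Butterworth band-pass, e.g. the 30-1500 Hz ABR setting."""
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 20000.0
t = np.arange(int(fs)) / fs                   # 1 s of signal
in_band = np.sin(2 * np.pi * 500 * t)         # 500 Hz: inside both passbands
out_band = np.sin(2 * np.pi * 5000 * t)       # 5 kHz: outside the 30-1500 Hz band

y = abr_bandpass(in_band + out_band, fs)

# Compare component amplitudes after filtering via the FFT.
spec = np.abs(np.fft.rfft(y)) / (len(y) / 2)
freqs = np.fft.rfftfreq(len(y), 1 / fs)
amp_500 = spec[np.argmin(np.abs(freqs - 500))]    # passed nearly unchanged
amp_5000 = spec[np.argmin(np.abs(freqs - 5000))]  # strongly attenuated
```

Narrowing the upper cutoff from 3000 Hz to 1500 Hz removes more high-frequency interference, which is consistent with the smoother waveforms the study reports for the 30-1500 Hz setting.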

  2. Psychology of auditory perception.

    Science.gov (United States)

    Lotto, Andrew; Holt, Lori

    2011-09-01

    Audition is often treated as a 'secondary' sensory system behind vision in the study of cognitive science. In this review, we focus on three seemingly simple perceptual tasks to demonstrate the complexity of perceptual-cognitive processing involved in everyday audition. After providing a short overview of the characteristics of sound and their neural encoding, we present a description of the perceptual task of segregating multiple sound events that are mixed together in the signal reaching the ears. Then, we discuss the ability to localize the sound source in the environment. Finally, we provide some data and theory on how listeners categorize complex sounds, such as speech. In particular, we present research on how listeners weigh multiple acoustic cues in making a categorization decision. One conclusion of this review is that it is time for auditory cognitive science to be developed to match what has been done in vision in order for us to better understand how humans communicate with speech and music. WIREs Cogn Sci 2011, 2, 479-489. DOI: 10.1002/wcs.123. For further resources related to this article, please visit the WIREs website. Copyright © 2010 John Wiley & Sons, Ltd.

  3. Modulation of human global/local perception by low spatial frequency filtering

    Institute of Scientific and Technical Information of China (English)

    HAN Shihui; J. A. Weaver; S. O. Murray; KANG Xiaojian; E. W. Yund; D. L. Woods

    2003-01-01

    We investigated the effect of low spatial frequency (SF) filtering on neural substrates underlying global and local processing in the peripheral vision by measuring hemodynamic responses with functional magnetic resonance imaging (fMRI). Subjects identified global or local shapes of compound letters that were either broadband in spatial-frequency spectrum or contrast balanced (CB) to remove low SFs, and were displayed randomly in the left or right visual field. Attention to both broadband and CB global shapes generated stronger activation over the medial occipital cortex relative to local attention. Lateralized activations in association with global processing were observed over the right temporal-parietal junction for broadband stimuli, but over the right fusiform gyrus for CB stimuli. Attention to CB local shapes resulted in activations in the medial frontal cortex, bilateral inferior frontal and superior temporal cortices. The results are discussed in terms of the competition between global and local information in determining brain activations in association with global/local processing of compound stimuli.

  4. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously sounding objects, and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the high-voice superiority effect. Simulations show that both place and temporal AN coding schemes predict a high-voice superiority well across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  5. Auditory Responses of Infants

    Science.gov (United States)

    Watrous, Betty Springer; And Others

    1975-01-01

    Forty infants, 3- to 12-months-old, participated in a study designed to differentiate the auditory response characteristics of normally developing infants in the age ranges 3 - 5 months, 6 - 8 months, and 9 - 12 months. (Author)

  6. Filter arrays

    Energy Technology Data Exchange (ETDEWEB)

    Page, Ralph H.; Doty, Patrick F.

    2017-08-01

    The various technologies presented herein relate to a tiled filter array that can be used in connection with performance of spatial sampling of optical signals. The filter array comprises filter tiles, wherein a first plurality of filter tiles are formed from a first material, the first material being configured such that only photons having wavelengths in a first wavelength band pass therethrough. A second plurality of filter tiles is formed from a second material, the second material being configured such that only photons having wavelengths in a second wavelength band pass therethrough. The first plurality of filter tiles and the second plurality of filter tiles can be interspersed to form the filter array comprising an alternating arrangement of first filter tiles and second filter tiles.
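The interspersed two-material layout described above can be sketched as a checkerboard index array; the tile labels and array dimensions below are illustrative, not taken from the patent:

```python
import numpy as np

# Sketch of an alternating tiled filter array: value 1 marks a tile of the
# first material (first wavelength band), value 2 a tile of the second
# material (second wavelength band). Dimensions are arbitrary.
def tiled_filter_array(rows, cols):
    r, c = np.indices((rows, cols))
    return np.where((r + c) % 2 == 0, 1, 2)

layout = tiled_filter_array(4, 4)
print(layout)
```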

  7. Investigation of potential effects of cellular phones on human auditory function by means of distortion product otoacoustic emissions.

    Science.gov (United States)

    Janssen, Thomas; Boege, Paul; von Mikusch-Buchberg, Jutta; Raczek, Johannes

    2005-03-01

    Outer hair cells (OHC) are thought to act like piezoelectric transducers that amplify low sounds and hence enable the ear's exquisite sensitivity. Distortion product otoacoustic emissions (DPOAE) reflect OHC function. The present study investigated potential effects of electromagnetic fields (EMF) of GSM (Global System for Mobile Communication) cellular phones on OHCs by means of DPOAEs. DPOAE measurements were performed during exposure, i.e., between consecutive GSM signal pulses, and during sham exposure (no EMF) in 28 normally hearing subjects at tone frequencies around 4 kHz. For a reliable DPOAE measurement, a 900-MHz GSM-like signal was used where transmission pause was increased from 4.034 ms (GSM standard) to 24.204 ms. Peak transmitter power was set to 20 W, corresponding to a specific absorption rate (SAR) of 0.1 W/kg. No significant change in the DPOAE level in response to the EMF exposure was found. However, when undesired side effects on DPOAEs were compensated, in some subjects an extremely small EMF-exposure-correlated change in the DPOAE level (< 1 dB) was observed. In view of the very large dynamic range of hearing in humans (120 dB), it is suggested that this observation is physiologically irrelevant.

  8. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Nielsen, Hanne Mørck; Verhoef, J C; Ponec, M

    1999-01-01

    The aim of the present study was to characterize the TR146 cell culture model as an in vitro model of human buccal epithelium with respect to the permeability of test substances with different molecular weights (M(w)). For this purpose, the apparent permeability (P(app)) values for mannitol...... and for fluorescein isothiocyanate (FITC)-labelled dextrans (FD) with various M(w) (4000-40000) were compared to the P(app) values obtained using porcine buccal mucosa as an in vitro model of the human buccal epithelium. The effect of 10 mM sodium glycocholate (GC) on the P(app) values was examined. To identify...... cell culture model is a suitable in vitro model for mechanistic permeability studies of human buccal drug permeability....

  9. TR146 cells grown on filters as a model of human buccal epithelium

    DEFF Research Database (Denmark)

    Mørck Nielsen, H; Rømer Rassing, M; Nielsen, Hanne Mørck

    2000-01-01

    The objective of the present study was to characterise the TR146 cell culture model as an in vitro model of human buccal mucosa with respect to the enzyme activity in the tissues. For this purpose, the contents of aminopeptidase, carboxypeptidase and esterase in homogenate supernatants of the TR146...... cell culture model, and human and porcine buccal epithelium were compared. The esterase activity in the intact cell culture model and in the porcine buccal mucosa was compared. Further, the TR146 cell culture model was used to study the permeability rate and metabolism of leu-enkephalin. The activity...... of the three enzymes in the TR146 homogenate supernatants was in the same range as the activity in homogenate supernatants of human buccal epithelium. In the TR146 cell culture model, the activity of aminopeptidase (13.70+/-2.10 nmol/min per mg protein) was approx. four times the activity of carboxypeptidase...

  10. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200 and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes at a 5 Hz repetition rate were significantly larger than at 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
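A "repeating frozen noise" stimulus of the kind used here can be built by tiling a single fixed noise segment whose length is the reciprocal of the repetition rate. A minimal sketch; the sampling rate, duration and seed are assumptions, not the study's parameters:

```python
import numpy as np

def frozen_noise(duration, rep_rate, fs, seed=0):
    """Periodic 'frozen' noise: one fixed noise segment of length
    1/rep_rate seconds, tiled to the requested duration."""
    rng = np.random.default_rng(seed)
    seg = rng.standard_normal(int(fs / rep_rate))  # the frozen segment
    n = int(duration * fs)
    reps = int(np.ceil(n / len(seg)))
    return np.tile(seg, reps)[:n]

# One second of 5 Hz repeating frozen noise at 44.1 kHz (illustrative).
stim = frozen_noise(duration=1.0, rep_rate=5.0, fs=44100)
```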

  11. Crux vena cava filter.

    Science.gov (United States)

    Murphy, Erin H; Johnson, Eric D; Kopchok, George E; Fogarty, Thomas J; Arko, Frank R

    2009-09-01

    Inferior vena cava filters are widely accepted for pulmonary embolic prophylaxis in high-risk patients with contraindications to anticoagulation. While long-term complications have been associated with permanent filters, retrievable filters are now available and have resulted in the rapid expansion of this technology. Nonetheless, complications are still reported with optional filters. Furthermore, device tilting and thrombus load may prevent retrieval in up to 30% of patients, thereby eliminating the benefits of this technology. The Crux vena cava filter is a novel, self-centering, low-profile filter that is designed for ease of delivery, retrievability and improved efficacy while limiting fatigue-related device complications. This device has been proven safe and user-friendly in an ovine model and has recently been implanted in human subjects.

  12. CrowdFilter

    DEFF Research Database (Denmark)

    Mortensen, Michael Lind; Wallace, Byron C.; Kraska, Tim

    Multi-criteria filtering of mixed open/closed-world data is a time-consuming task, requiring significant manual effort when latent open-world attributes are present. In this work we introduce a novel open-world filtering framework, CrowdFilter, enabling automatic UI generation and label elicitation for complex multi-criteria search problems through crowdsourcing. The CrowdFilter system is capable of supporting both criteria-level labels and n-gram rationales, capturing the human decision making process behind each filtering choice. Using the data provided through CrowdFilter we also introduce a novel multi-criteria active learning method, capable of incorporating labels and n-gram rationales per inclusion criterion, and thus capable of determining both clear includes/excludes as well as complex borderline cases. By incorporating the active learning approach into the elicitation process of Crowd...

  13. Response recovery in the locust auditory pathway.

    Science.gov (United States)

    Wirtssohn, Sarah; Ronacher, Bernhard

    2016-01-01

    Temporal resolution and the time courses of recovery from acute adaptation of neurons in the auditory pathway of the grasshopper Locusta migratoria were investigated with a response recovery paradigm. We stimulated with a series of single click and click pair stimuli while performing intracellular recordings from neurons at three processing stages: receptors and first and second order interneurons. The response to the second click was expressed relative to the single click response. This allowed us to uncover the basic temporal resolution of these neurons. The effect of adaptation increased with processing layer. While neurons in the auditory periphery displayed a steady response recovery after a short initial adaptation, many interneurons showed nonlinear effects: most prominently, a long-lasting suppression of the response to the second click in a pair, as well as a gain in response if a click was preceded by another click a few milliseconds earlier. Our results reveal a distributed temporal filtering of input at an early auditory processing stage. This set of specified filters is very likely homologous across grasshopper species and thus forms the neurophysiological basis for extracting relevant information from a variety of different temporal signals. Interestingly, in terms of spike timing precision, neurons at all three processing layers recovered very fast, within 20 ms. Spike waveform analysis of several neuron types did not sufficiently explain the response recovery profiles implemented in these neurons, indicating that temporal resolution in neurons located at several processing layers of the auditory pathway is not necessarily limited by the spike duration and refractory period.
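The response-recovery paradigm expresses the response to the second click of a pair relative to the single-click response, as a function of inter-click interval. A toy illustration with invented spike counts (none of these numbers are from the study):

```python
import numpy as np

# Hypothetical data: inter-click intervals and spike counts to the second
# click of each pair, normalised by the single-click response.
intervals_ms = np.array([2, 5, 10, 20, 50, 100])
single_click = 12.0                                   # spikes, made up
pair_second = np.array([3.0, 5.0, 8.0, 11.0, 12.0, 12.0])
recovery = pair_second / single_click                 # 1.0 = full recovery
print(dict(zip(intervals_ms.tolist(), recovery.round(2).tolist())))
```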

  14. Determination of parabens and benzophenone-type UV filters in human placenta. First description of the existence of benzyl paraben and benzophenone-4.

    Science.gov (United States)

    Valle-Sistac, Jennifer; Molins-Delgado, Daniel; Díaz, Marta; Ibáñez, Lourdes; Barceló, Damià; Silvia Díaz-Cruz, M

    2016-03-01

    UV filters and parabens (PBs) are chemicals used in daily personal care and hygiene products to protect materials and humans from the adverse effects of UV radiation and to preserve the integrity of the formulation, respectively. Several studies highlight their widespread environmental occurrence and endocrine disrupting effects. However, little is known about human exposure to these compounds. The objective of this study was to investigate the exposure of human embryos and foetuses to endocrine disrupting UV filters and PBs. Placentas from volunteer mothers in Barcelona were collected at delivery after informed, written consent by the pregnant women. UV filters and parabens were analysed by liquid chromatography-tandem mass spectrometry. The excellent performance of the method allowed measuring the target compounds in human placental tissue at low ng/g fresh weight level. The detection frequency of the selected compounds was in the range 17-100%. Benzophenone-1, methyl paraben, butyl paraben and benzyl paraben were detected in all samples. The highest measured concentration corresponded to methyl paraben, 11.77 ng/g fresh weight. Reported concentrations of benzophenone-4 and benzyl paraben constitute the first evidence about their accumulation in placenta. The results obtained corroborate that foetuses are exposed to a wide diversity of UV filters and PBs via the placenta. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. A new technique to characterize CT scanner bow-tie filter attenuation and applications in human cadaver dosimetry simulations.

    Science.gov (United States)

    Li, Xinhua; Shi, Jim Q; Zhang, Da; Singh, Sarabjeet; Padole, Atul; Otrakji, Alexi; Kalra, Mannudeep K; Xu, X George; Liu, Bob

    2015-11-01

    To present a noninvasive technique for directly measuring the CT bow-tie filter attenuation with a linear array x-ray detector. A scintillator based x-ray detector of 384 pixels, 307 mm active length, and fast data acquisition (model X-Scan 0.8c4-307, Detection Technology, FI-91100 Ii, Finland) was used to simultaneously detect radiation levels across a scan field-of-view. The sampling time was as short as 0.24 ms. To measure the body bow-tie attenuation on a GE Lightspeed Pro 16 CT scanner, the x-ray tube was parked at the 12 o'clock position, and the detector was centered in the scan field at the isocenter height. Two radiation exposures were made with and without the bow-tie in the beam path. Each readout signal was corrected for the detector background offset and signal-level related nonlinear gain, and the ratio of the two exposures gave the bow-tie attenuation. The results were used in the GEANT4-based simulations of the point doses measured using six thimble chambers placed in a human cadaver with abdomen/pelvis CT scans at 100 or 120 kV, helical pitch at 1.375, constant or variable tube current, and distinct x-ray tube starting angles. Absolute attenuation was measured with the body bow-tie scanned at 80-140 kV. For 24 doses measured in six organs of the cadaver, the median or maximum difference between the simulation results and the measurements on the CT scanner was 8.9% or 25.9%, respectively. The described method allows fast and accurate bow-tie filter characterization.
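The core of the measurement is an offset-corrected exposure ratio. A sketch with synthetic detector counts; the signal-level-dependent nonlinear gain correction described above is omitted for brevity, and all numbers are invented:

```python
import numpy as np

def bowtie_transmission(reading_bowtie, reading_open, offset):
    """Per-pixel bow-tie transmission: background-offset-corrected ratio
    of the exposure with the filter to the exposure without it.
    (The nonlinear gain correction from the paper is not modelled here.)"""
    return (reading_bowtie - offset) / (reading_open - offset)

# Synthetic 5-pixel profile: offset 10 counts, open-beam 110 counts,
# bow-tie readings highest at the center of the field-of-view.
bowtie = np.array([60.0, 70.0, 85.0, 70.0, 60.0])
open_beam = np.full(5, 110.0)
trans = bowtie_transmission(bowtie, open_beam, offset=10.0)
print(trans)
```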

  16. A new technique to characterize CT scanner bow-tie filter attenuation and applications in human cadaver dosimetry simulations

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xinhua; Shi, Jim Q.; Zhang, Da; Singh, Sarabjeet; Padole, Atul; Otrakji, Alexi; Kalra, Mannudeep K.; Liu, Bob, E-mail: bliu7@mgh.harvard.edu [Department of Radiology, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Xu, X. George [Nuclear Engineering Program, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)

    2015-11-15

    Purpose: To present a noninvasive technique for directly measuring the CT bow-tie filter attenuation with a linear array x-ray detector. Methods: A scintillator based x-ray detector of 384 pixels, 307 mm active length, and fast data acquisition (model X-Scan 0.8c4-307, Detection Technology, FI-91100 Ii, Finland) was used to simultaneously detect radiation levels across a scan field-of-view. The sampling time was as short as 0.24 ms. To measure the body bow-tie attenuation on a GE Lightspeed Pro 16 CT scanner, the x-ray tube was parked at the 12 o’clock position, and the detector was centered in the scan field at the isocenter height. Two radiation exposures were made with and without the bow-tie in the beam path. Each readout signal was corrected for the detector background offset and signal-level related nonlinear gain, and the ratio of the two exposures gave the bow-tie attenuation. The results were used in the GEANT4 based simulations of the point doses measured using six thimble chambers placed in a human cadaver with abdomen/pelvis CT scans at 100 or 120 kV, helical pitch at 1.375, constant or variable tube current, and distinct x-ray tube starting angles. Results: Absolute attenuation was measured with the body bow-tie scanned at 80–140 kV. For 24 doses measured in six organs of the cadaver, the median or maximum difference between the simulation results and the measurements on the CT scanner was 8.9% or 25.9%, respectively. Conclusions: The described method allows fast and accurate bow-tie filter characterization.

  17. Auditory hallucinations induced by trazodone.

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-04-03

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients.

  18. Cooperative dynamics in auditory brain response

    CERN Document Server

    Kwapien, J; Liu, L C; Ioannides, A A

    1998-01-01

    Simultaneous estimates of the activity in the left and right auditory cortex of five normal human subjects were extracted from multichannel magnetoencephalography recordings. Left, right and binaural stimulation were used, in separate runs, for each subject. The resulting time series of left and right auditory cortex activity were analysed using the concept of mutual information. The analysis constitutes an objective method to address the nature of inter-hemispheric correlations in response to auditory stimulation. The results provide clear evidence for the occurrence of such correlations mediated by direct information transport, with clear laterality effects: as a rule, the contralateral hemisphere leads by 10-20 ms, as can be seen in the average signal. The strength of the inter-hemispheric coupling, which cannot be extracted from the average data, is found to be highly variable from subject to subject, but remarkably stable for each subject.
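Mutual information between two time series can be estimated, in the simplest case, from a joint histogram. A generic sketch of that idea; the bin count and test signals are illustrative, and the paper's estimator may well differ:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram plug-in estimate of mutual information (bits) between
    two time series. Always >= 0; biased upward for small samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
x = rng.standard_normal(10000)
y = rng.standard_normal(10000)
mi_self = mutual_information(x, x)    # high: x fully determines itself
mi_indep = mutual_information(x, y)   # near zero: independent signals
```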

  19. A map of periodicity orthogonal to frequency representation in the cat auditory cortex

    Directory of Open Access Journals (Sweden)

    Gerald Langner

    2009-11-01

    Harmonic sounds, such as voiced speech sounds and many animal communication signals, are characterized by a pitch related to the periodicity of their envelopes. While frequency information is extracted by mechanical filtering in the cochlea, periodicity information is analyzed by temporal filter mechanisms in the brainstem. In the mammalian auditory midbrain, envelope periodicity is represented in maps orthogonal to the representation of sound frequency. However, how periodicity is represented across the cortical surface of primary auditory cortex remains controversial. Using optical recording of intrinsic signals, we here demonstrate that a periodicity map exists in primary auditory cortex (AI) of the cat. While pure tone stimulation confirmed the well-known frequency gradient along the rostro-caudal axis of AI, stimulation with harmonic sounds revealed segregated bands of activation, indicating spatially localized preferences for specific periodicities along a dorso-ventral axis, nearly orthogonal to the tonotopic gradient. Analysis of the response locations revealed an average gradient of -100° ± 10° for the periodotopic and -12° ± 18° for the tonotopic map, resulting in a mean angle difference of 88°. The gradients were 0.65 ± 0.08 mm/octave for periodotopy and 1.07 ± 0.16 mm/octave for tonotopy, indicating that more cortical territory is devoted to the representation of an octave along the tonotopic than along the periodotopic gradient. Our results suggest that the fundamental importance of pitch, as evident in human perception, is also reflected in the layout of cortical maps, and that the orthogonal spatial organization of frequency and periodicity might be a more general cortical organization principle.

  20. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition.

  1. 3D Shape-Encoded Particle Filter for Object Tracking and Its Application to Human Body Tracking

    OpenAIRE

    Chellappa, R; H. Moon

    2008-01-01

    We present a nonlinear state estimation approach using particle filters, for tracking objects whose approximate 3D shapes are known. The unnormalized conditional density for the solution to the nonlinear filtering problem leads to the Zakai equation, and is realized by the weights of the particles. The weight of a particle represents its geometric and temporal fit, which is computed bottom-up from the raw image using a shape-encoded filter. The main contribution of the paper is the d...
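The weight-based state estimation described above follows the generic bootstrap (sequential importance resampling) particle filter. A minimal 1-D sketch in which a Gaussian observation likelihood stands in for the paper's shape-encoded image fit; all parameters are illustrative:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise. The paper's filter replaces this likelihood with a
    shape-encoded image fit; this is a simplified illustration."""
    rng = np.random.default_rng(seed)
    particles = rng.standard_normal(n_particles)
    estimates = []
    for z in observations:
        # Propagate particles through the motion (process) model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight each particle by the observation likelihood.
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Posterior-mean state estimate from the weighted particles.
        estimates.append(np.sum(weights * particles))
        # Multinomial resampling to avoid weight degeneracy.
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles = particles[idx]
    return np.array(estimates)

# Track a constant state of 5.0; the estimate converges toward it.
estimates = bootstrap_particle_filter(np.full(50, 5.0))
```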

  2. Directed evolution of human heavy chain variable domain (VH) using in vivo protein fitness filter.

    Science.gov (United States)

    Kim, Dong-Sik; Song, Hyung-Nam; Nam, Hyo Jung; Kim, Sung-Geun; Park, Young-Seoub; Park, Jae-Chan; Woo, Eui-Jeon; Lim, Hyung-Kwon

    2014-01-01

    Human immunoglobulin heavy chain variable domains (VH) are promising scaffolds for antigen binding. However, VH is an unstable and aggregation-prone protein, hindering its use for therapeutic purposes. To evolve the VH domain, we performed in vivo protein solubility selection that linked antibiotic resistance to the protein folding quality control mechanism of the twin-arginine translocation pathway of E. coli. After screening a human germ-line VH library, 95% of the VH proteins obtained were identified as VH3 family members; one VH protein, MG2x1, stood out among separate clones expressing individual VH variants. On further screening of a combinatorial framework mutation library of MG2x1, we found a consistent bias toward substitution with tryptophan at positions 50 and 58 of VH. Comparison of the crystal structures of the VH variants revealed that these substitutions with bulky side-chain amino acids filled the cavity in the VH interface between the heavy and light chains of the Fab arrangement, along with an increased number of hydrogen bonds, decreased solvation energy, and increased negative charge. Accordingly, the engineered VH acquired increased thermodynamic stability, reversible folding, and soluble expression. The library built with this VH variant as a scaffold was validated, as most randomly selected VH clones were expressed in soluble form in E. coli regardless of the length of the combinatorial CDR. Furthermore, the non-aggregating nature of the selected VH meant it elicited no humoral response in mice, even when administered together with adjuvant. As a result, this selection provides an alternative directed evolution pathway for unstable proteins, distinct from conventional methods based on phage display.

  3. Design of digital filters for frequency weightings (A and C) required for risk assessments of workers exposed to noise.

    Science.gov (United States)

    Rimell, Andrew N; Mansfield, Neil J; Paddan, Gurmail S

    2015-01-01

    Many workers are exposed to noise in their industrial environment. Excessive noise exposure can cause health problems and therefore it is important that the worker's noise exposure is assessed. This may require measurement by an equipment manufacturer or the employer. Human exposure to noise may be measured using microphones; however, weighting filters are required to relate the measured sound pressure levels to the human response to an auditory stimulus. IEC 61672-1 and ANSI S1.43 describe suitable weighting filters, but do not explain how to implement them for digitally recorded sound pressure level data. By using the bilinear transform, it is possible to transform the analogue equations given in the standards into digital filters. This paper describes the implementation of the weighting filters as digital IIR (Infinite Impulse Response) filters and provides all the necessary formulae to calculate the filter coefficients directly for any sampling frequency. Thus, the filters in the standards can be implemented in any numerical processing software (such as a spreadsheet or a programming language running on a PC, mobile device or embedded system).
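As an illustration of the approach, the analogue A-weighting transfer function (built from the standard IEC 61672-1 pole frequencies) can be converted to digital IIR coefficients with the bilinear transform. This sketch uses SciPy's `bilinear` rather than the paper's closed-form coefficient formulae:

```python
import numpy as np
from scipy.signal import bilinear, freqz

def a_weighting_coeffs(fs):
    """Digital A-weighting IIR coefficients for sampling rate fs, by
    applying the bilinear transform to the analogue transfer function
    (standard A-weighting pole frequencies; not the paper's own code)."""
    f1, f2, f3, f4 = 20.598997, 107.65265, 737.86223, 12194.217
    a1000 = 1.9997  # gain (dB) normalising the response to 0 dB at 1 kHz
    num = [(2 * np.pi * f4) ** 2 * 10 ** (a1000 / 20.0), 0, 0, 0, 0]
    den = np.polymul([1, 4 * np.pi * f4, (2 * np.pi * f4) ** 2],
                     [1, 4 * np.pi * f1, (2 * np.pi * f1) ** 2])
    den = np.polymul(np.polymul(den, [1, 2 * np.pi * f3]),
                     [1, 2 * np.pi * f2])
    return bilinear(num, den, fs)

b, a = a_weighting_coeffs(48000.0)
# Check the response at 100 Hz and 1 kHz against the A-weighting table.
w, h = freqz(b, a, worN=[100.0, 1000.0], fs=48000.0)
gain_db = 20 * np.log10(np.abs(h))
print(gain_db)  # ~[-19.1, 0.0] dB
```

Note that the bilinear transform warps frequencies toward the Nyquist limit, so the digital filter deviates from the analogue target mainly at high frequencies; at moderate sampling rates this is usually acceptable for the A curve.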

  4. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Science.gov (United States)

    Boyer, Eric O.; Babayan, Bénédicte M.; Bevilacqua, Frédéric; Noisternig, Markus; Warusfel, Olivier; Roby-Brami, Agnes; Hanneton, Sylvain; Viaud-Delmon, Isabelle

    2013-01-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed toward unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in acoustic cues due to changes in head orientation for online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space. PMID:23626532

  5. From ear to hand: the role of the auditory-motor loop in pointing to an auditory source

    Directory of Open Access Journals (Sweden)

    Eric Olivier Boyer

    2013-04-01

    Studies of the nature of the neural mechanisms involved in goal-directed movements tend to concentrate on the role of vision. We present here an attempt to address the mechanisms whereby an auditory input is transformed into a motor command. The spatial and temporal organization of hand movements was studied in normal human subjects as they pointed towards unseen auditory targets located in a horizontal plane in front of them. Positions and movements of the hand were measured by a six-camera infrared tracking system. In one condition, we assessed the role of auditory information about target position in correcting the trajectory of the hand. To accomplish this, the duration of the target presentation was varied. In another condition, subjects received continuous auditory feedback of their hand movement while pointing to the auditory targets. Online auditory control of the direction of pointing movements was assessed by evaluating how subjects reacted to shifts in heard hand position. Localization errors were exacerbated by short durations of target presentation but not modified by auditory feedback of hand position. Long durations of target presentation gave rise to a higher level of accuracy and were accompanied by early automatic head-orienting movements consistently related to target direction. These results highlight the efficiency of auditory feedback processing in online motor control and suggest that the auditory system takes advantage of dynamic changes in acoustic cues due to changes in head orientation for online motor control. How to design an informative acoustic feedback needs to be carefully studied to demonstrate that auditory feedback of the hand could assist the monitoring of movements directed at objects in auditory space.

  6. Functional outcome of auditory implants in hearing loss.

    Science.gov (United States)

    Di Girolamo, S; Saccoccio, A; Giacomini, P G; Ottaviani, F

    2007-01-01

The auditory implant provides a new mechanism for hearing when a hearing aid is not enough. It is the only medical technology able to functionally restore a human sense, i.e., hearing. The auditory implant is very different from a hearing aid. Hearing aids amplify sound. Auditory implants compensate for damaged or non-working parts of the inner ear because they can directly stimulate the acoustic nerve. There are two principal types of auditory implant: the cochlear implant and the auditory brainstem implant. They have common basic characteristics but different applications. A cochlear implant attempts to replace a function lost by the cochlea, usually due to an absence of functioning hair cells; the auditory brainstem implant (ABI) is a modification of the cochlear implant, in which the electrode array is placed directly into the brain when the acoustic nerve is no longer able to carry the auditory signal. Different types of deaf or severely hearing-impaired patients choose auditory implants. Both children and adults can be candidates for implants. The best age for implantation is still being debated, but most children who receive implants are between 2 and 6 years old. Earlier implantation seems to give better outcomes thanks to neural plasticity. The decision to receive an implant should involve a discussion with many medical specialists and an experienced surgeon.

  7. Simultanagnosia does not affect processes of auditory Gestalt perception.

    Science.gov (United States)

    Rennig, Johannes; Bleyer, Anna Lena; Karnath, Hans-Otto

    2017-05-01

Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure to recognize a global visual Gestalt, like a visual scene or a complex object, consisting of local elements. In this study we investigated to what extent this deficit should be understood as specific to the visual domain or whether it reflects defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both of which sufficiently explain the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain evidently uses independent mechanisms for visual and auditory Gestalt perception. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Auditory place theory and frequency difference limen

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jialu

    2006-01-01

Since the place theory of hearing was proposed in the 19th century, a standing objection has been that the place code is far too coarse a mechanism to account for the finest frequency difference limens. This paper presents a place correlation model that takes full account of the energy distribution of a pure tone across neighboring auditory filter bands. The model, based on the place theory and on experimental results from psychophysical tuning curves of hearing, readily explains the finest difference limen for frequency (about 0.02, or 0.3%, at 1000 Hz). Using a standard 1/3-octave filter bank, the relationship was established between Δf, the offset of an input pure tone's frequency from the centre frequency of the K-th filter band, and ΔE, the output intensity difference between the K-th and (K+1)-th filters, in order to demonstrate the fine frequency-detection ability of the filter bank. The model can also be used to extract the fundamental frequency of speech and to measure the frequency of a pure tone precisely.
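The Δf-to-ΔE mapping can be illustrated with a toy filter bank. The sketch below is not the paper's model: the idealized flat-top band shape and the 60 dB/octave skirt slope are assumptions chosen only for illustration. It computes the output level difference between two adjacent 1/3-octave bands for a pure tone and shows that a small frequency shift produces a measurable level change.

```python
import math

def band_response_db(f, fc, bandwidth_octaves=1 / 3, slope_db_per_octave=60.0):
    # Idealized fractional-octave band filter: flat (0 dB) inside the
    # passband, rolling off linearly in dB outside it. This shape is an
    # illustrative assumption, not the standard-mandated filter response.
    half_bw = bandwidth_octaves / 2
    offset_oct = abs(math.log2(f / fc))
    if offset_oct <= half_bw:
        return 0.0
    return -slope_db_per_octave * (offset_oct - half_bw)

def level_difference(f, fc_k):
    # Delta E: output level difference (dB) between band K and band K+1
    # for a pure tone at frequency f, where fc_{K+1} = fc_K * 2^(1/3).
    fc_next = fc_k * 2 ** (1 / 3)
    return band_response_db(f, fc_k) - band_response_db(f, fc_next)

# Delta E falls monotonically as the tone moves from band K toward band K+1,
# so a small frequency shift (Delta f) maps to a measurable level change.
fc = 1000.0
for f in (1000.0, 1005.0, 1010.0):
    print(f, round(level_difference(f, fc), 2))
```

Because ΔE varies smoothly with Δf, comparing levels across adjacent bands recovers frequency offsets far smaller than the bandwidth of any single filter, which is the intuition behind the place correlation model.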

  9. Evidence of a visual-to-auditory cross-modal sensory gating phenomenon as reflected by the human P50 event-related brain potential modulation.

    Science.gov (United States)

    Lebib, Riadh; Papo, David; de Bode, Stella; Baudonnière, Pierre Marie

    2003-05-08

We investigated the existence of cross-modal sensory gating as reflected by the modulation of an early electrophysiological index, the P50 component. We analyzed event-related brain potentials elicited by audiovisual speech stimuli manipulated along two dimensions: congruency and discriminability. The results showed that the P50 was attenuated when visual and auditory speech information were redundant (i.e. congruent), in comparison with the same event-related potential component elicited by discrepant audiovisual dubbing. When hard to discriminate, however, bimodal incongruent speech stimuli elicited a similar pattern of P50 attenuation. We concluded that a visual-to-auditory cross-modal sensory gating phenomenon exists. These results corroborate previous findings revealing a very early audiovisual interaction during speech perception. Finally, we postulated that the sensory gating system includes a cross-modal dimension.

  10. [A comparison of time resolution among auditory, tactile and promontory electrical stimulation--superiority of cochlear implants as human communication aids].

    Science.gov (United States)

    Matsushima, J; Kumagai, M; Harada, C; Takahashi, K; Inuyama, Y; Ifukube, T

    1992-09-01

    Our previous reports showed that second formant information, using a speech coding method, could be transmitted through an electrode on the promontory. However, second formant information can also be transmitted by tactile stimulation. Therefore, to find out whether electrical stimulation of the auditory nerve would be superior to tactile stimulation for our speech coding method, the time resolutions of the two modes of stimulation were compared. The results showed that the time resolution of electrical promontory stimulation was three times better than the time resolution of tactile stimulation of the finger. This indicates that electrical stimulation of the auditory nerve is much better for our speech coding method than tactile stimulation of the finger.

  11. Optimal filtering

    CERN Document Server

    Anderson, Brian D O

    2005-01-01

This graduate-level text augments and extends beyond undergraduate studies of signal processing, particularly in regard to communication systems and digital filtering theory. Vital for students in the fields of control and communications, its contents are also relevant to students in such diverse areas as statistics, economics, bioengineering, and operations research. Topics include filtering, linear systems, and estimation; the discrete-time Kalman filter; time-invariant filters; properties of Kalman filters; computational aspects; and smoothing of discrete-time signals. Additional subjects e
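In the scalar case, the discrete-time Kalman filter covered by the book reduces to a few lines. The sketch below is a generic textbook formulation, not taken from this text; the random-walk state model and the noise variances `q` and `r` are illustrative assumptions.

```python
def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    # Scalar discrete-time Kalman filter for a random-walk state:
    #   x_k = x_{k-1} + w_k   (process noise variance q)
    #   z_k = x_k + v_k       (measurement noise variance r)
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: the state model leaves x unchanged; uncertainty grows by q.
        p = p + q
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + r)
        x = x + k * (z - x)
        p = (1 - k) * p
        estimates.append(x)
    return estimates

# Noisy measurements of a constant level near 5.0: the estimate converges
# toward 5.0 while the gain (and hence the jitter) shrinks over time.
zs = [4.8, 5.3, 5.1, 4.9, 5.2, 5.0, 4.95, 5.05]
est = kalman_1d(zs)
```

The gain `k` starts large (the prior `p0` is uncertain) and decays as evidence accumulates, which is the qualitative behavior the book derives for time-invariant filters as well.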

  12. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  13. Comparison between human and model observer performance in low-contrast detection tasks in CT images: application to images reconstructed with filtered back projection and iterative algorithms

    Science.gov (United States)

    Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H

    2014-01-01

    Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275
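The decision rule of a matched-filter model observer can be sketched compactly. The toy below is illustrative only: the flat 16-sample "disc" template, the white-noise background, and the omission of the pre-whitening and eye-filter stages are all simplifying assumptions not taken from the study. It estimates proportion correct (PC) in a two-alternative forced choice task by correlating each alternative with the known signal template and picking the larger response.

```python
import random

def matched_filter_decision(img_a, img_b, template):
    # Non-prewhitening matched filter (toy form): correlate each
    # alternative with the signal template, choose the larger response.
    resp_a = sum(t * x for t, x in zip(template, img_a))
    resp_b = sum(t * x for t, x in zip(template, img_b))
    return 0 if resp_a >= resp_b else 1

def proportion_correct(contrast, noise_sd=1.0, n_trials=2000, seed=7):
    # 2AFC: one interval contains signal + noise, the other noise only.
    rng = random.Random(seed)
    template = [1.0] * 16  # flat "disc" profile, a hypothetical stand-in
    correct = 0
    for _ in range(n_trials):
        noise_only = [rng.gauss(0.0, noise_sd) for _ in template]
        with_signal = [contrast * t + rng.gauss(0.0, noise_sd) for t in template]
        if rng.random() < 0.5:
            correct += matched_filter_decision(with_signal, noise_only, template) == 0
        else:
            correct += matched_filter_decision(noise_only, with_signal, template) == 1
    return correct / n_trials
```

At zero contrast PC sits near the 0.5 guessing floor and rises toward 1.0 with contrast; in the study, an efficiency factor η was then fitted to scale such model PC values onto the human results.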

  14. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new si

  15. Virtual Auditory Displays

    Science.gov (United States)

    2000-01-01

Keywords: timbre, intensity, distance, room modeling, radio communication. Virtual Environments Handbook, Chapter 4: Virtual Auditory Displays, Russell D... For the musical note "A" as a pure sinusoid, there will be 440 condensations and rarefactions per second. The distance between two adjacent condensations or... The perceptual correlates of frequency, intensity, and complexity are pitch, loudness, and timbre, respectively. This distinction between physical and perceptual measures of sound properties is...

  16. A loudspeaker-based room auralization system for auditory research

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    to systematically study the signal processing of realistic sounds by normal-hearing and hearing-impaired listeners, a flexible, reproducible and fully controllable auditory environment is needed. A loudspeaker-based room auralization (LoRA) system was developed in this thesis to provide virtual auditory...... environments (VAEs) with an array of loudspeakers. The LoRA system combines state-of-the-art acoustic room models with sound-field reproduction techniques. Limitations of these two techniques were taken into consideration together with the limitations of the human auditory system to localize sounds...

  17. The neglected neglect: auditory neglect.

    Science.gov (United States)

    Gokhale, Sankalp; Lahoti, Sourabh; Caplan, Louis R

    2013-08-01

    Whereas visual and somatosensory forms of neglect are commonly recognized by clinicians, auditory neglect is often not assessed and therefore neglected. The auditory cortical processing system can be functionally classified into 2 distinct pathways. These 2 distinct functional pathways deal with recognition of sound ("what" pathway) and the directional attributes of the sound ("where" pathway). Lesions of higher auditory pathways produce distinct clinical features. Clinical bedside evaluation of auditory neglect is often difficult because of coexisting neurological deficits and the binaural nature of auditory inputs. In addition, auditory neglect and auditory extinction may show varying degrees of overlap, which makes the assessment even harder. Shielding one ear from the other as well as separating the ear from space is therefore critical for accurate assessment of auditory neglect. This can be achieved by use of specialized auditory tests (dichotic tasks and sound localization tests) for accurate interpretation of deficits. Herein, we have reviewed auditory neglect with an emphasis on the functional anatomy, clinical evaluation, and basic principles of specialized auditory tests.

18. Water-filtered infrared-A radiation (wIRA) is not implicated in cellular degeneration of human skin

    Directory of Open Access Journals (Sweden)

    Applegate, Lee Ann

    2007-11-01

Full Text Available Background: Excessive exposure to solar ultraviolet radiation is involved in the complex biologic process of cutaneous aging. Wavelengths in the ultraviolet-A and -B range (UV-A and UV-B) have been shown to be responsible for the induction of proteases, e.g. the collagenase matrix metalloproteinase 1 (MMP-1), which are related to cell aging. As devices emitting longer wavelengths are widely used in therapeutic and cosmetic interventions and as the induction of MMP-1 by water-filtered infrared-A (wIRA) had been discussed, it was of interest to assess effects of wIRA on the cellular and molecular level known to be possibly involved in cutaneous degeneration. Objectives: Investigation of the biological implications of widely used water-filtered infrared-A (wIRA) radiators for clinical use on human skin fibroblasts assessed by MMP-1 gene expression (MMP-1 messenger ribonucleic acid (mRNA) expression). Methods: Human skin fibroblasts were irradiated with approximately 88% wIRA (780-1400 nm) and 12% red light (RL, 665-780 nm) with 380 mW/cm² wIRA(+RL) (333 mW/cm² wIRA) on the one hand, and for comparison with UV-A (330-400 nm, mainly UV-A1) and a small amount of blue light (BL, 400-450 nm) with 28 mW/cm² UV-A(+BL) on the other hand. Survival curves were established by colony forming ability after single exposures between 15 minutes and 8 hours to wIRA(+RL) (340-10880 J/cm² wIRA(+RL), 300-9600 J/cm² wIRA) or 15-45 minutes to UV-A(+BL) (25-75 J/cm² UV-A(+BL)). Both conventional Reverse Transcriptase Polymerase Chain Reaction (RT-PCR) and quantitative real-time RT-PCR techniques were used to determine the induction of MMP-1 mRNA at two physiologic temperatures for skin fibroblasts (30°C and 37°C) in single exposure regimens (15-60 minutes wIRA(+RL), 340-1360 J/cm² wIRA(+RL), 300-1200 J/cm² wIRA; 30 minutes UV-A(+BL), 50 J/cm² UV-A(+BL)) and in addition at 30°C in a repeated exposure protocol (up to 10 times 15 minutes wIRA(+RL) with 340 J/cm² wIRA(+RL), 300 J/cm² w

  19. Auditory evoked potentials in postconcussive syndrome.

    Science.gov (United States)

    Drake, M E; Weate, S J; Newell, S A

    1996-12-01

The neuropsychiatric sequelae of minor head trauma have been the source of controversy. Most clinical and imaging studies have shown no alteration after concussion, but neuropsychological and neuropathological abnormalities have been reported. Some changes in neurophysiologic diagnostic tests have been described in postconcussive syndrome. We recorded middle latency auditory evoked potentials (MLR) and slow vertex responses (SVR) in 20 individuals with prolonged cognitive difficulties, behavior changes, dizziness, and headache after concussion. MLRs were elicited with alternating-polarity clicks presented monaurally at 70 dB SL at a rate of 4 per second, with 40 dB contralateral masking. Five hundred responses were recorded and replicated from Cz-A1 and Cz-A2, with a 50-ms analysis time and a 20-1000 Hz filter band pass. SVRs were recorded with the same montage, but used rarefaction clicks, a 0.5 Hz stimulus rate, a 500-ms analysis time, and a 1-50 Hz filter band pass. Na and Pa MLR components were reduced in amplitude in postconcussion patients. Pa latency was significantly longer in patients than in controls. SVR amplitudes were larger in concussed individuals, but the differences in latency and amplitude were not significant. These changes may reflect posttraumatic disturbance in presumed subcortical MLR generators, or in the frontal or temporal cortical structures that modulate them. Middle and long-latency auditory evoked potentials may be helpful in the evaluation of postconcussive neuropsychiatric symptoms.
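The rationale for averaging 500 responses per recording can be sketched numerically: time-locked ensemble averaging leaves the evoked response intact while the background EEG, being uncorrelated with the stimulus, averages down roughly as 1/√N. The toy below uses pure noise sweeps, with sample counts and noise level chosen only for this sketch, and measures the residual noise after averaging.

```python
import random

def residual_noise_sd(n_sweeps, n_samples=200, noise_sd=1.0, seed=1):
    # Average n_sweeps of stimulus-uncorrelated noise sample-by-sample
    # and return the standard deviation of the averaged trace. With
    # independent noise this should fall roughly as noise_sd / sqrt(N).
    rng = random.Random(seed)
    acc = [0.0] * n_samples
    for _ in range(n_sweeps):
        for i in range(n_samples):
            acc[i] += rng.gauss(0.0, noise_sd)
    avg = [a / n_sweeps for a in acc]
    mean = sum(avg) / n_samples
    return (sum((x - mean) ** 2 for x in avg) / n_samples) ** 0.5

# With 500 sweeps the residual noise should sit near 1/sqrt(500), i.e.
# roughly 20x smaller than in a single sweep.
```

This is why small components like Na and Pa, which are far below the amplitude of the ongoing EEG in a single sweep, only become measurable after hundreds of averaged responses.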

  20. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  1. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well.

  2. Neural Architecture of Auditory Object Categorization

    Directory of Open Access Journals (Sweden)

    Yune-Sang Lee

    2011-10-01

Full Text Available We can identify objects by sight or by sound, yet far less is known about auditory object recognition than about visual recognition. Any exemplar of a dog (e.g., a picture) can be recognized on multiple categorical levels (e.g., animal, dog, poodle). Using fMRI combined with machine-learning techniques, we studied these levels of categorization with sounds rather than images. Subjects heard sounds of various animate and inanimate objects, and unrecognizable control sounds. We report four primary findings: (1) some distinct brain regions selectively coded for basic (“dog”) versus superordinate (“animal”) categorization; (2) classification at the basic level entailed more extended cortical networks than those for superordinate categorization; (3) human voices were recognized far better by multiple brain regions than were any other sound categories; (4) regions beyond temporal lobe auditory areas were able to distinguish and categorize auditory objects. We conclude that multiple representations of an object exist at different categorical levels. This neural instantiation of object categories is distributed across multiple brain regions, including so-called “visual association areas,” indicating that these regions support object knowledge even when the input is auditory. Moreover, our findings appear to conflict with prior well-established theories of category-specific modules in the brain.

  3. Widespread occurrence of bisphenol A diglycidyl ethers, p-hydroxybenzoic acid esters (parabens), benzophenone type-UV filters, triclosan, and triclocarban in human urine from Athens, Greece.

    Science.gov (United States)

    Asimakopoulos, Alexandros G; Thomaidis, Nikolaos S; Kannan, Kurunthachalam

    2014-02-01

Biomonitoring of human exposure to bisphenol A diglycidyl ethers (BADGEs; resin coating for food cans), p-hydroxybenzoic acid esters (parabens; preservatives), benzophenone-type UV filters (BP-UV filters; sunscreen agents), triclosan (TCS; antimicrobial), and triclocarban (TCC; antimicrobial) has been investigated in western European countries and North America. Nevertheless, little is known about the exposure of Greek populations to these environmental chemicals. In this study, 100 urine samples collected from Athens, Greece, were analyzed by high-performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) for the determination of total concentrations of five derivatives of BADGEs, six parabens and their metabolite (ethyl-protocatechuate), five derivatives of BP-UV filters, TCS, and TCC. Urinary concentrations of BADGEs and parabens (on a volume basis) ranged 0.3-20.9 (geometric mean: 0.9) and 1.6-1010 (24.2), respectively; detection rates were highest for paraben (100%), bisphenol A bis (2,3-dihydroxypropyl) ether (90%), ethyl paraben (87%), 2,4-dihydroxybenzophenone (78%), propyl paraben (72%), and TCS (71%). Estimated daily intakes (EDIurine), calculated on the basis of the measured urinary concentrations, ranged from 0.023 μg/kg bw/day for Σ5BADGEs to 31.4 μg/kg bw/day for Σ6Parabens.

  4. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways, a ventral sound-localizing stream and a dorsal, mainly pattern-recognition, stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards, to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features of the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described.

  5. Modeling auditory evoked potentials to complex stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch

    The auditory evoked potential (AEP) is an electrical signal that can be recorded from electrodes attached to the scalp of a human subject when a sound is presented. The signal is considered to reflect neural activity in response to the acoustic stimulation and is a well established clinical...... clinically and in research towards using realistic and complex stimuli, such as speech, to electrophysiologically assess the human hearing. However, to interpret the AEP generation to complex sounds, the potential patterns in response to simple stimuli needs to be understood. Therefore, the model was used...... to simulate auditory brainstem responses (ABRs) evoked by classic stimuli like clicks, tone bursts and chirps. The ABRs to these simple stimuli were compared to literature data and the model was shown to predict the frequency dependence of tone-burst ABR wave-V latency and the level-dependence of ABR wave...

  6. A corollary discharge maintains auditory sensitivity during sound production.

    Science.gov (United States)

    Poulet, James F A; Hedwig, Berthold

    2002-08-22

    Speaking and singing present the auditory system of the caller with two fundamental problems: discriminating between self-generated and external auditory signals and preventing desensitization. In humans and many other vertebrates, auditory neurons in the brain are inhibited during vocalization but little is known about the nature of the inhibition. Here we show, using intracellular recordings of auditory neurons in the singing cricket, that presynaptic inhibition of auditory afferents and postsynaptic inhibition of an identified auditory interneuron occur in phase with the song pattern. Presynaptic and postsynaptic inhibition persist in a fictively singing, isolated cricket central nervous system and are therefore the result of a corollary discharge from the singing motor network. Mimicking inhibition in the interneuron by injecting hyperpolarizing current suppresses its spiking response to a 100-dB sound pressure level (SPL) acoustic stimulus and maintains its response to subsequent, quieter stimuli. Inhibition by the corollary discharge reduces the neural response to self-generated sound and protects the cricket's auditory pathway from self-induced desensitization.

  7. Integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2013-01-01

    Musicians often say that they not only hear but also "feel" music. To explore the contribution of tactile information to "feeling" music, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter-recognition task. Subjects discriminated between two types of sequences, "duple" (march-like rhythms) and "triple" (waltz-like rhythms), presented in three conditions: (1) unimodal inputs (auditory or tactile alone); (2) various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts; and (3) bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70-85 %) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70-90 %) when all of the metrically important notes are assigned to one channel and is reduced to 60 % when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90 %). Performance dropped dramatically when subjects were presented with incongruent auditory cues (10 %), as opposed to incongruent tactile cues (60 %), demonstrating that auditory input dominates meter perception. These observations support the notion that meter perception is a cross-modal percept with tactile inputs underlying the perception of "feeling" music.

  8. Application of Savitzky-Golay differentiation filters and Fourier functions to simultaneous determination of cefepime and the co-administered drug, levofloxacin, in spiked human plasma.

    Science.gov (United States)

    Abdel-Aziz, Omar; Abdel-Ghany, Maha F; Nagi, Reham; Abdel-Fattah, Laila

    2015-03-15

The present work is concerned with the simultaneous determination of cefepime (CEF) and the co-administered drug levofloxacin (LEV) in spiked human plasma by applying a new approach, Savitzky-Golay differentiation filters and combined trigonometric Fourier functions, to their ratio spectra. The different parameters associated with the calculation of the Savitzky-Golay and Fourier coefficients were optimized. The proposed methods were validated and applied to the determination of the two drugs in laboratory-prepared mixtures and spiked human plasma. The results were statistically compared with reported HPLC methods and found to be accurate and precise.
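The ratio-spectra step works because dividing the mixture spectrum by a standard spectrum of one component turns that component's contribution into a constant, which a first-derivative filter then removes. A minimal Savitzky-Golay first-derivative sketch is below (quadratic fit, 5-point window with the classical convolution weights (-2, -1, 0, 1, 2)/10); the window size and polynomial order here are illustrative, not the paper's optimized parameters.

```python
def savgol_first_derivative(y, delta=1.0):
    # Savitzky-Golay first derivative: least-squares quadratic fit over a
    # sliding 5-point window. For this window/order the fit reduces to a
    # fixed convolution with weights (-2, -1, 0, 1, 2) / 10.
    w = (-2.0, -1.0, 0.0, 1.0, 2.0)
    out = []
    for i in range(2, len(y) - 2):  # edges are dropped in this sketch
        out.append(sum(wj * y[i - 2 + j] for j, wj in enumerate(w)) / (10.0 * delta))
    return out

# Ratio-spectra idea (toy): for a mixture M = a*A + b*B, the ratio M/A
# equals a + b*(B/A); differentiating removes the constant a, leaving a
# signal proportional to b alone.
flat = savgol_first_derivative([3.0] * 10)   # derivative of a constant: all zeros
ramp = savgol_first_derivative([2.0 * i for i in range(10)])  # slope 2 everywhere
```

In practice `scipy.signal.savgol_filter(y, window_length, polyorder, deriv=1)` performs the same operation with configurable window and order, which is what the optimization of "different parameters" in the abstract refers to.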

  9. Reorganisation of the right occipito-parietal stream for auditory spatial processing in early blind humans. A transcranial magnetic stimulation study.

    Science.gov (United States)

    Collignon, O; Davare, M; Olivier, E; De Volder, A G

    2009-05-01

It is well known that, following early visual deprivation, the neural network involved in processing auditory spatial information undergoes a profound reorganization. In particular, several studies have demonstrated an extensive activation of occipital brain areas, usually regarded as essentially "visual", when early blind subjects (EB) performed a task that requires spatial processing of sounds. However, little is known about the possible consequences of the activation of occipital areas for the function of the large cortical network known, in sighted subjects, to be involved in the processing of auditory spatial information. To address this issue, we used event-related transcranial magnetic stimulation (TMS) to induce virtual lesions of either the right intra-parietal sulcus (rIPS) or the right dorsal extrastriate occipital cortex (rOC) at different delays in EB subjects performing a sound lateralization task. Surprisingly, TMS applied over rIPS, a region critically involved in the spatial processing of sound in sighted subjects, had no influence on task performance in EB. In contrast, TMS applied over rOC 50 ms after sound onset disrupted the spatial processing of sounds originating from the contralateral hemifield. The present study sheds new light on the reorganisation of the cortical network dedicated to the spatial processing of sounds in EB by showing an early contribution of rOC and a lesser involvement of rIPS.

  10. Ultraviolet filters.

    Science.gov (United States)

    Shaath, Nadim A

    2010-04-01

    The chemistry, photostability and mechanism of action of ultraviolet filters are reviewed. The worldwide regulatory status of the 55 approved ultraviolet filters and their optical properties are documented. The photostability of butyl methoxydibenzoyl methane (avobenzone) is considered and methods to stabilize it in cosmetic formulations are presented.

  11. A programmable acoustic stimuli and auditory evoked potential measurement system for objective tinnitus diagnosis research.

    Science.gov (United States)

    Ku, Yunseo; Ahn, Joong Woo; Kwon, Chiheon; Suh, Myung-Whan; Lee, Jun Ho; Oh, Seung Ha; Kim, Hee Chan

    2014-01-01

    This paper presents the development of a single platform that records auditory evoked potentials synchronized to the specific acoustic stimuli of the gap-prepulse inhibition method for objective tinnitus diagnosis research. The developed system allows various parameters of the generated acoustic stimuli to be programmed. Moreover, with only a simple filter modification, the system is flexible enough to record not only the short-latency auditory brainstem response but also the late-latency auditory cortical response. An adaptive weighted averaging algorithm that minimizes the time required for the experiment is also introduced. The results show that the proposed algorithm can reduce the number of averaging repetitions to 70% of that required by the conventional ensemble averaging method.
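The weighted-averaging idea can be illustrated with a simple inverse-noise-variance scheme. This is a hedged sketch only: the paper's actual adaptive weighting rule is not reproduced here, and the template, epoch counts and noise levels are invented for illustration.

```python
# Sketch: weight each evoked-potential epoch by the inverse of its estimated
# noise variance, so noisier epochs contribute less than in a plain average.
# All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_epochs, n_samples = 100, 256
template = np.sin(2 * np.pi * np.arange(n_samples) / n_samples)  # "true" AEP

# Epochs share the template but differ in noise level.
noise_sd = rng.uniform(0.5, 3.0, size=n_epochs)
epochs = template + noise_sd[:, None] * rng.standard_normal((n_epochs, n_samples))

# Estimate per-epoch noise variance from the residual about the plain mean,
# then weight each epoch by the inverse of that variance.
plain_mean = epochs.mean(axis=0)
resid_var = ((epochs - plain_mean) ** 2).mean(axis=1)
weights = 1.0 / resid_var
weights /= weights.sum()

weighted_avg = (weights[:, None] * epochs).sum(axis=0)

# Compare mean squared error against the template for both estimates.
err_plain = np.mean((plain_mean - template) ** 2)
err_weighted = np.mean((weighted_avg - template) ** 2)
print(err_plain, err_weighted)
```

Down-weighting noisy epochs converges to a clean average with fewer repetitions, which is the practical payoff the abstract reports.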

  12. Assessment of parabens and ultraviolet filters in human placenta tissue by ultrasound-assisted extraction and ultra-high performance liquid chromatography-tandem mass spectrometry.

    Science.gov (United States)

    Vela-Soria, F; Gallardo-Torres, M E; Ballesteros, O; Díaz, C; Pérez, J; Navalón, A; Fernández, M F; Olea, N

    2017-03-03

    Increasing concerns have been raised over recent decades about human exposure to Endocrine Disrupting Chemicals (EDCs), especially about their possible effects on the embryo, foetus, newborn, and child. Parabens (PBs) and ultraviolet filters (UV-filters) are prevalent EDCs widely used as additives in cosmetics and personal care products (PCPs). The objective of this study was to determine the presence of four PBs and ten UV-filters in placental tissue samples using a novel analytical method based on ultrasound-assisted extraction (UAE) and ultra-high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS). Multivariate optimization strategies were used to accurately optimize extraction and clean-up parameters. Limits of quantification ranged from 0.15 to 0.5 μg kg(-1), and inter-day variability (evaluated as relative standard deviation) ranged from 3.6% to 14%. The method was validated using matrix-matched standard calibration followed by a recovery assay with spiked samples. Recovery percentages ranged from 94.5% to 112%. The method was satisfactorily applied to determine the target compounds in human placental tissue samples collected at delivery from 15 randomly selected women. This new analytical procedure can provide information on foetal exposure to these compounds, which has been little studied.

  13. Performance of an N95 filtering facepiece particulate respirator and a surgical mask during human breathing: two pathways for particle penetration.

    Science.gov (United States)

    Grinshpun, Sergey A; Haruta, Hiroki; Eninger, Robert M; Reponen, Tiina; McKay, Roy T; Lee, Shu-An

    2009-10-01

    The protection level offered by filtering facepiece particulate respirators and face masks is defined by the percentage of ambient particles penetrating inside the protection device. There are two penetration pathways: (1) through the faceseal leakage and (2) through the filter medium. This study aimed at differentiating the contributions of these two pathways for particles in the size range of 0.03-1 microm under actual breathing conditions. One N95 filtering facepiece respirator and one surgical mask commonly used in health care environments were tested on 25 subjects (matching the latest National Institute for Occupational Safety and Health fit testing panel) as the subjects performed conventional fit test exercises. The respirator and the mask were also tested with breathing manikins that precisely mimicked the prerecorded breathing patterns of the tested subjects. The penetration data obtained in the human subject- and manikin-based tests were compared for different particle sizes and breathing patterns. Overall, 5250 particle size- and exercise-specific penetration values were determined. For each value, the faceseal leakage-to-filter ratio was calculated to quantify the relative contributions of the two penetration pathways. The number of particles penetrating through the faceseal leakage of the tested respirator/mask far exceeded the number of those penetrating through the filter medium. For the N95 respirator, the excess was (on average) by an order of magnitude and significantly increased with an increase in particle size (p < 0.001): approximately 7-fold greater for 0.04 microm, approximately 10-fold for 0.1 microm, and approximately 20-fold for 1 microm. For the surgical mask, the faceseal leakage-to-filter ratio ranged from 4.8 to 5.8 and was not significantly affected by the particle size for the tested submicrometer fraction. Facial/body movement had a pronounced effect on the relative contribution of the two penetration pathways. Breathing intensity and

  14. Spontaneous synchronized tapping to an auditory rhythm in a chimpanzee.

    Science.gov (United States)

    Hattori, Yuko; Tomonaga, Masaki; Matsuzawa, Tetsuro

    2013-01-01

    Humans actively use behavioral synchrony, such as dancing and singing, when they intend to form affiliative relationships. Such advanced synchronous movement occurs even unconsciously when we hear rhythmically complex music. A foundation for this tendency may be an evolutionary adaptation for group living, but the evolutionary origins of human synchronous activity are unclear. Here we show the first evidence that a member of our closest living relatives, a chimpanzee, spontaneously synchronizes her movement with an auditory rhythm: after training to tap illuminated keys on an electric keyboard, one chimpanzee spontaneously aligned her tapping with the sound when she heard an isochronous distractor sound. This result indicates that sensitivity to, and a tendency toward, synchronous movement with an auditory rhythm exist in chimpanzees, although humans may have expanded it into unique forms of auditory and visual communication during the course of human evolution.

  15. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    Science.gov (United States)

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex.
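A ΔITD stimulus of the kind used here is built by delaying one ear's waveform relative to the other. A minimal single-tone sketch follows; the sample rate, tone frequency and duration are assumptions for illustration, not the study's stimulus parameters.

```python
# Lateralise a tone by an interaural time difference (ITD): send the same
# waveform to both ears, with one ear delayed by the ITD.
import numpy as np

fs = 48000                              # sample rate in Hz (assumed)
itd_s = 687.5e-6                        # largest ΔITD condition, in seconds
itd_samples = int(round(itd_s * fs))    # 33 samples at 48 kHz

t = np.arange(int(0.1 * fs)) / fs       # 100 ms time axis
tone = np.sin(2 * np.pi * 500.0 * t)    # 500 Hz tone (assumed frequency)

left = tone
# Delay the right ear by zero-padding its onset, keeping the lengths equal.
right = np.concatenate([np.zeros(itd_samples), tone[:-itd_samples]])
stereo = np.stack([left, right], axis=1)

print(itd_samples, stereo.shape)        # 33 (4800, 2)
```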

  16. Behind the Scenes of Auditory Perception

    OpenAIRE

    Shamma, Shihab A.; Micheyl, Christophe

    2010-01-01

    “Auditory scenes” often contain contributions from multiple acoustic sources. These are usually heard as separate auditory “streams”, which can be selectively followed over time. How and where these auditory streams are formed in the auditory system is one of the most fascinating questions facing auditory scientists today. Findings published within the last two years indicate that both cortical and sub-cortical processes contribute to the formation of auditory streams, and they raise importan...

  17. Visual change detection recruits auditory cortices in early deafness.

    Science.gov (United States)

    Bottari, Davide; Heimler, Benedetta; Caclin, Anne; Dalmolin, Anna; Giard, Marie-Hélène; Pavani, Francesco

    2014-07-01

    Although cross-modal recruitment of early sensory areas in deafness and blindness is well established, the constraints and limits of these plastic changes remain to be understood. In the case of human deafness, for instance, it is known that visual, tactile or visuo-tactile stimuli can elicit a response within the auditory cortices. Nonetheless, both the timing of these evoked responses and the functional contribution of cross-modally recruited areas remain to be ascertained. In the present study, we examined to what extent the auditory cortices of deaf humans participate in high-order visual processes, such as visual change detection. By measuring visual ERPs, in particular the visual MisMatch Negativity (vMMN), and performing source localization, we show that individuals with early deafness (N=12) recruit the auditory cortices when a change in motion direction during shape deformation occurs in a continuous visual motion stream. Remarkably, this "auditory" response to visual events emerged with the same timing as the visual MMN in hearing controls (N=12), between 150 and 300 ms after the visual change. Furthermore, the recruitment of auditory cortices for visual change detection in early deaf individuals was paired with a reduction of response within the visual system, indicating a shift of part of the computational process from visual to auditory cortices. The present study suggests that the deafened auditory cortices participate in extracting and storing visual information and in comparing upcoming visual events on-line, thus indicating that cross-modally recruited auditory cortices can reach this level of computation.

  18. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health effects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular mec

  20. Food Filter

    Institute of Scientific and Technical Information of China (English)

    履之

    1995-01-01

    A typical food-processing plant produces about 500,000 gallons of waste water daily. Laden with organic compounds, this water usually is evaporated or discharged into sewers.A better solution is to filter the water through

  1. Neurophysiological mechanisms involved in auditory perceptual organization

    Directory of Open Access Journals (Sweden)

    Aurélie Bidet-Caulet

    2009-09-01

    Full Text Available In our complex acoustic environment, we are confronted with a mixture of sounds produced by several simultaneous sources. However, we rarely perceive these sounds as incomprehensible noise. Our brain uses perceptual organization processes to follow the emission of each sound source independently over time. Although the acoustic properties exploited in these processes are well established, the neurophysiological mechanisms involved in auditory scene analysis have attracted interest only recently. Here, we review the studies investigating these mechanisms using electrophysiological recordings from the cochlear nucleus to the auditory cortex, in animals and humans. Their findings reveal that basic mechanisms such as frequency selectivity, forward suppression and multi-second habituation shape the automatic brain responses to sounds in a way that can account for several important characteristics of the perceptual organization of both simultaneous and successive sounds. One challenging question remains unresolved: how are the resulting activity patterns integrated to yield the corresponding conscious percepts?

  2. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad hosein Hekmat Ara

    1999-03-01

    Full Text Available Hearing is one of the most remarkable human senses. Sound waves travel through the air, enter the ear canal and strike the tympanic membrane. The middle ear transfers almost 60-80% of this mechanical energy to the inner ear by means of "impedance matching". The sound energy is then converted into a traveling wave that propagates according to its specific frequency and stimulates the organ of Corti. Receptors in this organ and their synapses transform the mechanical waves into neural signals and transfer them to the brain. The central nervous system pathway conducting auditory signals to the auditory cortex is briefly explained here.

  3. Behavioral estimates of human frequency selectivity at low frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado

    A fundamental property of our hearing organ is its ability to break down sound into different spectral components, allowing us to make use of the richness in natural sound phenomena. Auditory filters, which conceptualize this property of the ear, however, have not been appropriately described...... at low sound frequencies. As a consequence of our lack of knowledge, we cannot accurately model our perception of complex low-frequency sound (such as that emitted by wind turbines or industrial processes, which can easily produce annoyance) nor make meaningful predictions of our perception based...... on physical sound measurements. In this PhD thesis a detailed description of frequency selectivity at low frequencies is given. Different experiments have been performed to determine the properties of human auditory filters. Besides, loudness perception of low-frequency sinusoidal signals has been evaluated...
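For reference, auditory-filter bandwidths of the kind this thesis revisits are conventionally summarized by the equivalent rectangular bandwidth (ERB) formula of Glasberg and Moore (1990). As the abstract notes, its validity at low frequencies is exactly what is in question, since the underlying data come mostly from mid and high frequencies, so values below about 100 Hz are extrapolations.

```python
# ERB(f) = 24.7 * (4.37 * f / 1000 + 1), with f in Hz (Glasberg & Moore, 1990).
def erb_hz(f_hz: float) -> float:
    """Equivalent rectangular bandwidth of the auditory filter centred at f_hz."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

for f in (50.0, 100.0, 1000.0):
    print(f, round(erb_hz(f), 1))   # 50 30.1 / 100 35.5 / 1000 132.6
```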

  4. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  5. Tcf4 transgenic female mice display delayed adaptation in an auditory latent inhibition paradigm.

    Science.gov (United States)

    Brzózka, M M; Rossner, M J; de Hoz, L

    2016-09-01

    Schizophrenia (SZ) is a severe mental disorder affecting about 1% of the human population. Patients show severe deficits in cognitive processing, often characterized by improper filtering of environmental stimuli. Independent genome-wide association studies have confirmed a number of risk variants for SZ, including several associated with the gene encoding transcription factor 4 (TCF4). TCF4 is widely expressed in the central nervous system of mice and humans and seems to be important for brain development. Transgenic mice overexpressing murine Tcf4 (Tcf4tg) in the adult brain display cognitive impairments and sensorimotor gating disturbances. To address the question of whether increased Tcf4 gene dosage may affect cognitive flexibility in an auditory associative task, we tested latent inhibition (LI) in female Tcf4tg mice. LI is a widely accepted translational endophenotype of SZ and results from a maladaptive delay in switching a response to a previously unconditioned stimulus when this becomes conditioned. Using an Audiobox, we pre-exposed Tcf4tg mice and their wild-type littermates to either a 3- or a 12-kHz tone before conditioning them to a 12-kHz tone. Tcf4tg animals pre-exposed to a 12-kHz tone showed significantly delayed conditioning when the previously unconditioned tone became associated with an air puff. These results support findings that associate TCF4 dysfunction with the cognitive inflexibility and improper filtering of sensory stimuli observed in SZ patients.

  6. Multi-band spectral subtraction method for speech enhancement based on the masking properties of the human auditory system

    Institute of Scientific and Technical Information of China (English)

    曹亮; 张天骐; 高洪兴; 易琛

    2013-01-01

    In order to reduce the musical noise introduced by the conventional spectral subtraction method for speech enhancement, a speech enhancement algorithm is put forward that combines multi-band spectral subtraction with the masking properties of the human auditory system. First, a weighted recursive averaging method is used to estimate the noise power spectrum and multi-band spectral subtraction is applied to the noise-corrupted speech signal; then, the auditory masking threshold is computed from the estimated speech signal and the subtraction factors are adjusted dynamically according to this threshold; finally, the spectrum of the enhanced speech is obtained through the resulting gain function. Simulations show that, at low SNR, compared with conventional spectral subtraction, background noise and residual musical noise are effectively suppressed, and the clarity and intelligibility of the speech signal are markedly improved.
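The multi-band subtraction step can be sketched as follows for a single analysis frame. The band edges, over-subtraction factors and spectral floor below are illustrative assumptions, and the masking-threshold adaptation of the factors described in the abstract is omitted.

```python
# Single-frame multi-band spectral subtraction: each frequency band gets its
# own over-subtraction factor, and a spectral floor prevents negative power.
import numpy as np

rng = np.random.default_rng(2)
n_fft = 512
noisy_mag = np.abs(rng.standard_normal(n_fft // 2 + 1)) + 1.0   # |Y(k)| (toy)
noise_mag = np.full(n_fft // 2 + 1, 0.8)                        # estimated |N(k)|

# Split the spectrum into a few bands, each with its own subtraction factor.
bands = [(0, 64), (64, 128), (128, 257)]
alphas = [4.0, 3.0, 2.5]        # over-subtraction per band (assumed values)
beta = 0.02                     # spectral floor

enhanced = np.empty_like(noisy_mag)
for (lo, hi), alpha in zip(bands, alphas):
    sub = noisy_mag[lo:hi] ** 2 - alpha * noise_mag[lo:hi] ** 2
    floor = beta * noisy_mag[lo:hi] ** 2
    enhanced[lo:hi] = np.sqrt(np.maximum(sub, floor))

print(enhanced.shape)  # (257,)
```

In the algorithm above, the per-band factors would be re-derived each frame from the computed masking threshold rather than fixed as here.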

  7. Acoustic Noise of MRI Scans of the Internal Auditory Canal and Potential for Intracochlear Physiological Changes

    CERN Document Server

    Busada, M A; Ibrahim, G; Huckans, J H

    2012-01-01

    Magnetic resonance imaging (MRI) is a widely used medical imaging technique to assess the health of the auditory (vestibulocochlear) nerve. A well-known problem with MRI machines is that the acoustic noise they generate during a scan can cause auditory temporary threshold shifts (TTS) in humans. In addition, studies have shown that excessive noise in general can cause rapid physiological changes in constituents of the auditory nerve within the cochlea. Here, we report in-situ measurements of the acoustic noise from a 1.5 Tesla MRI machine (GE Signa) during scans specific to auditory nerve assessment. The measured average and maximum noise levels corroborate earlier investigations in which TTS occurred. We briefly discuss the potential for physiological changes to the intracochlear branches of the auditory nerve as well as iatrogenic misdiagnoses of intralabyrinthine and intracochlear schwannomas due to hypertrophy of the auditory nerve within the cochlea during MRI assessment.

  8. Temporal sequence of visuo-auditory interaction in multiple areas of the guinea pig visual cortex.

    Directory of Open Access Journals (Sweden)

    Masataka Nishimura

    Full Text Available Recent studies in humans and monkeys have reported that acoustic stimulation influences visual responses in the primary visual cortex (V1). Such influences can be generated in V1 either by direct auditory projections or by feedback projections from extrastriate cortices. To test these hypotheses, cortical activity was recorded using optical imaging at high spatiotemporal resolution from multiple areas of the guinea pig visual cortex in response to visual and/or acoustic stimulation. Visuo-auditory interactions were evaluated according to differences between responses evoked by combined auditory and visual stimulation and the sum of responses evoked by separate visual and auditory stimulations. Simultaneous presentation of visual and acoustic stimulation resulted in significant interactions in V1, which occurred earlier than in other visual areas. When acoustic stimulation preceded visual stimulation, significant visuo-auditory interactions were detected only in V1. These results suggest that V1 is a cortical origin of visuo-auditory interaction.

  9. Selective attention in an insect auditory neuron.

    Science.gov (United States)

    Pollack, G S

    1988-07-01

    Previous work (Pollack, 1986) showed that an identified auditory neuron of crickets, the omega neuron, selectively encodes the temporal structure of an ipsilateral sound stimulus when a contralateral stimulus is presented simultaneously, even though the contralateral stimulus is clearly encoded when it is presented alone. The present paper investigates the physiological basis for this selective response. The selectivity for the ipsilateral stimulus is a result of the apparent intensity difference of ipsi- and contralateral stimuli, which is imposed by auditory directionality; when simultaneous presentation of stimuli from the 2 sides is mimicked by presenting low- and high-intensity stimuli simultaneously from the ipsilateral side, the neuron responds selectively to the high-intensity stimulus, even though the low-intensity stimulus is effective when it is presented alone. The selective encoding of the more intense (= ipsilateral) stimulus is due to intensity-dependent inhibition, which is superimposed on the cell's excitatory response to sound. Because of the inhibition, the stimulus with lower intensity (i.e., the contralateral stimulus) is rendered subthreshold, while the stimulus with higher intensity (the ipsilateral stimulus) remains above threshold. Consequently, the temporal structure of the low-intensity stimulus is filtered out of the neuron's spike train. The source of the inhibition is not known. It is not a consequence of activation of the omega neuron. Its characteristics are not consistent with those of known inhibitory inputs to the omega neuron.

  10. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  11. Left hemispheric dominance during auditory processing in a noisy environment

    Directory of Open Access Journals (Sweden)

    Ross Bernhard

    2007-11-01

    Full Text Available Background: In daily life, we are exposed to different sound inputs simultaneously. During neural encoding in the auditory pathway, neural activities elicited by these different sounds interact with each other. In the present study, we investigated neural interactions elicited by a masker and an amplitude-modulated test stimulus in primary and non-primary human auditory cortex during ipsilateral and contralateral masking by means of magnetoencephalography (MEG). Results: We observed significant decrements of auditory evoked responses and a significant inter-hemispheric difference for the N1m response during both ipsilateral and contralateral masking. Conclusion: The decrements of auditory evoked neural activities during simultaneous masking can be explained by neural interactions evoked by the masker and test stimulus in the peripheral and central auditory systems. The inter-hemispheric differences of N1m decrements during ipsilateral and contralateral masking reflect a basic hemispheric specialization contributing to the processing of complex auditory stimuli such as speech signals in noisy environments.

  12. A computer model of auditory stream segregation.

    Science.gov (United States)

    Beauvois, M W; Meddis, R

    1991-08-01

    A computer model is described which simulates some aspects of auditory stream segregation. The model emphasizes the explanatory power of simple physiological principles operating at a peripheral rather than a central level. The model consists of a multi-channel bandpass-filter bank with a "noisy" output and an attentional mechanism that responds selectively to the channel with the greatest activity. A "leaky integration" principle allows channel excitation to accumulate and dissipate over time. The model produces similar results to two experimental demonstrations of streaming phenomena, which are presented in detail. These results are discussed in terms of the "emergent properties" of a system governed by simple physiological principles. As such the model is contrasted with higher-level Gestalt explanations of the same phenomena while accepting that they may constitute complementary kinds of explanation.
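The model's two key ingredients, leaky integration of channel excitation and an attentional mechanism that follows the most active channel, can be sketched as follows. All constants and inputs are illustrative assumptions, not the model's fitted parameters.

```python
# Leaky integration plus winner-take-all attention over a bank of channels:
# per-channel excitation accumulates with decay, and "attention" selects the
# channel with the greatest accumulated activity at each step.
import numpy as np

rng = np.random.default_rng(3)
n_channels, n_steps = 4, 200
decay = 0.9                       # leak per time step (assumed)

# Noisy drive to each channel; channel 2 receives stronger input on average,
# standing in for the dominant frequency channel of a tone sequence.
drive = rng.random((n_steps, n_channels))
drive[:, 2] += 0.5

excitation = np.zeros(n_channels)
winners = []
for t in range(n_steps):
    excitation = decay * excitation + drive[t]   # leaky accumulation
    winners.append(int(np.argmax(excitation)))   # attended channel

# After the initial transient, attention settles on the strongest channel.
dominant = max(set(winners[50:]), key=winners[50:].count)
print(dominant)
```

The "noisy output" and accumulation/dissipation dynamics are what let such a simple peripheral mechanism reproduce streaming-like switches without any central Gestalt machinery.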

  13. Auditory Cortex Characteristics in Schizophrenia: Associations With Auditory Hallucinations.

    Science.gov (United States)

    Mørch-Johnsen, Lynn; Nesvåg, Ragnar; Jørgensen, Kjetil N; Lange, Elisabeth H; Hartberg, Cecilie B; Haukvik, Unn K; Kompus, Kristiina; Westerhausen, René; Osnes, Kåre; Andreassen, Ole A; Melle, Ingrid; Hugdahl, Kenneth; Agartz, Ingrid

    2017-01-01

    Neuroimaging studies have demonstrated associations between smaller auditory cortex volume and auditory hallucinations (AH) in schizophrenia. Reduced cortical volume can result from a reduction of either cortical thickness or cortical surface area, which may reflect different neuropathology. We investigate for the first time how thickness and surface area of the auditory cortex relate to AH in a large sample of schizophrenia spectrum patients. Schizophrenia spectrum patients (n = 194) underwent magnetic resonance imaging. Mean cortical thickness and surface area in auditory cortex regions (Heschl's gyrus [HG], planum temporale [PT], and superior temporal gyrus [STG]) were compared between patients with (AH+, n = 145) and without (AH-, n = 49) a lifetime history of AH and 279 healthy controls. AH+ patients showed significantly thinner cortex in the left HG compared to AH- patients (d = 0.43, P = .0096). There were no significant differences between AH+ and AH- patients in cortical thickness in the PT or STG, or in auditory cortex surface area in any of the regions investigated. Group differences in cortical thickness in the left HG were not affected by duration of illness or current antipsychotic medication. AH in schizophrenia patients were related to thinner cortex, but not smaller surface area, of the left HG, a region which includes the primary auditory cortex. The results support the view that structural abnormalities of the auditory cortex underlie AH in schizophrenia.

  14. Contextual modulation of primary visual cortex by auditory signals.

    Science.gov (United States)

    Petro, L S; Paton, A T; Muckli, L

    2017-02-19

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195-201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256-1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints of auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue 'Auditory and visual scene analysis'.

  16. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires detailed knowledge of neuronal connectivity between functional cortical regions. In humans, it is difficult to track neuronal connectivity in vivo. We investigated inter-area connectivity in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEP recordings from the insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22), obtained with multilead intracerebral electrodes in response to sinusoidally modulated white noise in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one, whatever the modulation frequency; 2) a unidirectional functional connection from the primary to the secondary auditory cortex; 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz; and 4) a particular role of Heschl's sulcus in dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be attributed to a unidirectional traveling wave but rather to a constant interaction between these areas, which could reflect the large adaptive and plastic capacities of the auditory cortex. The role of the IG is discussed.

  17. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual integration

  18. Temporal pattern recognition based on instantaneous spike rate coding in a simple auditory system.

    Science.gov (United States)

    Nabatiyan, A; Poulet, J F A; de Polavieja, G G; Hedwig, B

    2003-10-01

    Auditory pattern recognition by the CNS is a fundamental process in acoustic communication. Because crickets communicate with stereotyped patterns of constant-frequency syllables, they are established models for investigating the neuronal mechanisms of auditory pattern recognition. By comparing both coding parameters in a thoracic interneuron (Omega neuron ON1) of the cricket (Gryllus bimaculatus) auditory system, we provide evidence that, for the neural processing of amplitude-modulated sounds, the instantaneous spike rate rather than the time-averaged neural activity is the appropriate coding principle. When the neuron is stimulated with different temporal sound patterns, analysis of the instantaneous spike rate demonstrates that it acts as a low-pass filter for syllable patterns. The instantaneous spike rate is low at high syllable rates, but prominent peaks in the instantaneous spike rate are generated as the syllable rate resembles that of the species-specific pattern. The occurrence and repetition rate of these peaks in the neuronal discharge are sufficient to explain temporal filtering in the cricket auditory pathway, as they closely match the tuning of phonotactic behavior to different sound patterns. Thus temporal filtering or "pattern recognition" occurs at an early stage in the auditory pathway.
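
    The coding-principle comparison above can be illustrated with a toy computation. This is a generic sketch of instantaneous-rate versus time-averaged coding, not the authors' analysis pipeline; the spike trains are invented for illustration.

```python
import numpy as np

def instantaneous_rate(spike_times):
    """Instantaneous spike rate: reciprocal of each interspike interval (Hz)."""
    return 1.0 / np.diff(spike_times)

def mean_rate(spike_times, duration):
    """Time-averaged spike rate over the whole recording (Hz)."""
    return len(spike_times) / duration

# Two trains with the same time-averaged rate but different temporal structure:
regular = np.array([0.00, 0.05, 0.10, 0.15, 0.20])  # evenly spaced spikes
bursty = np.array([0.00, 0.01, 0.02, 0.18, 0.19])   # bursts separated by a gap

# mean_rate is blind to the pattern (20 Hz for both trains), whereas the
# instantaneous rate peaks near 100 Hz inside the bursts of the bursty train.
```

    Only the instantaneous rate distinguishes the two trains, which is the sense in which it, rather than the time-averaged activity, carries the temporal pattern.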

  19. Seroepidemiological study of human cysticercosis with blood samples collected on filter paper, in Lages, State of Santa Catarina, Brazil, 2004-2005

    Directory of Open Access Journals (Sweden)

    Maria Márcia Imenes Ishida

    2011-06-01

    Full Text Available INTRODUCTION: Human serofrequency of antibodies against Taenia solium antigens was determined and risk factors for cysticercosis transmission were identified. METHODS: Individuals (n=878) from periurban and rural locations of Lages, SC, were interviewed to gather demographic, sanitary and health information. Interviews and blood sample collections by finger prick on Whatman filter paper were performed from August 2004 to May 2005. Observation determined that 850 samples were suitable for analysis and were tested by ELISA using vesicular fluid of Taenia crassiceps as heterologous antigen. To ensure the reliability of the results, 77 samples of the dried blood were matched with sera. The reactive samples were submitted to a serum confirmatory immunoblot (IB) test using purified Taenia crassiceps glycoproteins. RESULTS: The ELISA results for the dried blood and serum samples were statistically consistent. ELISA was positive in 186 (21.9%) out of 850 individuals. A group of 213 individuals were asked to provide venous blood for IB (186 with a positive ELISA result and 27 with inappropriate whole blood samples), and 130 attended the request. The IB was positive in 29 (3.4%) out of 850 individuals. A significant correlation (p = 0.0364) was determined among individuals who tested positive in the IB assay who practiced both pig rearing and kitchen gardening. CONCLUSIONS: ELISA with dried blood eluted from filter paper was suitable for cysticercosis population surveys. In Lages, human infection was associated with pig rearing and kitchen gardening. The prevalence index was compatible with other Latin American endemic areas.

  20. Integrality of Perceptual Attributes Determined by Sound Source and Filter.

    Science.gov (United States)

    Li, Xiaofeng

    The production of a complex sound can be viewed as a sequential operation of sound production components: power excites a sound source that produces an original sound, which is shaped by a certain filter response function. It is hypothesized that human listeners evaluate sound quality by decomposing the complex spectrum of the sound according to this production model. The current research investigates the perceptual relationship among spectral attributes induced by the sound source and the filter. Specifically, the capability of listeners to extract a global spectral attribute, spectral slope, determined by a sound source is examined in the context of variation in two other background attributes: fundamental frequency (another source attribute) and ripple frequency (a filter characteristic). Spectral slope is judged to be integral with these background attributes if a significant decrement in slope discrimination occurs due to a varied, relative to fixed, background attribute. The five experiments used an XAB task with roving overall spectral intensity within trials to eliminate intensity cues. The presence of a significant decrement due to roving fundamental frequency indicates that spectral slope is integral with fundamental frequency. In contrast, a strikingly smaller decrement occurs as the filter characteristic is varied, suggesting that spectral slope is more easily separable from the ripple filter attribute than from fundamental frequency. Therefore, it is conjectured that the global source attribute of spectral slope is perceptually unitized with other source attributes, and listeners treat source attributes as an entity in describing the characteristics of the sound source. However, such an evaluation of sound source attributes is relatively orthogonal to the actual spectral envelope, which may be shaped by different filter functions. The current study has extended profile analysis and demonstrated the human auditory capability to resolve a global spectral

  1. Generalised Filtering

    Directory of Open Access Journals (Sweden)

    Karl Friston

    2010-01-01

    Full Text Available We describe a Bayesian filtering scheme for nonlinear state-space models in continuous time. This scheme is called Generalised Filtering and furnishes posterior (conditional) densities on hidden states and unknown parameters generating observed data. Crucially, the scheme operates online, assimilating data to optimize the conditional density on time-varying states and time-invariant parameters. In contrast to Kalman and particle smoothing, Generalised Filtering does not require a backwards pass. In contrast to variational schemes, it does not assume conditional independence between the states and parameters. Generalised Filtering optimises the conditional density with respect to a free-energy bound on the model's log-evidence. This optimisation uses the generalised motion of hidden states and parameters, under the prior assumption that the motion of the parameters is small. We describe the scheme, present comparative evaluations with a fixed-form variational version, and conclude with an illustrative application to a nonlinear state-space model of brain imaging time-series.

  2. Notch filter

    Science.gov (United States)

    Shelton, G. B. (Inventor)

    1977-01-01

    A notch filter for the selective attenuation of a narrow band of frequencies out of a larger band was developed. A helical resonator is connected to an input circuit and an output circuit through discrete and equal capacitors, and a resistor is connected between the input and the output circuits.
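
    The record describes an analog helical-resonator implementation; the same selective attenuation of a narrow band can be sketched as a digital biquad notch (the RBJ audio-EQ-cookbook form). The centre frequency, Q and sampling rate below are illustrative, not taken from the patent.

```python
import numpy as np

def notch_biquad(f0, Q, fs):
    """Biquad notch filter (RBJ cookbook): zeros on the unit circle at f0,
    poles just inside, so only a narrow band around f0 is attenuated."""
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * Q)
    b = np.array([1.0, -2 * np.cos(w0), 1.0])
    a = np.array([1.0 + alpha, -2 * np.cos(w0), 1.0 - alpha])
    return b / a[0], a / a[0]

def magnitude(b, a, f, fs):
    """|H(e^{jw})| of the biquad evaluated at frequency f."""
    z = np.exp(-1j * 2 * np.pi * f / fs * np.arange(3))
    return abs(np.dot(b, z) / np.dot(a, z))

b, a = notch_biquad(f0=60.0, Q=30.0, fs=1000.0)
# The response is essentially zero at 60 Hz and near unity at 10 Hz or 200 Hz.
```

    Because the zeros sit exactly on the unit circle at the notch frequency, attenuation there is complete, while the high Q keeps the rejected band narrow.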

  3. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    Science.gov (United States)

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  4. Auditory Neuropathy - A Case of Auditory Neuropathy after Hyperbilirubinemia

    Directory of Open Access Journals (Sweden)

    Maliheh Mazaher Yazdi

    2007-12-01

    Full Text Available Background and Aim: Auditory neuropathy is a hearing disorder in which peripheral hearing is normal but the eighth nerve and brainstem are abnormal. By clinical definition, patients with this disorder have normal OAEs but an absent or severely abnormal ABR. Auditory neuropathy was first reported in the late 1970s, when different methods could identify a discrepancy between an absent ABR and a present hearing threshold. Speech understanding difficulties are worse than would be predicted from other tests of hearing function. Auditory neuropathy may also affect vestibular function. Case Report: This article presents electrophysiological and behavioral data from a case of auditory neuropathy in a child with normal hearing after hyperbilirubinemia, over a 5-year follow-up. Audiological findings demonstrate remarkable changes after multidisciplinary rehabilitation. Conclusion: Auditory neuropathy may involve damage to the inner hair cells, the specialized sensory cells in the inner ear that transmit information about sound through the nervous system to the brain. Other causes may include faulty connections between the inner hair cells and the nerve leading from the inner ear to the brain, or damage to the nerve itself. People with auditory neuropathy have OAE responses but an absent ABR, and hearing thresholds that can be permanent, worsen, or improve.

  5. Modeling auditory processing and speech perception in hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve

    A better understanding of how the human auditory system represents and analyzes sounds and how hearing impairment affects such processing is of great interest for researchers in the fields of auditory neuroscience, audiology, and speech communication, as well as for applications in hearing-instrument and speech technology. In this thesis, the primary focus was on the development and evaluation of a computational model of human auditory signal-processing and perception. The model was initially designed to simulate the normal-hearing auditory system with particular focus on the nonlinear processing ... aimed at experimentally characterizing the effects of cochlear damage on listeners' auditory processing, in terms of sensitivity loss and reduced temporal and spectral resolution. The results showed that listeners with comparable audiograms can have very different estimated cochlear input...

  6. Impairments in musical abilities reflected in the auditory brainstem: evidence from congenital amusia.

    Science.gov (United States)

    Lehmann, Alexandre; Skoe, Erika; Moreau, Patricia; Peretz, Isabelle; Kraus, Nina

    2015-07-01

    Congenital amusia is a neurogenetic condition, characterized by a deficit in music perception and production, not explained by hearing loss, brain damage or lack of exposure to music. Despite inferior musical performance, amusics exhibit normal auditory cortical responses, with abnormal neural correlates suggested to lie beyond auditory cortices. Here we show, using auditory brainstem responses to complex sounds in humans, that fine-grained automatic processing of sounds is impoverished in amusia. Compared with matched non-musician controls, spectral amplitude was decreased in amusics for higher harmonic components of the auditory brainstem response. We also found a delayed response to the early transient aspects of the auditory stimulus in amusics. Neural measures of spectral amplitude and response timing correlated with participants' behavioral assessments of music processing. We demonstrate, for the first time, that amusia affects how complex acoustic signals are processed in the auditory brainstem. This neural signature of amusia mirrors what is observed in musicians, such that the aspects of the auditory brainstem responses that are enhanced in musicians are degraded in amusics. By showing that gradients of music abilities are reflected in the auditory brainstem, our findings have implications not only for current models of amusia but also for auditory functioning in general.

  7. Use of auditory learning to manage listening problems in children

    National Research Council Canada - National Science Library

    David R Moore; Lorna F Halliday; Sygal Amitay

    2009-01-01

    .... It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications...

  8. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... CAPD often have trouble maintaining attention, although health, motivation, and attitude also can play a role. Auditory ... programs. Several computer-assisted programs are geared toward children with APD. They mainly help the brain do ...

  9. A unified framework for the organisation of the primate auditory cortex

    Directory of Open Access Journals (Sweden)

    Simon eBaumann

    2013-04-01

    Full Text Available In nonhuman primates, a scheme for the organisation of the auditory cortex is frequently used to localise auditory processes. The scheme provides a common basis for comparing functional organisation across nonhuman primate species. However, although a body of functional and structural data in nonhuman primates supports an accepted scheme of nearly a dozen neighbouring functional areas, it is unclear whether this scheme can be directly applied to humans. Attempts to expand the scheme of auditory cortical fields to humans have been severely hampered by a recent controversy about the organisation of tonotopic maps in humans, centred on two models with radically different organisation. We point out observations that reconcile the previous models and suggest a distinct model in which the human cortical organisation is much more like that of other primates. This unified framework allows a more robust and detailed comparison of auditory cortex organisation across primate species, including humans.

  10. Hearing impairment induces frequency-specific adjustments in auditory spatial tuning in the optic tectum of young owls.

    Science.gov (United States)

    Gold, J I; Knudsen, E I

    1999-11-01

    Bimodal, auditory-visual neurons in the optic tectum of the barn owl are sharply tuned for sound source location. The auditory receptive fields (RFs) of these neurons are restricted in space primarily as a consequence of their tuning for interaural time differences and interaural level differences across broad ranges of frequencies. In this study, we examined the extent to which frequency-specific features of early auditory experience shape the auditory spatial tuning of these neurons. We manipulated auditory experience by implanting in one ear canal an acoustic filtering device that altered the timing and level of sound reaching the eardrum in a frequency-dependent fashion. We assessed the auditory spatial tuning at individual tectal sites in normal owls and in owls raised with the filtering device. At each site, we measured a family of auditory RFs using broadband sound and narrowband sounds with different center frequencies both with and without the device in place. In normal owls, the narrowband RFs for a given site all included a common region of space that corresponded with the broadband RF and aligned with the site's visual RF. Acute insertion of the filtering device in normal owls shifted the locations of the narrowband RFs away from the visual RF, the magnitude and direction of the shifts depending on the frequency of the stimulus. In contrast, in owls that were raised wearing the device, narrowband and broadband RFs were aligned with visual RFs so long as the device was in the ear but not after it was removed, indicating that auditory spatial tuning had been adaptively altered by experience with the device. The frequency tuning of tectal neurons in device-reared owls was also altered from normal. The results demonstrate that experience during development adaptively modifies the representation of auditory space in the barn owl's optic tectum in a frequency-dependent manner.

  11. Reconstruction and analysis of transcription factor-miRNA co-regulatory feed-forward loops in human cancers using filter-wrapper feature selection.

    Directory of Open Access Journals (Sweden)

    Chen Peng

    Full Text Available BACKGROUND: As one of the most common types of co-regulatory motifs, feed-forward loops (FFLs) control many cell functions and play an important role in human cancers. Therefore, it is crucial to reconstruct and analyze cancer-related FFLs that are controlled by transcription factors (TFs) and microRNAs (miRNAs) simultaneously, in order to find out how miRNAs and TFs cooperate with each other in cancer cells and how they contribute to carcinogenesis. Current FFL studies rely on predicted regulation information and therefore suffer from false positives in the prediction results. More critically, FFLs generated by existing approaches cannot represent the dynamic and conditional regulation relationships under different experimental conditions. METHODOLOGY/PRINCIPAL FINDINGS: In this study, we proposed a novel filter-wrapper feature selection method to accurately identify co-regulatory mechanisms by incorporating prior information from predicted regulatory interactions with parallel miRNA/mRNA expression datasets. By applying this method, we reconstructed 208 and 110 TF-miRNA co-regulatory FFLs from human pan-cancer and prostate datasets, respectively. Further analysis of these cancer-related FFLs showed that the top-ranking TF STAT3 and miRNA hsa-let-7e are key regulators implicated in human cancers, whose regulated targets are significantly enriched in cellular process regulation and signaling pathways involved in carcinogenesis. CONCLUSIONS/SIGNIFICANCE: In this study, we introduced an efficient computational approach to reconstruct co-regulatory FFLs by accurately identifying gene co-regulatory interactions. The strength of the proposed feature selection method lies in the fact that it can precisely filter out false positives in predicted regulatory interactions by quantitatively modeling the complex co-regulation of target genes mediated by TFs and miRNAs simultaneously. Moreover, the proposed feature selection method can be generally applied to
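
    The two-stage selection pattern named above can be sketched generically: a cheap filter stage ranks candidates, then a wrapper stage greedily keeps the features that actually improve a model fit. This is an illustration of the filter-wrapper idea on synthetic data, not the authors' method for TF/miRNA regulatory interactions.

```python
import numpy as np

def filter_wrapper_select(X, y, k_filter=10, k_final=3):
    """Generic filter-wrapper feature selection sketch."""
    # Filter stage: rank features by absolute Pearson correlation with target.
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    candidates = list(np.argsort(corr)[::-1][:k_filter])
    # Wrapper stage: greedy forward selection scored by least-squares fit error.
    selected = []
    while len(selected) < k_final and candidates:
        best_j, best_err = None, np.inf
        for j in candidates:
            cols = selected + [j]
            A = np.column_stack([X[:, cols], np.ones(len(y))])
            coef = np.linalg.lstsq(A, y, rcond=None)[0]
            err = np.sum((A @ coef - y) ** 2)
            if err < best_err:
                best_j, best_err = j, err
        selected.append(best_j)
        candidates.remove(best_j)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 2.0 * X[:, 2] + 1.0 * X[:, 7] + 0.1 * rng.normal(size=200)
selected = filter_wrapper_select(X, y, k_filter=10, k_final=2)
# `selected` recovers the two informative features, columns 2 and 7.
```

    The filter stage keeps the wrapper tractable by pruning the candidate pool before the expensive model-based search, which is the practical appeal of the hybrid.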

  12. Simplified matrix solid phase dispersion procedure for the determination of parabens and benzophenone-ultraviolet filters in human placental tissue samples.

    Science.gov (United States)

    Vela-Soria, F; Rodríguez, I; Ballesteros, O; Zafra-Gómez, A; Ballesteros, L; Cela, R; Navalón, A

    2014-12-05

    In recent decades, industrial development has resulted in the appearance of a large number of new chemicals that are able to produce disorders in the human endocrine system. These substances, so-called endocrine disrupting chemicals (EDCs), include many families of compounds, such as parabens and benzophenone-UV filters. Taking into account the demonstrated biological activity of these compounds, it is necessary to develop new analytical procedures to assess exposure, in order to establish, in an accurate way, relationships between EDCs and harmful health effects in the population. In the present work, a new method based on a simplified sample treatment by matrix solid phase dispersion (MSPD) followed by ultrahigh performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) analysis is validated for the determination of four parabens (methyl-, ethyl-, propyl- and butylparaben) and six benzophenone-UV filters (benzophenone-1, benzophenone-2, benzophenone-3, benzophenone-6, benzophenone-8 and 4-hydroxybenzophenone) in human placental tissue samples. The extraction parameters were optimized using multivariate optimization strategies. Ethylparaben ring-13C6 and benzophenone-d10 were used as surrogates. The limits of quantification ranged from 0.2 to 0.4 ng g(-1), and inter-day variability (evaluated as relative standard deviation) ranged from 5.4% to 12.8%. The method was validated using matrix-matched standard calibration followed by a recovery assay with spiked samples. Recovery rates ranged from 96% to 104%. The method was satisfactorily applied to the determination of the target compounds in human placental tissue samples collected at the moment of delivery from 10 randomly selected women. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Research on a Moving Human Tracking Algorithm Based on Kalman Filtering

    Institute of Scientific and Technical Information of China (English)

    乔坤; 郭朝勇; 史进伟

    2012-01-01

    A fast tracking algorithm for moving targets based on Kalman filtering is proposed. The Kalman filter's prediction step is used to predict the position of the moving human target in the next frame. The tracking algorithm was implemented in the Matlab simulation environment, and the experimental results show that it correctly predicts and estimates the human target's motion trend, with stable and reliable tracking performance. In addition, the algorithm converts the global image search into a local search, which reduces the amount of computation, meets the requirements of real-time tracking, and achieves fast tracking of the moving target.
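
    The predict step on which the tracker above relies can be sketched as a constant-velocity Kalman filter in NumPy. The state model, noise covariances and measurements below are illustrative assumptions; the paper's exact model is not given in the abstract.

```python
import numpy as np

dt = 1.0  # one frame per step
F = np.array([[1, 0, dt, 0],   # state: [x, y, vx, vy], constant-velocity model
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # only the position is observed
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (illustrative)
R = 0.1 * np.eye(2)            # measurement noise (illustrative)

x = np.zeros(4)
P = np.eye(4)
for frame in range(1, 20):
    z = np.array([2.0 * frame, 1.0 * frame])  # target moving at (2, 1) px/frame
    # Predict the state for this frame.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured position.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

predicted_next = (F @ x)[:2]   # centre of the local search window, next frame
```

    The predicted position defines the centre of a local search window in the next frame, which is what turns the global image search into a local one.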

  14. Compression of auditory space during forward self-motion.

    Directory of Open Access Journals (Sweden)

    Wataru Teramoto

    Full Text Available BACKGROUND: Spatial inputs from the auditory periphery can change with movements of the head or whole body relative to the sound source. Nevertheless, humans can perceive a stable auditory environment and appropriately react to a sound source. This suggests that the inputs are reinterpreted in the brain while being integrated with information on the movements. Little is known, however, about how these movements modulate auditory perceptual processing. Here, we investigate the effect of linear acceleration on auditory space representation. METHODOLOGY/PRINCIPAL FINDINGS: Participants were passively transported forward/backward at constant accelerations using a robotic wheelchair. An array of loudspeakers was aligned parallel to the motion direction along a wall to the right of the listener. A short noise burst was presented during the self-motion from one of the loudspeakers when the listener's physical coronal plane reached the location of one of the speakers (null point). In Experiments 1 and 2, the participants indicated in which direction the sound was presented, forward or backward relative to their subjective coronal plane. The results showed that the sound position aligned with the subjective coronal plane was displaced ahead of the null point only during forward self-motion, and that the magnitude of the displacement increased with increasing acceleration. Experiment 3 investigated the structure of the auditory space in the traveling direction during forward self-motion. The sounds were presented at various distances from the null point. The participants indicated the perceived sound location by pointing with a rod. All the sounds that were actually located in the traveling direction were perceived as being biased towards the null point. CONCLUSIONS/SIGNIFICANCE: These results suggest a distortion of the auditory space in the direction of movement during forward self-motion.
The underlying mechanism might involve anticipatory spatial

  15. An Auditory-Masking-Threshold-Based Noise Suppression Algorithm GMMSE-AMT[ERB] for Listeners with Sensorineural Hearing Loss

    Directory of Open Access Journals (Sweden)

    Hansen John HL

    2005-01-01

    Full Text Available This study describes a new noise suppression scheme for hearing aid applications based on the auditory masking threshold (AMT) in conjunction with a modified generalized minimum mean square error estimator (GMMSE) for individual subjects with hearing loss. The representation of cochlear frequency resolution is achieved in terms of auditory filter equivalent rectangular bandwidths (ERBs). Estimation of AMT and spreading functions for masking are implemented in two ways: with normal auditory thresholds and normal auditory filter bandwidths (GMMSE-AMT[ERB]-NH) and with elevated thresholds and broader auditory filters characteristic of cochlear hearing loss (GMMSE-AMT[ERB]-HI). Evaluation is performed using speech corpora with objective quality measures (segmental SNR, Itakura-Saito), along with formal listener evaluations of speech quality rating and intelligibility. While no measurable changes in intelligibility occurred, evaluations showed quality improvement with both algorithm implementations. However, the customized formulation based on individual hearing losses was similar in performance to the formulation based on the normal auditory system.
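
    The auditory-filter ERBs used above to represent cochlear frequency resolution are commonly computed with the Glasberg & Moore (1990) formula; a minimal sketch (the abstract itself does not state the parameterization used):

```python
import numpy as np

def erb_bandwidth(f_hz):
    """Equivalent rectangular bandwidth of the normal auditory filter centred
    at f_hz (Glasberg & Moore, 1990): ERB = 24.7 * (4.37 * f/1000 + 1)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def erb_rate(f_hz):
    """ERB-rate scale: the number of ERBs below f_hz."""
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

# Auditory filters broaden with centre frequency:
# erb_bandwidth(100) is about 35.5 Hz, erb_bandwidth(1000) about 132.6 Hz.
```

    A filterbank for a scheme like the one described would typically space channel centre frequencies uniformly on the ERB-rate scale rather than in hertz.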

  16. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity.

    Science.gov (United States)

    Slugocki, Christopher; Bosnyak, Daniel; Trainor, Laurel J

    2017-03-01

    Recent electrophysiological work has evinced a capacity for plasticity in subcortical auditory nuclei in human listeners. Similar plastic effects have been measured in cortically-generated auditory potentials but it is unclear how the two interact. Here we present Simultaneously-Evoked Auditory Potentials (SEAP), a method designed to concurrently elicit electrophysiological brain potentials from inferior colliculus, thalamus, and primary and secondary auditory cortices. Twenty-six normal-hearing adult subjects (mean 19.26 years, 9 male) were exposed to 2400 monaural (right-ear) presentations of a specially-designed stimulus which consisted of a pure-tone carrier (500 or 600 Hz) that had been amplitude-modulated at the sum of 37 and 81 Hz (depth 100%). Presentation followed an oddball paradigm wherein the pure-tone carrier was set to 500 Hz for 85% of presentations and pseudo-randomly changed to 600 Hz for the remaining 15% of presentations. Single-channel electroencephalographic data were recorded from each subject using a vertical montage referenced to the right earlobe. We show that SEAP elicits a 500 Hz frequency-following response (FFR; generated in inferior colliculus), 80 (subcortical) and 40 (primary auditory cortex) Hz auditory steady-state responses (ASSRs), mismatch negativity (MMN) and P3a (when there is an occasional change in carrier frequency; secondary auditory cortex) in addition to the obligatory N1-P2 complex (secondary auditory cortex). Analyses showed that subcortical and cortical processes are linked as (i) the latency of the FFR predicts the phase delay of the 40 Hz steady-state response, (ii) the phase delays of the 40 and 80 Hz steady-state responses are correlated, and (iii) the fidelity of the FFR predicts the latency of the N1 component. The SEAP method offers a new approach for measuring the dynamic encoding of acoustic features at multiple levels of the auditory pathway. As such, SEAP is a promising tool with which to study how
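
    The SEAP stimulus described above (a pure-tone carrier amplitude-modulated at the sum of 37 and 81 Hz, nominal 100% depth) can be generated as follows. The sampling rate and the exact normalization of the two-component modulator are assumptions:

```python
import numpy as np

fs = 16000                        # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # 1 s of signal

carrier = np.sin(2 * np.pi * 500 * t)   # 500 Hz standard carrier
mod = 0.5 * (np.sin(2 * np.pi * 37 * t) + np.sin(2 * np.pi * 81 * t))
envelope = (1.0 + mod) / 2.0      # scaled into [0, 1]: nominal 100% depth
stimulus = envelope * carrier

# The spectrum contains the 500 Hz carrier plus sidebands at 500 +/- 37 Hz
# and 500 +/- 81 Hz, one pair per modulation rate.
```

    A single such tone thus carries energy at the carrier for the frequency-following response and at the two modulation rates for the steady-state responses, which is what lets the paradigm probe several levels of the pathway at once.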

  17. Designing auditory cues for Parkinson's disease gait rehabilitation.

    Science.gov (United States)

    Cancela, Jorge; Moreno, Eugenio M; Arredondo, Maria T; Bonato, Paolo

    2014-01-01

    Recent works have shown that Parkinson's disease (PD) patients can benefit greatly from rehabilitation exercises based on audio cueing and music therapy. In particular, gait can benefit from repetitive sessions of exercises using auditory cues. Nevertheless, the experiments to date are based on the use of a metronome as the auditory stimulus. Within this work, Human-Computer Interaction methodologies have been used to design new cues that could support the long-term engagement of PD patients in these repetitive routines. The study has also been extended to commercial music and musical pieces by analyzing features and characteristics that could foster the engagement of PD patients in rehabilitation tasks.

  18. Anatomy, Physiology and Function of the Auditory System

    Science.gov (United States)

    Kollmeier, Birger

    The human ear consists of the outer ear (pinna or concha, outer ear canal, tympanic membrane), the middle ear (middle ear cavity with the three ossicles malleus, incus and stapes) and the inner ear (cochlea which is connected to the three semicircular canals by the vestibule, which provides the sense of balance). The cochlea is connected to the brain stem via the eighth brain nerve, i.e. the vestibular cochlear nerve or nervus statoacusticus. Subsequently, the acoustical information is processed by the brain at various levels of the auditory system. An overview about the anatomy of the auditory system is provided by Figure 1.

  19. CRYSTAL FILTER TEST SET

    Science.gov (United States)

    CRYSTAL FILTERS, *HIGH FREQUENCY, *RADIOFREQUENCY FILTERS, AMPLIFIERS, ELECTRIC POTENTIAL, FREQUENCY, IMPEDANCE MATCHING, INSTRUMENTATION, RADIOFREQUENCY, RADIOFREQUENCY AMPLIFIERS, TEST EQUIPMENT, TEST METHODS

  20. Role of the auditory system in speech production.

    Science.gov (United States)

    Guenther, Frank H; Hickok, Gregory

    2015-01-01

    This chapter reviews evidence regarding the role of auditory perception in shaping speech output. Evidence indicates that speech movements are planned to follow auditory trajectories. This in turn is followed by a description of the Directions Into Velocities of Articulators (DIVA) model, which provides a detailed account of the role of auditory feedback in speech motor development and control. A brief description of the higher-order brain areas involved in speech sequencing (including the pre-supplementary motor area and inferior frontal sulcus) is then provided, followed by a description of the Hierarchical State Feedback Control (HSFC) model, which posits internal error detection and correction processes that can detect and correct speech production errors prior to articulation. The chapter closes with a treatment of promising future directions of research into auditory-motor interactions in speech, including the use of intracranial recording techniques such as electrocorticography in humans, the investigation of the potential roles of various large-scale brain rhythms in speech perception and production, and the development of brain-computer interfaces that use auditory feedback to allow profoundly paralyzed users to learn to produce speech using a speech synthesizer.

  1. An auditory feature detection circuit for sound pattern recognition.

    Science.gov (United States)

    Schöneich, Stefan; Kostarakos, Konstantinos; Hedwig, Berthold

    2015-09-01

    From human language to birdsong and the chirps of insects, acoustic communication is based on amplitude and frequency modulation of sound signals. Whereas frequency processing starts at the level of the hearing organs, temporal features of the sound amplitude such as rhythms or pulse rates require processing by central auditory neurons. Besides several theoretical concepts, brain circuits that detect temporal features of a sound signal are poorly understood. We focused on acoustically communicating field crickets and show how five neurons in the brain of females form an auditory feature detector circuit for the pulse pattern of the male calling song. The processing is based on a coincidence detector mechanism that selectively responds when a direct neural response and an intrinsically delayed response to the sound pulses coincide. This circuit provides the basis for auditory mate recognition in field crickets and reveals a principal mechanism of sensory processing underlying the perception of temporal patterns.
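
    The delay-and-coincide mechanism described above can be sketched in a few lines. This is a toy model, not the authors' five-neuron circuit: a detector responds only when the direct response to a sound pulse coincides with an internally delayed response to the preceding pulse, which makes it selective for one pulse period. The 34 ms interval and the tolerance value are illustrative assumptions.

```python
# Toy delay-and-coincide feature detector: a unit "fires" when the direct
# response to a pulse coincides with the response to the previous pulse
# delayed by `preferred_interval_ms`. Only pulse trains whose period
# matches the internal delay produce coincidences.

def coincidences(pulse_times_ms, preferred_interval_ms, tolerance_ms=2):
    """Count pulses whose direct response coincides with the delayed
    response to the preceding pulse."""
    count = 0
    for prev, curr in zip(pulse_times_ms, pulse_times_ms[1:]):
        # the delayed copy of the previous pulse arrives at prev + delay
        if abs((prev + preferred_interval_ms) - curr) <= tolerance_ms:
            count += 1
    return count

# A detector tuned to a 34 ms pulse period (value chosen for illustration):
matching = [0, 34, 68, 102, 136]   # pulse period equals the internal delay
too_fast = [0, 20, 40, 60, 80]     # period shorter than the delay

print(coincidences(matching, 34))  # coincidence on every pulse pair
print(coincidences(too_fast, 34))  # no coincidences
```

    The same scheme generalizes to noisy spike times by widening the tolerance window.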

  2. The Comparing Auditory Discrimination in Blind and Sighted Subjects

    Directory of Open Access Journals (Sweden)

    Dr. Hassan Ashayeri

    2000-05-01

    Full Text Available Studying auditory discrimination in children and the role it plays in acquiring language skills is of great importance. The relationship between articulation disorders and the ability to discriminate speech sounds is also an important topic for speech and language researchers. Previous event-related potential (ERP) studies have suggested a possible participation of the visual cortex in audition in blind subjects. In this study, blind and sighted subjects were asked to discriminate 100 pairs of Farsi words (an auditory discrimination task) while listening to them from a recorded tape. The results showed that the blind subjects discriminated the heard material better than the sighted subjects (P<0.05). According to this study, in blind subjects cortical areas normally reserved for vision may be activated by other sensory modalities, in accordance with previous studies. We suggest that the auditory cortex expands in blind humans.

  3. Digital filters

    CERN Document Server

    Hamming, Richard W

    1997-01-01

    Digital signals occur in an increasing number of applications: in telephone communications; in radio, television, and stereo sound systems; and in spacecraft transmissions, to name just a few. This introductory text examines digital filtering: the processes of smoothing, predicting, differentiating, integrating, and separating signals, as well as the removal of noise from a signal. These processes bear particular relevance to computer applications, one of the focuses of this book. Readers will find Hamming's analysis accessible and engaging…
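
    Two of the operations the book covers, smoothing and differentiating, can be sketched with minimal filters. This is a pure-Python illustration of the general idea, not an example from the text; real applications would use a DSP library.

```python
# Minimal digital-filtering examples: smoothing with a 3-point moving
# average and differentiating with a first difference.

def moving_average(x, n=3):
    """Smooth x with an n-point running mean (output is shorter by n-1)."""
    return [sum(x[i:i + n]) / n for i in range(len(x) - n + 1)]

def first_difference(x, dt=1.0):
    """Approximate the derivative of x sampled at interval dt."""
    return [(b - a) / dt for a, b in zip(x, x[1:])]

noisy_ramp = [0.0, 1.2, 1.8, 3.1, 3.9, 5.2, 5.8]
print(moving_average(noisy_ramp))    # noise reduced, trend preserved
print(first_difference(noisy_ramp))  # slope estimates near 1 per sample
```

    Both are linear, finite-impulse-response filters, the simplest instances of the families the book analyzes in the frequency domain.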

  4. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Science.gov (United States)

    Afra, Pegah; Funke, Michael; Matsuo, Fumisuke

    2009-01-01

    Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as with the intake of hallucinogenic or psychedelic drugs. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. PMID:22110319

  5. Pitch-induced responses in the right auditory cortex correlate with musical ability in normal listeners.

    Science.gov (United States)

    Puschmann, Sebastian; Özyurt, Jale; Uppenkamp, Stefan; Thiel, Christiane M

    2013-10-23

    Previous work compellingly shows the existence of functional and structural differences in human auditory cortex related to superior musical abilities observed in professional musicians. In this study, we investigated the relationship between musical abilities and auditory cortex activity in normal listeners who had not received a professional musical education. We used functional MRI to measure auditory cortex responses related to auditory stimulation per se and the processing of pitch and pitch changes, which represents a prerequisite for the perception of musical sequences. Pitch-evoked responses in the right lateral portion of Heschl's gyrus were correlated positively with the listeners' musical abilities, which were assessed using a musical aptitude test. In contrast, no significant relationship was found for noise stimuli, lacking any musical information, and for responses induced by pitch changes. Our results suggest that superior musical abilities in normal listeners are reflected by enhanced neural encoding of pitch information in the auditory system.

  6. Modeling auditory perception of individual hearing-impaired listeners

    DEFF Research Database (Denmark)

    Jepsen, Morten Løve; Dau, Torsten

    showed that, in most cases, the reduced or absent cochlear compression, associated with outer hair-cell loss, quantitatively accounts for broadened auditory filters, while a combination of reduced compression and reduced inner hair-cell function accounts for decreased sensitivity and slower recovery from...... selectivity. Three groups of listeners were considered: (a) normal hearing listeners; (b) listeners with a mild-to-moderate sensorineural hearing loss; and (c) listeners with a severe sensorineural hearing loss. A fixed set of model parameters was derived for each hearing-impaired listener. The simulations...

  7. Weighting of Auditory Feedback Across the English Vowel Space

    OpenAIRE

    Purcell, David; Munhall, Kevin

    2008-01-01

    Auditory feedback in the headphones of talkers was manipulated in the F1 dimension using a real-time vowel formant filtering system. Minimum formant shifts required to elicit a response and the amount of compensation were measured for vowels across the English vowel space. The largest response in production of F1 was observed for the vowel /ε/ and smaller or non-significant changes were found for point vowels. In general, changes in production were of a compensatory nature that reduced the er...

  8. Thermography-based blood flow imaging in human skin of the hands and feet: a spectral filtering approach.

    Science.gov (United States)

    Sagaidachnyi, A A; Fomin, A V; Usanov, D A; Skripal, A V

    2017-02-01

    The determination of the relationship between skin blood flow and skin temperature dynamics is the main problem in thermography-based blood flow imaging. Oscillations in skin blood flow are the source of thermal waves propagating from micro-vessels toward the skin's surface, as assumed in this study. This hypothesis allows us to use equations for the attenuation and dispersion of thermal waves for converting the temperature signal into the blood flow signal, and vice versa. We developed a spectral filtering approach (SFA), which is a new technique for thermography-based blood flow imaging. In contrast to other processing techniques, the SFA implies calculations in the spectral domain rather than in the time domain. Therefore, it eliminates the need to solve differential equations. The developed technique was verified within 0.005-0.1 Hz, including the endothelial, neurogenic and myogenic frequency bands of blood flow oscillations. The algorithm for an inverse conversion of the blood flow signal into the skin temperature signal is addressed. Examples of blood flow imaging of the hands during cuff occlusion and of the feet during heating of the back are illustrated. The processing of infrared (IR) thermograms using the SFA allowed us to restore the blood flow signals and achieve correlations of about 0.8 with the waveform of a photoplethysmographic signal. The prospective applications of the thermography-based blood flow imaging technique include non-contact monitoring of the blood supply during engraftment of skin flaps and burn healing, as well as the use of contact temperature sensors to monitor low-frequency oscillations of peripheral blood flow.
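
    The core of the spectral-domain conversion can be sketched from the standard physics of thermal waves: a wave of frequency f launched at depth z is attenuated at the surface by exp(-z·sqrt(πf/D)), with D the thermal diffusivity of tissue, so dividing each spectral component of the surface-temperature signal by this factor approximately restores the oscillation amplitude at the vessel depth. This is a hedged sketch of the principle, not the authors' algorithm; the diffusivity, depth, and amplitude values are illustrative assumptions.

```python
import math

# Amplitude attenuation of a thermal wave travelling from an assumed
# micro-vessel depth z to the skin surface, and its per-frequency inverse.
D = 1.0e-7   # m^2/s, assumed tissue thermal diffusivity
z = 2.0e-3   # m, assumed micro-vessel depth

def attenuation(f_hz):
    """Amplitude attenuation factor exp(-z * sqrt(pi * f / D))."""
    return math.exp(-z * math.sqrt(math.pi * f_hz / D))

def restore_amplitude(surface_amp, f_hz):
    """Invert the attenuation for one spectral component."""
    return surface_amp / attenuation(f_hz)

# An endothelial-band (0.01 Hz) temperature oscillation at the surface:
f = 0.01          # Hz, within the verified 0.005-0.1 Hz range
a_surface = 0.05  # K, illustrative measured amplitude
print(attenuation(f))                   # damping from depth z to surface
print(restore_amplitude(a_surface, f))  # inferred amplitude at depth
```

    Applying this per-bin correction to an FFT of the temperature signal, then inverting the FFT, is the spectral-domain filtering the abstract contrasts with time-domain differential-equation solvers.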

  9. Developmental regulation of planar cell polarity and hair-bundle morphogenesis in auditory hair cells: lessons from human and mouse genetics.

    Science.gov (United States)

    Lu, Xiaowei; Sipe, Conor W

    2016-01-01

    Hearing loss is the most common and costly sensory defect in humans and genetic causes underlie a significant proportion of affected individuals. In mammals, sound is detected by hair cells (HCs) housed in the cochlea of the inner ear, whose function depends on a highly specialized mechanotransduction organelle, the hair bundle. Understanding the factors that regulate the development and functional maturation of the hair bundle is crucial for understanding the pathophysiology of human deafness. Genetic analysis of deafness genes in animal models, together with complementary forward genetic screens and conditional knock-out mutations in essential genes, have provided great insights into the molecular machinery underpinning hair-bundle development and function. In this review, we highlight recent advances in our understanding of hair-bundle morphogenesis, with an emphasis on the molecular pathways governing hair-bundle polarity and orientation. We next discuss the proteins and structural elements important for hair-cell mechanotransduction as well as hair-bundle cohesion and maintenance. In addition, developmental signals thought to regulate tonotopic features of HCs are introduced. Finally, novel approaches that complement classic genetics for studying the molecular etiology of human deafness are presented. WIREs Dev Biol 2016, 5:85-101. doi: 10.1002/wdev.202 For further resources related to this article, please visit the WIREs website.

  10. Experimental Evaluation of Auditory Cognition's Effects on Visual Cognition of Video

    Science.gov (United States)

    Kamitani, Tatsuo; Haruki, Kazuhito; Matsuda, Minoru

    This paper presents an experimental evaluation of the effects of auditory cognition on visual cognition of video. The influence of seven auditory stimuli on visual recognition was investigated from experimental data on key-down operations. Key-down operations for locating a moving target by visual and auditory images were monitored with a purpose-built experimental system comprising a VTR, CRT, data recorder, and other devices. Regression analysis and the EM algorithm were applied to the data from 350 key-down operations, collected from 50 people across 7 auditory stimulus types. The following characteristic results on the influence of auditory stimuli on visual recognition were derived. First, seven people responded too early in every experiment; the average and standard deviation of their response times were 439 ms and 231 ms, respectively. Second, the other forty-three people responded about 10 ms late in cases where auditory images were presented 30 ms or 60 ms before visual images, and about 10 ms early in the other cases. Third, because the visual image was the dominant information used for the key-down decision, no apparent effects of auditory images on the key-down operation were measured. The averages and standard deviations of the distributions estimated by the EM algorithm for the 7 auditory stimulus types are examined and verified against Card's MHP model of human response.

  11. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  12. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). But the functional organization of the pathway, in terms of time course of interactions between auditory, somatosensory, and motor regions, and the hemispheric lateralization pattern is largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  13. Spectral and temporal processing in rat posterior auditory cortex.

    Science.gov (United States)

    Pandya, Pritesh K; Rathbun, Daniel L; Moucha, Raluca; Engineer, Navzer D; Kilgard, Michael P

    2008-02-01

    The rat auditory cortex is divided anatomically into several areas, but little is known about the functional differences in information processing between these areas. To determine the filter properties of rat posterior auditory field (PAF) neurons, we compared neurophysiological responses to simple tones, frequency modulated (FM) sweeps, and amplitude modulated noise and tones with responses of primary auditory cortex (A1) neurons. PAF neurons have excitatory receptive fields that are on average 65% broader than A1 neurons. The broader receptive fields of PAF neurons result in responses to narrow and broadband inputs that are stronger than A1. In contrast to A1, we found little evidence for an orderly topographic gradient in PAF based on frequency. These neurons exhibit latencies that are twice as long as A1. In response to modulated tones and noise, PAF neurons adapt to repeated stimuli at significantly slower rates. Unlike A1, neurons in PAF rarely exhibit facilitation to rapidly repeated sounds. Neurons in PAF do not exhibit strong selectivity for rate or direction of narrowband one octave FM sweeps. These results indicate that PAF, like nonprimary visual fields, processes sensory information on larger spectral and longer temporal scales than primary cortex.

  14. Convergent Filter Bases

    OpenAIRE

    Coghetto Roland

    2015-01-01

    We are inspired by the work of Henri Cartan [16], Bourbaki [10] (TG. I Filtres) and Claude Wagschal [34]. We define the base of filter, image filter, convergent filter bases, limit filter and the filter base of tails (fr: filtre des sections).

  15. Convergent Filter Bases

    Directory of Open Access Journals (Sweden)

    Coghetto Roland

    2015-09-01

    Full Text Available We are inspired by the work of Henri Cartan [16], Bourbaki [10] (TG. I Filtres) and Claude Wagschal [34]. We define the base of filter, image filter, convergent filter bases, limit filter and the filter base of tails (fr: filtre des sections).
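
    The notions named in the abstract have standard Bourbaki-style definitions, sketched here in conventional notation (a paraphrase for orientation, not the article's Mizar formalization):

```latex
% Standard definitions (after Bourbaki/Cartan) of the notions named above.
A family $\mathcal{B}$ of subsets of a set $X$ is a \emph{filter base} if
$\mathcal{B} \neq \emptyset$, $\emptyset \notin \mathcal{B}$, and
\[
  \forall B_1, B_2 \in \mathcal{B} \;\; \exists B_3 \in \mathcal{B} :
  B_3 \subseteq B_1 \cap B_2 .
\]
In a topological space $X$, a filter base $\mathcal{B}$ \emph{converges} to
$x \in X$, written $\mathcal{B} \to x$, if every neighbourhood $V$ of $x$
contains some $B \in \mathcal{B}$. For a sequence $(x_n)$, the
\emph{filter base of tails} $\{\,\{x_m : m \geq n\} : n \in \mathbb{N}\,\}$
converges to $x$ exactly when $x_n \to x$ in the usual sense.
```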

  16. Nanophotonic filters for digital imaging

    Science.gov (United States)

    Walls, Kirsty

    There has been an increasing demand for low cost, portable CMOS image sensors because of increased integration, and new applications in the automotive, mobile communication and medical industries, amongst others. Colour reproduction remains imperfect in conventional digital image sensors, due to the limitations of the dye-based filters. Further improvement is required if the full potential of digital imaging is to be realised. In alternative systems, where accurate colour reproduction is a priority, existing equipment is too bulky for anything but specialist use. In this work both these issues are addressed by exploiting nanophotonic techniques to create enhanced trichromatic filters, and multispectral filters, all of which can be fabricated on-chip, i.e. integrated into a conventional digital image sensor, to create compact, low cost, mass-producible imaging systems with accurate colour reproduction. The trichromatic filters are based on plasmonic structures. They exploit the excitation of surface plasmon resonances in arrays of subwavelength holes in metal films to filter light. The currently known analytical expressions are inadequate for optimising all relevant parameters of a plasmonic structure. In order to obtain arbitrary filter characteristics, an automated design procedure was developed that integrated a genetic algorithm and a 3D finite-difference time-domain tool. The optimisation procedure's efficacy is demonstrated by designing a set of plasmonic filters that replicate the CIE (1931) colour matching functions, which themselves mimic the human eye's daytime colour response.
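
    The genetic-algorithm design loop can be sketched in miniature. This is a hedged illustration of the procedure's shape, not the thesis's implementation: the real work couples the GA to a 3D FDTD solver, whereas here a toy single-peak Lorentzian stands in for the plasmonic transmission model, and every numeric value is an assumption chosen for the demonstration.

```python
import math
import random

random.seed(0)
WAVELENGTHS = [400 + 10 * i for i in range(31)]  # nm, sampled spectrum
TARGET_PEAK, TARGET_WIDTH = 550.0, 40.0          # desired green filter

def transmission(peak_nm, width_nm, wl):
    """Toy Lorentzian stand-in for a hole-array transmission spectrum."""
    return 1.0 / (1.0 + ((wl - peak_nm) / width_nm) ** 2)

def fitness(genome):
    """Negative squared error between modelled and target spectra."""
    peak, width = genome
    return -sum((transmission(peak, width, wl)
                 - transmission(TARGET_PEAK, TARGET_WIDTH, wl)) ** 2
                for wl in WAVELENGTHS)

def evolve(generations=60, pop_size=20):
    pop = [(random.uniform(400, 700), random.uniform(10, 100))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # keep the fittest half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)    # crossover + mutation
            children.append(((a[0] + b[0]) / 2 + random.gauss(0, 5),
                             max(1.0, (a[1] + b[1]) / 2 + random.gauss(0, 2))))
        pop = parents + children
    return max(pop, key=fitness)

best_peak, best_width = evolve()
print(round(best_peak), round(best_width))  # the GA homes in near the target
```

    In the thesis the fitness evaluation is the expensive step (one FDTD run per genome), which is why the surrounding machinery is worth automating.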

  17. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times to auditory and audio-visual stimuli compared with normal-hearing (NH) listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may provide important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  18. Perception of Complex Auditory Scenes

    Science.gov (United States)

    2014-07-02

    facility is a 4.3-m diameter geodesic sphere housed in an anechoic chamber. 277 Bose 11-cm full-range loudspeakers are mounted on the surface of the...conduction to loudness judgments, hearing damage risk criteria, and auditory localization. The purpose of this line of research was to develop and

  19. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    2010-01-01

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  20. Auditory Temporal Conditioning in Neonates.

    Science.gov (United States)

    Franz, W. K.; And Others

    Twenty normal newborns, approximately 36 hours old, were tested using an auditory temporal conditioning paradigm which consisted of a slow rise, 75 db tone played for five seconds every 25 seconds, ten times. Responses to the tones were measured by instantaneous, beat-to-beat heartrate; and the test trial was designated as the 2 1/2-second period…

  1. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  2. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  3. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  4. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen eKaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  5. Molecular approach of auditory neuropathy

    Directory of Open Access Journals (Sweden)

    Magali Aparecida Orate Menezes da Silva

    2015-06-01

    Full Text Available INTRODUCTION: Mutations in the otoferlin gene are responsible for auditory neuropathy. OBJECTIVE: To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. METHODS: This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the otoferlin gene sites were amplified by polymerase chain reaction/restriction fragment length polymorphism. RESULTS: The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). CONCLUSION: There are differences at the molecular level in patients with and without auditory neuropathy.

  6. Influence of a preceding auditory stimulus on evoked potential of the succeeding stimulus

    Institute of Scientific and Technical Information of China (English)

    WANG Mingshi; LIU Zhongguo; ZHU Qiang; LIU Jin; WANG Liqun; LIU Haiying

    2004-01-01

    In the present study, we investigated the influence of a preceding auditory stimulus on the auditory-evoked potential (AEP) of a succeeding stimulus when human subjects were presented with a pair of auditory stimuli. We found that the evoked potential of the succeeding stimulus was completely inhibited by the preceding stimulus when the inter-stimulus interval (ISI) was shorter than 150 ms. This influence depended on the ISI of the two stimuli: the shorter the ISI, the stronger the influence. The inhibitory influence of the preceding stimulus might be caused by the neural refractory effect.

  7. NOVEL MICROWAVE FILTER DESIGN TECHNIQUES.

    Science.gov (United States)

    ELECTROMAGNETIC WAVE FILTERS, MICROWAVE FREQUENCY, PHASE SHIFT CIRCUITS, BANDPASS FILTERS, TUNED CIRCUITS, NETWORKS, IMPEDANCE MATCHING, LOW PASS FILTERS, MULTIPLEXING, MICROWAVE EQUIPMENT, WAVEGUIDE FILTERS, WAVEGUIDE COUPLERS.

  8. Miniaturized dielectric waveguide filters

    Science.gov (United States)

    Sandhu, Muhammad Y.; Hunter, Ian C.

    2016-10-01

    Design techniques for a new class of integrated monolithic high-permittivity ceramic waveguide filters are presented. These filters enable a size reduction of 50% compared to air-filled transverse electromagnetic filters with the same unloaded Q-factor. Designs for Chebyshev and asymmetric generalised Chebyshev filters and a diplexer are presented, with experimental results for an 1800 MHz Chebyshev filter and a 1700 MHz generalised Chebyshev filter showing excellent agreement with theory.
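
    The equal-ripple (Chebyshev) response underlying these designs can be sketched as a prototype magnitude function rather than a waveguide realisation: |H(jω)|² = 1 / (1 + ε²·Tₙ(ω/ωc)²), where Tₙ is the n-th Chebyshev polynomial and ε sets the passband ripple. The order and ripple values below are illustrative, not taken from the paper.

```python
import math

def chebyshev_Tn(n, x):
    """Chebyshev polynomial of the first kind, valid for all real x."""
    if abs(x) <= 1:
        return math.cos(n * math.acos(x))
    sign = 1 if x > 0 or n % 2 == 0 else -1
    return sign * math.cosh(n * math.acosh(abs(x)))

def magnitude(w, wc=1.0, n=5, ripple_db=0.5):
    """Chebyshev lowpass prototype magnitude |H(jw)|."""
    eps = math.sqrt(10 ** (ripple_db / 10) - 1)
    return 1.0 / math.sqrt(1 + (eps * chebyshev_Tn(n, w / wc)) ** 2)

# In the passband the response ripples between 1 and the ripple floor;
# past cutoff it rolls off steeply (order-5, 0.5 dB ripple here):
print(magnitude(0.0))   # 1.0 at dc for odd order
print(magnitude(1.0))   # ripple-floor value at the band edge
print(magnitude(2.0))   # deep in the stopband
```

    A bandpass waveguide filter maps this prototype onto coupled resonators; the generalised Chebyshev variants in the paper additionally place finite transmission zeros for asymmetric responses.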

  9. Linear filtering of images based on properties of vision.

    Science.gov (United States)

    Algazi, V R; Ford, G E; Chen, H

    1995-01-01

    The design of linear image filters based on properties of human visual perception has been shown to require the minimization of criterion functions in both the spatial and frequency domains. We extend this approach to continuous filters of infinite support. For lowpass filters, this leads to the concept of an ideal lowpass image filter that provides a response that is superior perceptually to that of the classical ideal lowpass filter.
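
    The classical "ideal" lowpass filter that the abstract takes as its point of comparison is a brick-wall cut in the frequency domain: zero every spectral bin above a cutoff, then transform back. The sketch below demonstrates this on a 1D step with a pure-Python DFT; it shows the Gibbs ringing that motivates perceptually designed alternatives, and is not the paper's perceptual filter itself.

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def ideal_lowpass(x, cutoff_bins):
    """Keep DFT bins within cutoff_bins of dc (both ends), zero the rest."""
    X = dft(x)
    n = len(X)
    kept = [X[k] if (k <= cutoff_bins or k >= n - cutoff_bins) else 0
            for k in range(n)]
    return idft(kept)

# A step edge: brick-wall filtering smooths it but adds over- and
# undershoot around the edge (Gibbs ringing).
step = [0.0] * 8 + [1.0] * 8
print([round(v, 2) for v in ideal_lowpass(step, 2)])
```

    A perceptually designed lowpass filter instead shapes the transition band so that the ringing falls below visibility thresholds, which is the sense in which it can be "superior perceptually" at the same nominal cutoff.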

  10. Quadri-stability of a spatially ambiguous auditory illusion

    Directory of Open Access Journals (Sweden)

    Constance May Bainbridge

    2015-01-01

    Full Text Available In addition to vision, audition plays an important role in sound localization in our world. One way we estimate the motion of an auditory object moving towards or away from us is from changes in volume intensity. However, the human auditory system has unequally distributed spatial resolution, including difficulty distinguishing sounds in front versus behind the listener. Here, we introduce a novel quadri-stable illusion, the Transverse-and-Bounce Auditory Illusion, which combines front-back confusion with changes in volume levels of a nonspatial sound to create ambiguous percepts of an object approaching and withdrawing from the listener. The sound can be perceived as traveling transversely from front to back or back to front, or bouncing to remain exclusively in front of or behind the observer. Here we demonstrate how human listeners experience this illusory phenomenon by comparing ambiguous and unambiguous stimuli for each of the four possible motion percepts. When asked to rate their confidence in perceiving each sound's motion, participants reported equal confidence for the illusory and unambiguous stimuli. Participants perceived all four illusory motion percepts and could not distinguish the illusion from the unambiguous stimuli. These results show that this illusion is effectively quadri-stable. In a second experiment, the illusory stimulus was looped continuously in headphones while participants identified its perceived path of motion to test properties of perceptual switching, locking, and biases. Participants were biased towards perceiving transverse compared to bouncing paths, and they became perceptually locked into alternating between front-to-back and back-to-front percepts, perhaps reflecting how auditory objects commonly move in the real world. This multi-stable auditory illusion opens opportunities for studying the perceptual, cognitive, and neural representation of objects in motion, as well as exploring multimodal perceptual

  11. Computational spectrotemporal auditory model with applications to acoustical information processing

    Science.gov (United States)

    Chi, Tai-Shih

    A computational spectrotemporal auditory model based on neurophysiological findings in the early auditory and cortical stages is described. The model provides a unified multiresolution representation of the spectral and temporal features of sound likely critical in the perception of timbre. Several types of complex stimuli are used to demonstrate the spectrotemporal information preserved by the model. As these examples show, this two-stage model reflects the apparent progressive loss of temporal dynamics along the auditory pathway, from rapid phase-locking (several kHz in the auditory nerve), to moderate rates of synchrony (several hundred Hz in the midbrain), to much lower modulation rates in the cortex (around 30 Hz). To complete the model, several projection-based reconstruction algorithms are implemented to resynthesize the sound from representations with reduced dynamics. One particular application of this model is to assess speech intelligibility. The spectrotemporal modulation transfer functions (MTFs) of the model are investigated and shown to be consistent with the salient trends in the human MTFs (derived from human detection thresholds), which exhibit a lowpass shape with respect to both spectral and temporal dimensions, with 50% bandwidths of about 16 Hz and 2 cycles/octave. The model is therefore used to demonstrate the potential relevance of these MTFs to the assessment of speech intelligibility in noisy and reverberant conditions. Another useful feature is the phase singularity that emerges in the scale space generated by this multiscale auditory model. The singularity is shown to have certain robust properties and to carry crucial information about the spectral profile. This claim is justified by perceptually tolerable sounds resynthesized from the nonconvex singularity set. In addition, the singularity set is demonstrated to encode the pitch and formants at different scales. These properties make the singularity set very suitable for traditional
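
    The lowpass MTF trend described above (50% bandwidths near 16 Hz and 2 cycles/octave) can be mimicked with a crude 2-D modulation filter applied to a spectrogram. This is a hypothetical sketch, not the model in the record; the function name and parameters are illustrative:

```python
import numpy as np

def modulation_lowpass(spectrogram, frame_rate, bins_per_octave,
                       rate_cut=16.0, scale_cut=2.0):
    # Keep only temporal modulations below rate_cut (Hz) and spectral
    # modulations below scale_cut (cycles/octave), mimicking the
    # lowpass shape of the human modulation transfer function.
    F = np.fft.fft2(spectrogram)
    # Rows = frequency bins spaced 1/bins_per_octave octaves apart;
    # columns = frames spaced 1/frame_rate seconds apart.
    scales = np.abs(np.fft.fftfreq(spectrogram.shape[0], d=1.0 / bins_per_octave))
    rates = np.abs(np.fft.fftfreq(spectrogram.shape[1], d=1.0 / frame_rate))
    mask = (scales[:, None] <= scale_cut) & (rates[None, :] <= rate_cut)
    return np.real(np.fft.ifft2(F * mask))
```

    Slowly varying spectrograms pass through such a filter almost unchanged, while fast temporal or fine spectral modulations are removed, which is the sense in which the MTF predicts intelligibility loss in noise and reverberation.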

  12. Theory of Auditory Thresholds in Primates

    Science.gov (United States)

    Harrison, Michael J.

    2001-03-01

    The influence of thermal pressure fluctuations at the tympanic membrane has previously been investigated as a possible determinant of the threshold of hearing in humans (L.J. Sivian and S.D. White, J. Acoust. Soc. Am. IV, 4;288 (1933)). More recent work has focused more precisely on the relation between statistical mechanics and sensory signal processing by biological means in creatures' brains (W. Bialek, in "Physics of Biological Systems: from molecules to species", H. Flyvberg et al. (Eds), p. 252; Springer 1997). Clinical data on the frequency dependence of hearing thresholds in humans and other primates (W.C. Stebbins, "The Acoustic Sense of Animals", Harvard 1983) have long been available. I have derived an expression for the frequency dependence of hearing thresholds in primates, including humans, by first calculating the frequency dependence of thermal pressure fluctuations at eardrums from damped normal modes excited in model ear canals of given simple geometry. I then show that most of the features of the clinical data are directly related to the frequency dependence of the ratio of thermal noise pressure arising from without to that arising from within the masking bandwidth which signals must dominate in order to be sensed. The higher intensity of threshold signals in primates smaller than humans, clinically observed over much but not all of the human auditory spectrum, is shown to arise from their smaller meatus dimensions.
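
    The ear-canal normal modes invoked above behave, to first order, like the resonances of a tube closed at one end (the eardrum): f_n = (2n - 1)c / 4L. A toy calculation under assumed illustrative dimensions, not the model geometry used in the record:

```python
def canal_resonances(length_m, c=343.0, n_modes=3):
    # Quarter-wave resonances of a tube closed at one end:
    # f_n = (2n - 1) * c / (4 L).  c = 343 m/s is room-temperature air.
    return [(2 * n - 1) * c / (4.0 * length_m) for n in range(1, n_modes + 1)]

# Illustrative lengths: ~25 mm for a human canal, ~15 mm for a smaller primate.
print(canal_resonances(0.025)[0])  # fundamental → 3430.0 Hz
```

    A shorter meatus pushes the resonances upward in frequency, consistent with the record's point that smaller primates show threshold curves shifted relative to humans.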

  13. Changes in otoacoustic emissions during selective auditory and visual attention.

    Science.gov (United States)

    Walsh, Kyle P; Pasanen, Edward G; McFadden, Dennis

    2015-05-01

    Previous studies have demonstrated that the otoacoustic emissions (OAEs) measured during behavioral tasks can have different magnitudes when subjects are attending selectively or not attending. The implication is that the cognitive and perceptual demands of a task can affect the first neural stage of auditory processing: the sensory receptors themselves. However, the directions of the reported attentional effects have been inconsistent, the magnitudes of the observed differences typically have been small, and comparisons across studies have been made difficult by significant procedural differences. In this study, a nonlinear version of the stimulus-frequency OAE (SFOAE), called the nSFOAE, was used to measure cochlear responses from human subjects while they simultaneously performed behavioral tasks requiring selective auditory attention (dichotic or diotic listening), selective visual attention, or relative inattention. Within subjects, the differences in nSFOAE magnitude between inattention and attention conditions were about 2-3 dB for both auditory and visual modalities, and the effect sizes for the differences typically were large for both nSFOAE magnitude and phase. These results reveal that the cochlear efferent reflex is differentially active during selective attention and inattention, for both auditory and visual tasks, although they do not reveal how attention is improved when efferent activity is greater.

  14. Gender differences in identifying emotions from auditory and visual stimuli.

    Science.gov (United States)

    Waaramaa, Teija

    2017-12-01

    The present study focused on gender differences in emotion identification from auditory and visual stimuli produced by two male and two female actors. Differences in emotion identification from nonsense samples, language samples and prolonged vowels were investigated. It was also studied whether auditory stimuli can convey the emotional content of speech without visual stimuli, and whether visual stimuli can convey the emotional content of speech without auditory stimuli. The aim was to get a better knowledge of vocal attributes and a more holistic understanding of the nonverbal communication of emotion. Females tended to be more accurate in emotion identification than males. Voice quality parameters played a role in emotion identification in both genders. The emotional content of the samples was best conveyed by nonsense sentences, better than by prolonged vowels or shared native language of the speakers and participants. Thus, vocal non-verbal communication tends to affect the interpretation of emotion even in the absence of language. The emotional stimuli were better recognized from visual stimuli than auditory stimuli by both genders. Visual information about speech may not be connected to the language; instead, it may be based on the human ability to understand the kinetic movements in speech production more readily than the characteristics of the acoustic cues.

  16. Binaural auditory beats affect vigilance performance and mood.

    Science.gov (United States)

    Lane, J D; Kasian, S J; Owens, J E; Marsh, G R

    1998-01-01

    When two tones of slightly different frequency are presented separately to the left and right ears the listener perceives a single tone that varies in amplitude at a frequency equal to the frequency difference between the two tones, a perceptual phenomenon known as the binaural auditory beat. Anecdotal reports suggest that binaural auditory beats within the electroencephalograph frequency range can entrain EEG activity and may affect states of consciousness, although few scientific studies have been published. This study compared the effects of binaural auditory beats in the EEG beta and EEG theta/delta frequency ranges on mood and on performance of a vigilance task to investigate their effects on subjective and objective measures of arousal. Participants (n = 29) performed a 30-min visual vigilance task on three different days while listening to pink noise containing simple tones or binaural beats either in the beta range (16 and 24 Hz) or the theta/delta range (1.5 and 4 Hz). However, participants were kept blind to the presence of binaural beats to control expectation effects. Presentation of beta-frequency binaural beats yielded more correct target detections and fewer false alarms than presentation of theta/delta frequency binaural beats. In addition, the beta-frequency beats were associated with less negative mood. Results suggest that the presentation of binaural auditory beats can affect psychomotor performance and mood. This technology may have applications for the control of attention and arousal and the enhancement of human performance.
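
    The stimulus construction described above, two tones of slightly different frequency presented one per ear, is straightforward to sketch. The carrier and beat frequencies below are illustrative, chosen only to land a beat in the beta range used in the study:

```python
import math

def binaural_beat(f_left, f_right, dur_s=1.0, sr=44100):
    # Build left/right sine-tone sample arrays; the perceived beat
    # rate equals the frequency difference between the two ears.
    n = int(dur_s * sr)
    left = [math.sin(2 * math.pi * f_left * t / sr) for t in range(n)]
    right = [math.sin(2 * math.pi * f_right * t / sr) for t in range(n)]
    return left, right

# A 16 Hz "beta" beat carried on a 400 Hz tone:
left, right = binaural_beat(400.0, 416.0, dur_s=0.1)
print(len(left))  # → 4410 samples
```

    In practice each channel would be mixed with pink noise and written to a stereo file; the beat itself exists only as a percept created by binaural interaction, not in either waveform alone.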

  17. Auditory intensity processing: Effect of MRI background noise.

    Science.gov (United States)

    Angenstein, Nicole; Stadler, Jörg; Brechmann, André

    2016-03-01

    Studies on active auditory intensity discrimination in humans have shown equivocal results regarding the lateralization of processing. Whereas experiments with moderate background noise found evidence for right-lateralized processing of intensity, functional magnetic resonance imaging (fMRI) studies with background scanner noise suggest more left-lateralized processing. With the present fMRI study, we compared the task-dependent lateralization of intensity processing between a conventional continuous echo planar imaging (EPI) sequence with loud background scanner noise and a fast low-angle shot (FLASH) sequence with soft background scanner noise. To determine the lateralization of the processing, we employed the contralateral noise procedure. Linearly frequency modulated (FM) tones were presented monaurally with and without contralateral noise. During both the EPI and the FLASH measurements, the left auditory cortex was more strongly involved than the right auditory cortex while participants categorized the intensity of FM tones, as shown by a strong effect of the additional contralateral noise on activity in the left auditory cortex. Thus even a massive reduction in background scanner noise still leads to a significantly left-lateralized effect, suggesting that the reversed lateralization in fMRI studies with loud background noise, in contrast to studies with a softer background, cannot be fully explained by the MRI background noise.

  18. Relating the Variability of Tone-Burst Otoacoustic Emission and Auditory Brainstem Response Latencies to the Underlying Cochlear Mechanics

    Science.gov (United States)

    Verhulst, Sarah; Shera, Christopher A.

    2016-01-01

    Forward and reverse cochlear latency and its relation to the frequency tuning of the auditory filters can be assessed using tone bursts (TBs). Otoacoustic emissions (TBOAEs) estimate the cochlear roundtrip time, while auditory brainstem responses (ABRs) to the same stimuli aim at measuring the auditory filter buildup time. Latency ratios are generally close to two and controversy exists about the relationship of this ratio to cochlear mechanics. We explored why the two methods provide different estimates of filter buildup time, and ratios with large inter-subject variability, using a time-domain model for OAEs and ABRs. We compared latencies for twenty models, in which all parameters but the cochlear irregularities responsible for reflection-source OAEs were identical, and found that TBOAE latencies were much more variable than ABR latencies. Multiple reflection-sources generated within the evoking stimulus bandwidth were found to shape the TBOAE envelope and complicate the interpretation of TBOAE latency and TBOAE/ABR ratios in terms of auditory filter tuning. PMID:27175040

  19. Determination of human-use pharmaceuticals in filtered water by direct aqueous injection: high-performance liquid chromatography/tandem mass spectrometry

    Science.gov (United States)

    Furlong, Edward T.; Noriega, Mary C.; Kanagy, Christopher J.; Kanagy, Leslie K.; Coffey, Laura J.; Burkhardt, Mark R.

    2014-01-01

    This report describes a method for the determination of 110 human-use pharmaceuticals using a 100-microliter aliquot of a filtered water sample directly injected into a high-performance liquid chromatograph coupled to a triple-quadrupole tandem mass spectrometer using an electrospray ionization source operated in the positive ion mode. The pharmaceuticals were separated by using a reversed-phase gradient of formic acid/ammonium formate-modified water and methanol. Multiple reaction monitoring of two fragmentations of the protonated molecular ion of each pharmaceutical to two unique product ions was used to identify each pharmaceutical qualitatively. The primary multiple reaction monitoring precursor-product ion transition was quantified for each pharmaceutical relative to the primary multiple reaction monitoring precursor-product transition of one of 19 isotope-dilution standard pharmaceuticals or the pesticide atrazine, using an exact stable isotope analogue where possible. Each isotope-dilution standard was selected, when possible, for its chemical similarity to the unlabeled pharmaceutical of interest, and added to the sample after filtration but prior to analysis. Method performance for each pharmaceutical was determined for reagent water, groundwater, treated drinking water, surface water, treated wastewater effluent, and wastewater influent sample matrixes that this method will likely be applied to. Each matrix was evaluated in order of increasing complexity to demonstrate (1) the sensitivity of the method in different water matrixes and (2) the effect of sample matrix, particularly matrix enhancement or suppression of the precursor ion signal, on the quantitative determination of pharmaceutical concentrations. Recovery of water samples spiked (fortified) with the suite of pharmaceuticals determined by this method typically was greater than 90 percent in reagent water, groundwater, drinking water, and surface water. Correction for ambient environmental
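
    Quantification relative to an isotope-dilution standard, as described above, reduces to a peak-area ratio scaled by a calibration response factor. A hedged sketch only; the function and its arguments are illustrative, not the report's actual calculation procedure:

```python
def isotope_dilution_conc(area_analyte, area_istd, conc_istd, rrf):
    # Analyte concentration from the peak-area ratio of analyte to
    # labeled internal standard, scaled by the standard's known
    # concentration and a relative response factor (rrf) from
    # calibration.  Because the labeled standard co-elutes and is
    # fragmented identically, matrix suppression largely cancels.
    return (area_analyte / area_istd) * conc_istd / rrf

# Illustrative numbers: analyte peak twice the standard's, 50 ng/L
# standard, unit response factor:
print(isotope_dilution_conc(2.0, 1.0, 50.0, 1.0))  # → 100.0
```

    This cancellation of matrix effects is why the report stresses choosing an exact stable isotope analogue wherever possible.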

  20. Genetics of auditory mechano-electrical transduction.

    Science.gov (United States)

    Michalski, Nicolas; Petit, Christine

    2015-01-01

    The hair bundles of cochlear hair cells play a central role in the auditory mechano-electrical transduction (MET) process. The identification of MET components and of associated molecular complexes by biochemical approaches is impeded by the very small number of hair cells within the cochlea. In contrast, human and mouse genetics have proven to be particularly powerful. The study of inherited forms of deafness led to the discovery of several essential proteins of the MET machinery, which are currently used as entry points to decipher the associated molecular networks. Notably, MET relies not only on the MET machinery but also on several elements ensuring the proper sound-induced oscillation of the hair bundle or the ionic environment necessary to drive the MET current. Here, we review the most significant advances in the molecular bases of the MET process that emerged from the genetics of hearing.

  1. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response found in many other auditory nuclei that may be useful for emphasizing the onset of stimuli. However, it is at higher levels of the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
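
    The firing-rate adaptation mentioned above, which emphasizes stimulus onsets in auditory nerve fibers, is often summarized as an exponential decay toward a steady-state rate. A toy model with illustrative parameters, not fitted to any data in the review:

```python
import math

def adapted_rate(t, r0, r_ss, tau):
    # Exponential firing-rate adaptation: the onset rate r0 decays
    # toward a lower steady-state rate r_ss with time constant tau,
    # so the response transiently emphasizes the stimulus onset.
    return r_ss + (r0 - r_ss) * math.exp(-t / tau)

# Illustrative: 200 spikes/s at onset decaying to 50 spikes/s, tau = 50 ms
print(adapted_rate(0.0, 200.0, 50.0, 0.05))   # → 200.0
print(adapted_rate(0.2, 200.0, 50.0, 0.05))   # close to 50 after 4 tau
```

    Stimulus-specific adaptation is richer than this single-channel decay, since the adapted state is tied to particular stimulus features rather than to overall drive.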

  2. Auditory adaptation improves tactile frequency perception.

    Science.gov (United States)

    Crommett, Lexi E; Pérez-Bellido, Alexis; Yau, Jeffrey M

    2017-01-11

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals shape tactile processing is unclear: perceptual interactions between contemporaneous sounds and vibrations are consistent with multiple neural mechanisms. Here we used a crossmodal adaptation paradigm, which separated auditory and tactile stimulation in time, to test the hypothesis that tactile frequency perception depends on neural circuits that also process auditory frequency. We reasoned that auditory adaptation effects would transfer to touch only if signals from both senses converge on common representations. We found that auditory adaptation can improve tactile frequency discrimination thresholds. This occurred only when adaptor and test frequencies overlapped. In contrast, auditory adaptation did not influence tactile intensity judgments. Thus, auditory adaptation enhances touch in a frequency- and feature-specific manner. A simple network model in which tactile frequency information is decoded from sensory neurons that are susceptible to auditory adaptation recapitulates these behavioral results. Our results imply that the neural circuits supporting tactile frequency perception also process auditory signals. This finding is consistent with the notion of supramodal operators performing canonical operations, like temporal frequency processing, regardless of input modality.

  3. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can profoundly affect the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  4. Cutaneous sensory nerve as a substitute for auditory nerve in solving deaf-mutes’ hearing problem: an innovation in multi-channel-array skin-hearing technology

    OpenAIRE

    Li, Jianwen; Li, Yan; Ming ZHANG; Ma, Weifang; Ma, Xuezong

    2014-01-01

    The current use of hearing aids and artificial cochleas for deaf-mute individuals depends on their auditory nerve. Skin-hearing technology, a patented system developed by our group, uses a cutaneous sensory nerve to substitute for the auditory nerve to help deaf-mutes to hear sound. This paper introduces a new solution, multi-channel-array skin-hearing technology, to solve the problem of speech discrimination. Based on the filtering principle of hair cells, external voice signals at different...
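
    The "filtering principle of hair cells" mentioned above suggests assigning each skin channel its own frequency band. A hypothetical sketch of log-spaced channel center frequencies; the spacing and range are illustrative, not the patented system's actual design:

```python
import math

def channel_centers(f_lo=100.0, f_hi=8000.0, n_channels=16):
    # Log-spaced center frequencies for a multi-channel array,
    # loosely following the cochlea's roughly logarithmic
    # frequency-to-place map (all parameters are illustrative).
    step = math.log(f_hi / f_lo) / (n_channels - 1)
    return [f_lo * math.exp(k * step) for k in range(n_channels)]

centers = channel_centers()
print(round(centers[0]), round(centers[-1]))  # → 100 8000
```

    Each channel's bandpass output would then drive one skin stimulator, so that the spatial pattern on the skin encodes the spectral content needed for speech discrimination.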

  5. Feeling music: integration of auditory and tactile inputs in musical meter perception.

    Science.gov (United States)

    Huang, Juan; Gamble, Darik; Sarnlertsophon, Kristine; Wang, Xiaoqin; Hsiao, Steven

    2012-01-01

    Musicians often say that they not only hear, but also "feel" music. To explore the contribution of tactile information in "feeling" musical rhythm, we investigated the degree that auditory and tactile inputs are integrated in humans performing a musical meter recognition task. Subjects discriminated between two types of sequences, 'duple' (march-like rhythms) and 'triple' (waltz-like rhythms) presented in three conditions: 1) Unimodal inputs (auditory or tactile alone), 2) Various combinations of bimodal inputs, where sequences were distributed between the auditory and tactile channels such that a single channel did not produce coherent meter percepts, and 3) Simultaneously presented bimodal inputs where the two channels contained congruent or incongruent meter cues. We first show that meter is perceived similarly well (70%-85%) when tactile or auditory cues are presented alone. We next show in the bimodal experiments that auditory and tactile cues are integrated to produce coherent meter percepts. Performance is high (70%-90%) when all of the metrically important notes are assigned to one channel and is reduced to 60% when half of these notes are assigned to one channel. When the important notes are presented simultaneously to both channels, congruent cues enhance meter recognition (90%). Performance drops dramatically when subjects were presented with incongruent auditory cues (10%), as opposed to incongruent tactile cues (60%), demonstrating that auditory input dominates meter perception. We believe that these results are the first demonstration of cross-modal sensory grouping between any two senses.

  7. Musical experience sharpens human cochlear tuning.

    Science.gov (United States)

    Bidelman, Gavin M; Nelms, Caitlin; Bhagat, Shaum P

    2016-05-01

    The mammalian cochlea functions as a filter bank that performs a spectral, Fourier-like decomposition of the acoustic signal. While tuning can be compromised (e.g., broadened with hearing impairment), whether human cochlear frequency resolution can be sharpened through experiential factors (e.g., training or learning) has not yet been established. Previous studies have demonstrated sharper psychophysical tuning curves in trained musicians compared to nonmusicians, implying superior peripheral tuning. However, these findings are based on perceptual masking paradigms and reflect engagement of the entire auditory system rather than cochlear tuning per se. Here, by directly mapping physiological tuning curves from stimulus frequency otoacoustic emissions (SFOAEs), sounds emitted by the cochlea, we show that estimates of human cochlear tuning in a high-frequency cochlear region (4 kHz) are further sharpened (by a factor of 1.5×) in musicians and improve with the number of years of auditory training. These findings were corroborated by measurements of psychophysical tuning curves (PTCs) derived via simultaneous masking, which similarly showed sharper tuning in musicians. Comparisons between SFOAEs and PTCs revealed closer correspondence between physiological and behavioral curves in musicians, indicating that tuning is also more consistent between different levels of auditory processing in trained ears. Our findings demonstrate an experience-dependent enhancement in the resolving power of the cochlear sensory epithelium and the spectral resolution of human hearing, and provide a peripheral account for the auditory perceptual benefits observed in musicians. Both local and feedback (e.g., medial olivocochlear efferent) mechanisms are discussed as potential mechanisms for experience-dependent tuning. Copyright © 2016 Elsevier B.V. All rights reserved.
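
    Sharpness of tuning curves like those above is commonly summarized by a quality factor such as Q10 = CF / BW10dB (center frequency over the bandwidth 10 dB above the tip). A toy calculation with illustrative bandwidths, not values from the study, showing what a 1.5× sharpening means:

```python
def q10(cf_hz, bw_10db_hz):
    # Tuning sharpness: center frequency divided by the bandwidth
    # measured 10 dB above the tuning-curve tip.  Larger Q10 means
    # sharper frequency tuning.
    return cf_hz / bw_10db_hz

# Illustrative: at 4 kHz, a 1.5x sharpening corresponds to a
# proportionally narrower 10 dB bandwidth.
print(q10(4000.0, 800.0))        # → 5.0
print(q10(4000.0, 800.0 / 1.5))  # → 7.5
```

    The same ratio-based logic applies to the QERB measures typically derived from SFOAE delays, which is how physiological sharpness estimates are compared across listeners.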

  8. Caution and Warning Alarm Design and Evaluation for NASA CEV Auditory Displays: SHFE Information Presentation Directed Research Project (DRPP) report 12.07

    Science.gov (United States)

    Begault, Durand R.; Godfroy, Martine; Sandor, Aniko; Holden, Kritina

    2008-01-01

    The design of caution-warning signals for NASA's Crew Exploration Vehicle (CEV) and other future spacecraft will be based both on best practices drawn from current research and on evaluation of current alarms. A design approach is presented based upon cross-disciplinary examination of psychoacoustic research, human factors experience, aerospace practices, and acoustical engineering requirements. A listening test with thirteen participants was performed involving ranking and grading of current and newly developed caution-warning stimuli under three conditions: (1) alarm levels adjusted for compliance with ISO 7731, "Danger signals for work places - Auditory danger signals"; (2) alarm levels adjusted to an overall 15 dBA s/n ratio; and (3) simulated codec low-pass filtering. Questionnaire data yielded useful insights regarding cognitive associations with the sounds.
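
    Condition (2) above fixes each alarm at a 15 dBA signal-to-noise ratio over the background. Since incoherent sources add energetically, the arithmetic can be made explicit; the background level below is illustrative, not a measured CEV value:

```python
import math

def combined_level_db(levels_db):
    # Energetic sum of incoherent sound sources in dB:
    # L_total = 10 * log10( sum 10^(Li/10) ).
    return 10.0 * math.log10(sum(10 ** (L / 10.0) for L in levels_db))

# An alarm set 15 dB above a 60 dB(A) background dominates the total:
alarm = 60.0 + 15.0
print(round(combined_level_db([60.0, alarm]), 1))  # → 75.1
```

    Two equal sources sum to +3 dB, which is why a 15 dB margin leaves the total essentially at the alarm level alone.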

  9. Filter quality of pleated filter cartridges.

    Science.gov (United States)

    Chen, Chun-Wan; Huang, Sheng-Hsiu; Chiang, Che-Ming; Hsiao, Ta-Chih; Chen, Chih-Chieh

    2008-04-01

    The performance of dust cartridge filters commonly used in dust masks and in room ventilation depends both on the collection efficiency of the filter material and the pressure drop across the filter. Currently, the optimization of filter design is based only on minimizing the pressure drop at a set velocity chosen by the manufacturer. The collection efficiency, an equally important factor, is rarely considered in the optimization process. In this work, a filter quality factor, which combines the collection efficiency and the pressure drop, is used as the optimization criterion for filter evaluation. Most respirator manufacturers pleat the filter to various extents to increase the filtration area in the limit space within the dust cartridge. Six sizes of filter holders were fabricated to hold just one pleat of filter, simulating six different pleat counts, ranging from 0.5 to 3.33 pleats cm(-1). The possible electrostatic charges on the filter were removed by dipping in isopropyl alcohol, and the air velocity is fixed at 100 cm s(-1). Liquid dicotylphthalate particles generated by a constant output atomizer were used as challenge aerosols to minimize particle loading effects. A scanning mobility particle sizer was used to measure the challenge aerosol number concentrations and size distributions upstream and downstream of the pleated filter. The pressure drop across the filter was monitored by using a calibrated pressure transducer. The results showed that the performance of pleated filters depend not only on the size of the particle but also on the pleat count of the pleated filter. Based on filter quality factor, the optimal pleat count (OPC) is always higher than that based on pressure drop by about 0.3-0.5 pleats cm(-1). For example, the OPC is 2.15 pleats cm(-1) from the standpoint of pressure drop, but for the highest filter quality factor, the pleated filter needed to have a pleat count of 2.65 pleats cm(-1) at particle diameter of 122 nm. 
From the aspect of
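As an illustration of the trade-off the abstract describes, the sketch below ranks pleat-count candidates by the standard aerosol filter quality factor, q_F = -ln(1 - E) / Δp, which rewards collection efficiency E and penalizes pressure drop. The efficiencies and pressure drops are invented stand-ins for illustration, not measured values from the study:

```python
import math

def quality_factor(efficiency, pressure_drop_pa):
    """Filter quality factor q_F = -ln(penetration) / pressure drop.

    Higher is better: it combines collection efficiency and
    pressure drop into a single figure of merit.
    """
    penetration = 1.0 - efficiency
    return -math.log(penetration) / pressure_drop_pa

# Hypothetical candidates: (pleats/cm, efficiency, pressure drop in Pa)
candidates = [
    (2.15, 0.950, 180.0),   # lowest pressure drop
    (2.65, 0.985, 205.0),   # more filtration area, higher efficiency
    (3.33, 0.990, 260.0),   # pleats too crowded, pressure drop dominates
]

best = max(candidates, key=lambda c: quality_factor(c[1], c[2]))
print(f"optimal pleat count by quality factor: {best[0]} pleats/cm")
```

With these illustrative numbers the quality-factor optimum (2.65 pleats/cm) differs from the minimum-pressure-drop choice (2.15 pleats/cm), mirroring the 0.3-0.5 pleats cm(-1) shift reported above.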

  10. Effects of Amplitude Compression on Relative Auditory Distance Perception

    Science.gov (United States)

    2013-10-01

    human sound localization (pp. 36-200). Cambridge, MA: The MIT Press. Carmichel, E. L., Harris, F. P., & Story, B. H. (2007). Effects of binaural ...auditory distance perception by reducing the level differences between sounds. The focus of the present study was to investigate the effect of amplitude...create stimuli. Two levels of amplitude compression were applied to the recordings through Adobe Audition sound editing software to simulate military

  11. Infant Auditory Processing and Event-related Brain Oscillations

    Science.gov (United States)

    Musacchia, Gabriella; Ortiz-Mantilla, Silvia; Realpe-Bonilla, Teresa; Roesler, Cynthia P.; Benasich, April A.

    2015-01-01

    Rapid auditory processing and acoustic change detection abilities play a critical role in allowing human infants to efficiently process the fine spectral and temporal changes that are characteristic of human language. These abilities lay the foundation for effective language acquisition, allowing infants to hone in on the sounds of their native language. Invasive procedures in animals and scalp-recorded potentials from human adults suggest that simultaneous, rhythmic activity (oscillations) between and within brain regions is fundamental to sensory development, determining the resolution with which incoming stimuli are parsed. At this time, little is known about oscillatory dynamics in human infant development. However, animal neurophysiology and adult EEG data provide the basis for a strong hypothesis that rapid auditory processing in infants is mediated by oscillatory synchrony in discrete frequency bands. In order to investigate this, 128-channel, high-density EEG responses of 4-month-old infants to frequency change in tone pairs, presented in two rate conditions (Rapid: 70 msec ISI and Control: 300 msec ISI), were examined. To determine the frequency band and magnitude of activity, auditory evoked response averages were first co-registered with age-appropriate brain templates. Next, the principal components of the response were identified and localized using a two-dipole model of brain activity. Single-trial analysis of oscillatory power showed a robust index of frequency change processing in bursts of Theta band (3-8 Hz) activity in both right and left auditory cortices, with left activation more prominent in the Rapid condition. These methods have produced data that are not only some of the first reported evoked oscillations analyses in infants, but are also, importantly, the product of a well-established method of recording and analyzing clean, meticulously collected, infant EEG and ERPs. In this article, we describe our method for infant EEG net

  12. An Improved Dissonance Measure Based on Auditory Memory

    DEFF Research Database (Denmark)

    Jensen, Kristoffer; Hjortkjær, Jens

    2012-01-01

    Dissonance is an important feature in music audio analysis. We present here a dissonance model that accounts for the temporal integration of dissonant events in auditory short-term memory. We compare the memory-based dissonance extracted from musical audio sequences to the responses of human listeners. In a number of tests, the memory model predicts listeners' responses better than traditional dissonance measures.
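To make the idea concrete, here is a minimal sketch of a memory-weighted dissonance measure. The pairwise roughness curve is the standard Plomp-Levelt model in Sethares' parameterization; the exponential short-term-memory trace (decay constant, blending rule) is an illustrative assumption, not the model actually proposed by Jensen and Hjortkjær:

```python
import math

def pair_dissonance(f1, f2):
    """Plomp-Levelt-style roughness of two pure-tone partials
    (Sethares' parameterization); peaks when the frequency
    separation is roughly a quarter of a critical bandwidth."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    s = 0.24 / (0.021 * fmin + 19.0)
    x = s * (fmax - fmin)
    return math.exp(-3.5 * x) - math.exp(-5.75 * x)

def memory_dissonance(events, decay=0.5):
    """Dissonance of a chord sequence with an exponentially decaying
    trace in auditory short-term memory: each event's instantaneous
    dissonance is blended with the decayed trace of earlier events."""
    trace, history = 0.0, []
    for chord in events:
        d = sum(pair_dissonance(a, b)
                for i, a in enumerate(chord) for b in chord[i + 1:])
        trace = decay * trace + (1 - decay) * d
        history.append(trace)
    return history

# a minor second followed by a perfect fifth (frequencies in Hz)
events = [(261.6, 277.2), (261.6, 392.0)]
print(memory_dissonance(events))
```

The minor second scores far higher roughness than the fifth, and the memory trace smooths the transition between the two events rather than tracking instantaneous dissonance alone.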

  13. Auditory Pattern Memory

    Science.gov (United States)

    1990-10-31

    Creelman, 1962; Getty, 1975; Divenyi and Danner, kin et al. (1982) jitter-detection paradigm. An advantage of 1977; Divenyi and Sachs, 1978; and Allen...discrimination of tonal se- Creelman, C. D. (1962). "Human discrimination of auditory duration." J. quences." J. Acoust. Soc. Am. 82, 1218-1226. Acoust...single marked intervals (Abel, 1972a,b; Creelman, 1962; Getty, 1975; Divenyi and Danner, 1977; Divenyi and Sachs, 1978; Espinoza-Varas and Jamieson

  14. Composing morphological filters

    NARCIS (Netherlands)

    H.J.A.M. Heijmans (Henk)

    1995-01-01

    A morphological filter is an operator on a complete lattice which is increasing and idempotent. Two well-known classes of morphological filters are openings and closings. Furthermore, an interesting class of filters, the alternating sequential filters, is obtained if one composes openings and closings.

  15. Composing morphological filters

    NARCIS (Netherlands)

    Heijmans, H.J.A.M.

    1995-01-01

    A morphological filter is an operator on a complete lattice which is increasing and idempotent. Two well-known classes of morphological filters are openings and closings. Furthermore, an interesting class of filters, the alternating sequential filters, is obtained if one composes openings and closings.
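The defining properties above (an opening is an erosion followed by a dilation, and is idempotent) can be checked on a small 1-D example. This is a from-scratch sketch with a flat structuring element and replicated borders; production code would normally use a library such as `scipy.ndimage`:

```python
import numpy as np

def erode(x, k):
    # flat structuring element of length k: minimum over a sliding window
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + k].min() for i in range(len(x))])

def dilate(x, k):
    # maximum over a sliding window of length k
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([xp[i:i + k].max() for i in range(len(x))])

def opening(x, k):
    """Erosion followed by dilation: removes bright runs shorter than k."""
    return dilate(erode(x, k), k)

def closing(x, k):
    """Dilation followed by erosion: fills dark gaps shorter than k."""
    return erode(dilate(x, k), k)

signal = np.array([0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0], dtype=int)
opened = opening(signal, 3)

# idempotence: applying the filter a second time changes nothing
assert np.array_equal(opening(opened, 3), opened)

# an alternating sequential filter composes an opening with a closing
asf = closing(opening(signal, 3), 3)
print(opened.tolist())
```

The isolated one-sample spike at index 2 is removed by the opening, while the runs of length three or more survive, and re-applying the opening leaves the result unchanged.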

  16. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  17. Perception of stochastically undersampled sound waveforms: A model of auditory deafferentation

    Directory of Open Access Journals (Sweden)

    Enrique A Lopez-Poveda

    2013-07-01

    Auditory deafferentation, or permanent loss of auditory nerve afferent terminals, occurs after noise overexposure and aging and may accompany many forms of hearing loss. It could cause significant auditory impairment but is undetected by regular clinical tests and so its effects on perception are poorly understood. Here, we hypothesize and test a neural mechanism by which deafferentation could degrade perception. The basic idea is that the spike train produced by each auditory afferent resembles a stochastically digitized version of the sound waveform and that the quality of the waveform representation in the whole nerve depends on the number of aggregated spike trains or auditory afferents. We reason that because spikes occur stochastically in time with a higher probability for high- than for low-intensity sounds, more afferents would be required for the nerve to faithfully encode high-frequency or low-intensity waveform features than low-frequency or high-intensity features. Deafferentation would thus degrade the encoding of these features. We further reason that due to the stochastic nature of nerve firing, the degradation would be greater in noise than in quiet. This hypothesis is tested using a vocoder. Sounds were filtered through ten adjacent frequency bands. For the signal in each band, multiple stochastically subsampled copies were obtained to roughly mimic different stochastic representations of that signal conveyed by different auditory afferents innervating a given cochlear region. These copies were then aggregated to obtain an acoustic stimulus. Tone detection and speech identification tests were performed by young, normal-hearing listeners using different numbers of stochastic samplers per frequency band in the vocoder. Results support the hypothesis that stochastic undersampling of the sound waveform, inspired by deafferentation, impairs speech perception in noise more than in quiet, consistent with auditory aging effects.
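The core intuition (each afferent is a stochastic sampler of the waveform, and aggregating more samplers yields a more faithful copy) can be sketched in a few lines. The intensity-dependent sampling probability and the reweighting used here are illustrative assumptions for a single band, not the paper's actual vocoder:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 16000
t = np.arange(0, 0.05, 1 / fs)
waveform = np.sin(2 * np.pi * 440 * t)   # stand-in for one band's signal

def stochastic_sampler(x, rng):
    """One 'afferent': keeps each sample with a probability that grows
    with instantaneous intensity, reweighted so the expected value
    equals the waveform (an illustrative assumption)."""
    p = np.clip(np.abs(x), 0.05, 1.0)
    keep = rng.random(x.size) < p
    return np.where(keep, x / p, 0.0)

def aggregate(x, n_afferents, rng):
    """Average of n stochastic spike trains: more afferents give a
    more faithful representation of the original waveform."""
    acc = sum(stochastic_sampler(x, rng) for _ in range(n_afferents))
    return acc / n_afferents

def fidelity(x, y):
    # correlation between original and reconstructed waveform
    return np.corrcoef(x, y)[0, 1]

few = aggregate(waveform, 2, rng)     # heavily 'deafferented' nerve
many = aggregate(waveform, 50, rng)   # intact nerve
print(fidelity(waveform, few), fidelity(waveform, many))
```

With 50 samplers the aggregate tracks the waveform closely; with 2 samplers the reconstruction is visibly noisier, mimicking how deafferentation would degrade the whole-nerve representation.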

  18. Development of auditory-vocal perceptual skills in songbirds.

    Directory of Open Access Journals (Sweden)

    Vanessa C Miller-Sims

    Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  19. Development of auditory-vocal perceptual skills in songbirds.

    Science.gov (United States)

    Miller-Sims, Vanessa C; Bottjer, Sarah W

    2012-01-01

    Songbirds are one of the few groups of animals that learn the sounds used for vocal communication during development. Like humans, songbirds memorize vocal sounds based on auditory experience with vocalizations of adult "tutors", and then use auditory feedback of self-produced vocalizations to gradually match their motor output to the memory of tutor sounds. In humans, investigations of early vocal learning have focused mainly on perceptual skills of infants, whereas studies of songbirds have focused on measures of vocal production. In order to fully exploit songbirds as a model for human speech, understand the neural basis of learned vocal behavior, and investigate links between vocal perception and production, studies of songbirds must examine both behavioral measures of perception and neural measures of discrimination during development. Here we used behavioral and electrophysiological assays of the ability of songbirds to distinguish vocal calls of varying frequencies at different stages of vocal learning. The results show that neural tuning in auditory cortex mirrors behavioral improvements in the ability to make perceptual distinctions of vocal calls as birds are engaged in vocal learning. Thus, separate measures of neural discrimination and behavioral perception yielded highly similar trends during the course of vocal development. The timing of this improvement in the ability to distinguish vocal sounds correlates with our previous work showing substantial refinement of axonal connectivity in cortico-basal ganglia pathways necessary for vocal learning.

  20. Upregulated expression of La ribonucleoprotein domain family member 6 and collagen type I gene following water-filtered broad-spectrum near-infrared irradiation in a 3-dimensional human epidermal tissue culture model as revealed by microarray analysis.

    Science.gov (United States)

    Tanaka, Yohei; Nakayama, Jun

    2017-02-27

    Water-filtered broad-spectrum near-infrared irradiation can induce various biological effects, as our previous clinical, histological, and biochemical investigations have shown. However, few studies have examined the changes thus induced in gene expression. The aim was to investigate the changes in gene expression in a 3-dimensional reconstructed epidermal tissue culture exposed to water-filtered broad-spectrum near-infrared irradiation. DNA microarray and quantitative real-time polymerase chain reaction (PCR) analyses were used to assess gene expression levels in a 3-dimensional reconstructed epidermal model composed of normal human epidermal cells exposed to water-filtered broad-spectrum near-infrared irradiation. The water filter transmitted wavelengths of 1000-1800 nm and excluded wavelengths of 1400-1500 nm, and cells were exposed to 5 or 10 rounds of near-infrared irradiation at 10 J/cm(2). A DNA microarray with over 50 000 different probes showed 18 genes that were upregulated or downregulated by at least twofold after irradiation. Quantitative real-time PCR revealed that, relative to control cells, the gene encoding La ribonucleoprotein domain family member 6 (LARP6), which regulates collagen expression, was significantly and dose-dependently upregulated (P < 0.05) by water-filtered broad-spectrum near-infrared exposure. Transcripts of the collagen type I gene were also significantly upregulated compared with controls (P < 0.05). This study demonstrates the ability of water-filtered broad-spectrum near-infrared irradiation to stimulate the production of type I collagen. © 2017 The Australasian College of Dermatologists.

  1. Passive Power Filters

    CERN Document Server

    Künzi, R

    2015-01-01

    Power converters require passive low-pass filters which are capable of reducing voltage ripples effectively. In contrast to signal filters, the components of power filters must carry large currents or withstand large voltages, respectively. In this paper, three different suitable filter structures for d.c./d.c. power converters with inductive load are introduced. The formulas needed to calculate the filter components are derived step by step and practical examples are given. The behaviour of the three discussed filters is compared by means of the examples. Practical aspects for the realization of power filters are also discussed.
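The basic sizing calculation for such a filter can be sketched for the simplest case: a single ideal, undamped LC stage, whose ripple attenuation at frequency f is |Vout/Vin| = 1/|1 - (f/f0)^2| with f0 = 1/(2π√(LC)). The component values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def lc_lowpass(L, C, f):
    """Cutoff frequency and ripple attenuation of an ideal, unloaded,
    undamped single-stage LC low-pass filter at frequency f."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
    gain = 1.0 / abs(1.0 - (f / f0) ** 2)
    return f0, gain

# hypothetical values: L = 1 mH, C = 100 uF -> f0 is roughly 503 Hz
f0, gain = lc_lowpass(1e-3, 100e-6, 10e3)   # switching ripple at 10 kHz
print(f"cutoff {f0:.0f} Hz, 10 kHz ripple attenuated to "
      f"{20 * math.log10(gain):.1f} dB")
```

Well above the cutoff the attenuation rolls off at 40 dB per decade, which is why a ripple component two decades above f0 is suppressed by roughly 50 dB in this example; real power filters also need damping and must account for component current and voltage ratings, as the paper discusses.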

  2. The effects of speech motor preparation on auditory perception

    Science.gov (United States)

    Myers, John

    Perception and action are coupled via bidirectional relationships between sensory and motor systems. Motor systems influence sensory areas by imparting a feedforward influence on sensory processing termed "motor efference copy" (MEC). MEC is suggested to occur in humans because speech preparation and production modulate neural measures of auditory cortical activity. However, it is not known if MEC can affect auditory perception. We tested the hypothesis that during speech preparation auditory thresholds will increase relative to a control condition, and that the increase would be most evident for frequencies that match the upcoming vocal response. Participants performed trials in a speech condition that contained a visual cue indicating a vocal response to prepare (one of two frequencies), followed by a go signal to speak. To determine threshold shifts, voice-matched or -mismatched pure tones were presented at one of three time points between the cue and target. The control condition was the same except the visual cues did not specify a response and subjects did not speak. For each participant, we measured f0 thresholds in isolation from the task in order to establish baselines. Results indicated that auditory thresholds were highest during speech preparation, relative to baselines and a non-speech control condition, especially at suprathreshold levels. Thresholds for tones that matched the frequency of planned responses gradually increased over time, but sharply declined for the mismatched tones shortly before targets. Findings support the hypothesis that MEC influences auditory perception by modulating thresholds during speech preparation, with some specificity relative to the planned response. The threshold increase in tasks vs. baseline may reflect attentional demands of the tasks.

  3. From ear to body: the auditory-motor loop in spatial cognition

    Directory of Open Access Journals (Sweden)

    Isabelle eViaud-Delmon

    2014-09-01

    Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e. a sound that was triggered only when they walked over a precise location of the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which they had to memorise the localisation of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths showed how auditory information was coded to memorise the position of the target and suggested that space can be efficiently coded without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favour of the hypothesis that the brain has access to a modality-invariant representation of external space.

  4. Quantifying attentional modulation of auditory-evoked cortical responses from single-trial electroencephalography

    Directory of Open Access Journals (Sweden)

    Inyong eChoi

    2013-04-01

    Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting given to a stream was roughly 10 dB higher when it was attended compared to when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces.
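A template-matching classifier of the kind mentioned above is simple to sketch: correlate each single trial with per-condition template waveforms and assign the trial to the best-matching template. The synthetic templates and noise level below are stand-ins, not the study's actual EEG data or pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples = 200
time = np.linspace(0, 1, n_samples)

# hypothetical per-condition templates: the mean evoked response when
# attending stream A vs. stream B (here, synthetic waveforms)
templates = {
    "A": np.sin(2 * np.pi * 4 * time),
    "B": np.sin(2 * np.pi * 7 * time),
}

def classify(trial, templates):
    """Assign a single trial to the template with the highest
    Pearson correlation."""
    scores = {label: np.corrcoef(trial, tmpl)[0, 1]
              for label, tmpl in templates.items()}
    return max(scores, key=scores.get)

# simulate one noisy single trial from an 'attend stream A' condition
trial = templates["A"] + 0.8 * rng.standard_normal(n_samples)
print(classify(trial, templates))
```

Even with substantial trial-to-trial noise, the correlation with the matching template dominates, which is why this style of classifier can decode the attended stream above chance from single trials.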

  5. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    dynamics of human perception in auditory streaming. PMID:23516340

  6. Auditory evoked potentials and multiple sclerosis

    OpenAIRE

    Carla Gentile Matas; Sandro Luiz de Andrade Matas; Caroline Rondina Salzano de Oliveira; Isabela Crivellaro Gonçalves

    2010-01-01

    Multiple sclerosis (MS) is an inflammatory, demyelinating disease that can affect several areas of the central nervous system. Damage along the auditory pathway can alter its integrity significantly. Therefore, it is important to investigate the auditory pathway, from the brainstem to the cortex, in individuals with MS. OBJECTIVE: The aim of this study was to characterize auditory evoked potentials in adults with MS of the remittent-recurrent type. METHOD: The study comprised 25 individuals w...

  7. Auditory Training and Its Effects upon the Auditory Discrimination and Reading Readiness of Kindergarten Children.

    Science.gov (United States)

    Cullen, Minga Mustard

    The purpose of this investigation was to evaluate the effects of a systematic auditory training program on the auditory discrimination ability and reading readiness of 55 white, middle/upper middle class kindergarten students. Following pretesting with the "Wepman Auditory Discrimination Test," "The Clymer-Barrett Prereading Battery," and the…

  8. Effects of Methylphenidate (Ritalin) on Auditory Performance in Children with Attention and Auditory Processing Disorders.

    Science.gov (United States)

    Tillery, Kim L.; Katz, Jack; Keller, Warren D.

    2000-01-01

    A double-blind, placebo-controlled study examined effects of methylphenidate (Ritalin) on auditory processing in 32 children with both attention deficit hyperactivity disorder and central auditory processing (CAP) disorder. Analyses revealed that Ritalin did not have a significant effect on any of the central auditory processing measures, although…

  9. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when visual displays presented dynamics clearly dissimilar to the music. These results appear to suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  10. Central auditory function of deafness genes.

    Science.gov (United States)

    Willaredt, Marc A; Ebbers, Lena; Nothwang, Hans Gerd

    2014-06-01

    The highly variable benefit of hearing devices is a serious challenge in auditory rehabilitation. Various factors contribute to this phenomenon such as the diversity in ear defects, the different extent of auditory nerve hypoplasia, the age of intervention, and cognitive abilities. Recent analyses indicate that, in addition, central auditory functions of deafness genes have to be considered in this context. Since reduced neuronal activity acts as the common denominator in deafness, it is widely assumed that peripheral deafness influences development and function of the central auditory system in a stereotypical manner. However, functional characterization of transgenic mice with mutated deafness genes demonstrated gene-specific abnormalities in the central auditory system as well. A frequent function of deafness genes in the central auditory system is supported by a genome-wide expression study that revealed significant enrichment of these genes in the transcriptome of the auditory brainstem compared to the entire brain. Here, we will summarize current knowledge of the diverse central auditory functions of deafness genes. We furthermore propose the intimately interwoven gene regulatory networks governing development of the otic placode and the hindbrain as a mechanistic explanation for the widespread expression of these genes beyond the cochlea. We conclude that better knowledge of central auditory dysfunction caused by genetic alterations in deafness genes is required. In combination with improved genetic diagnostics becoming currently available through novel sequencing technologies, this information will likely contribute to better outcome prediction of hearing devices.

  11. A corticostriatal neural system enhances auditory perception through temporal context processing.

    Science.gov (United States)

    Geiser, Eveline; Notter, Michael; Gabrieli, John D E

    2012-05-02

    The temporal context of an acoustic signal can greatly influence its perception. The present study investigated the neural correlates underlying perceptual facilitation by regular temporal contexts in humans. Participants listened to temporally regular (periodic) or temporally irregular (nonperiodic) sequences of tones while performing an intensity discrimination task. Participants performed significantly better on intensity discrimination during periodic than nonperiodic tone sequences. There was greater activation in the putamen for periodic than nonperiodic sequences. Conversely, there was greater activation in bilateral primary and secondary auditory cortices (planum polare and planum temporale) for nonperiodic than periodic sequences. Across individuals, greater putamen activation correlated with lesser auditory cortical activation in both right and left hemispheres. These findings suggest that temporal regularity is detected in the putamen, and that such detection facilitates temporal-lobe cortical processing associated with superior auditory perception. Thus, this study reveals a corticostriatal system associated with contextual facilitation for auditory perception through temporal regularity processing.

  12. Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf.

    Science.gov (United States)

    Almeida, Jorge; He, Dongjun; Chen, Quanjing; Mahon, Bradford Z; Zhang, Fan; Gonçalves, Óscar F; Fang, Fang; Bi, Yanchao

    2015-11-01

    Sensory cortices of individuals who are congenitally deprived of a sense can exhibit considerable plasticity and be recruited to process information from the senses that remain intact. Here, we explored whether the auditory cortex of congenitally deaf individuals represents the visual field location of a stimulus, a dimension that is represented in early visual areas. We used functional MRI to measure neural activity in auditory and visual cortices of congenitally deaf and hearing humans while they observed stimuli typically used for mapping visual field preferences in visual cortex. We found that the location of a visual stimulus can be successfully decoded from the patterns of neural activity in auditory cortex of congenitally deaf but not hearing individuals. This is particularly true for locations within the horizontal plane and within peripheral vision. These data show that the representations stored within neuroplastically changed auditory cortex can align with dimensions that are typically represented in visual cortex.

  13. Experience-dependent learning of auditory temporal resolution: evidence from Carnatic-trained musicians.

    Science.gov (United States)

    Mishra, Srikanta K; Panda, Manasa R

    2014-01-22

    Musical training and experience greatly enhance the cortical and subcortical processing of sounds, which may translate to superior auditory perceptual acuity. Auditory temporal resolution is a fundamental perceptual aspect that is critical for speech understanding in noise in listeners with normal hearing, auditory disorders, cochlear implants, and language disorders, yet very few studies have focused on music-induced learning of temporal resolution. This report demonstrates that Carnatic musical training and experience have a significant impact on temporal resolution assayed by gap detection thresholds. This experience-dependent learning in Carnatic-trained musicians exhibits the universal aspects of human perception and plasticity. The present work adds the perceptual component to a growing body of neurophysiological and imaging studies that suggest plasticity of the peripheral auditory system at the level of the brainstem. The present work may be intriguing to researchers and clinicians alike interested in devising cross-cultural training regimens to alleviate listening-in-noise difficulties.
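Gap detection thresholds of the kind used above are typically estimated with an adaptive staircase. The sketch below simulates a 2-down/1-up track, which converges on the gap duration detected 70.7% of the time (Levitt's transformed up-down rule); the listener's psychometric function, step size, and reversal count are illustrative assumptions, not the study's procedure:

```python
import math
import random

random.seed(0)

def p_detect(gap_ms, threshold_ms=3.0, slope_ms=1.5):
    # logistic psychometric function of a simulated listener
    return 1.0 / (1.0 + math.exp(-(gap_ms - threshold_ms) / slope_ms))

def two_down_one_up(start_ms=20.0, step=1.25, n_reversals=12):
    """2-down/1-up staircase: shorten the gap after two consecutive
    detections, lengthen it after one miss; the track oscillates
    around the 70.7%-correct point."""
    gap, correct_run, direction = start_ms, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if random.random() < p_detect(gap):
            correct_run += 1
            if correct_run == 2:            # two correct -> harder
                correct_run = 0
                if direction == +1:
                    reversals.append(gap)   # turning point
                direction = -1
                gap /= step
        else:                               # one miss -> easier
            correct_run = 0
            if direction == -1:
                reversals.append(gap)
            direction = +1
            gap *= step
    # threshold estimate: mean of the last reversal points
    return sum(reversals[-8:]) / 8

print(f"estimated gap detection threshold: {two_down_one_up():.1f} ms")
```

For this simulated listener the estimate settles near the 70.7% point of the psychometric function (about 4.3 ms), a few tenths above the 50% threshold, which is the expected behavior of the 2-down/1-up rule.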

  14. Optimizing the imaging of the monkey auditory cortex: sparse vs. continuous fMRI.

    Science.gov (United States)

    Petkov, Christopher I; Kayser, Christoph; Augath, Mark; Logothetis, Nikos K

    2009-10-01

    The noninvasive imaging of the monkey auditory system with functional magnetic resonance imaging (fMRI) can bridge the gap between electrophysiological studies in monkeys and imaging studies in humans. Some of the recent imaging of monkey auditory cortical and subcortical structures relies on a technique of "sparse imaging," which was developed in human studies to sidestep the negative influence of scanner noise by adding periods of silence in between volume acquisition. Among the various aspects that have gone into the ongoing optimization of fMRI of the monkey auditory cortex, replacing the more common continuous-imaging paradigm with sparse imaging seemed to us to make the most obvious difference in the amount of activity that we could reliably obtain from awake or anesthetized animals. Here, we directly compare the sparse- and continuous-imaging paradigms in anesthetized animals. We document a strikingly greater auditory response with sparse imaging, both quantitatively and qualitatively, which includes a more expansive and robust tonotopic organization. There were instances where continuous imaging could better reveal organizational properties that sparse imaging missed, such as aspects of the hierarchical organization of auditory cortex. We consider the choice of imaging paradigm as a key component in optimizing the fMRI of the monkey auditory cortex.

  15. Modulation of auditory percepts by transcutaneous electrical stimulation.

    Science.gov (United States)

    Ueberfuhr, Margarete Anna; Braun, Amalia; Wiegrebe, Lutz; Grothe, Benedikt; Drexl, Markus

    2017-07-01

    Transcutaneous electrical stimulation with electrodes placed on the mastoid processes represents a specific way to elicit vestibular reflexes in humans without active or passive subject movements, for which the term galvanic vestibular stimulation was coined. It has been suggested that galvanic vestibular stimulation mainly affects the vestibular periphery, but whether vestibular hair cells, vestibular afferents, or a combination of both are excited is still a matter of debate. Galvanic vestibular stimulation has been in use since the late 18th century, but despite the long-known and well-documented effects on the vestibular system, reports of the effect of electrical stimulation on the adjacent cochlea or the ascending auditory pathway are surprisingly sparse. The present study examines the effect of transcutaneous electrical stimulation of the human auditory periphery employing evoked and spontaneous otoacoustic emissions and several psychoacoustic measures. In particular, level growth functions of distortion product otoacoustic emissions were recorded during electrical stimulation with alternating currents (2 Hz, 1-4 mA in 1-mA steps). In addition, the level and frequency of spontaneous otoacoustic emissions were followed before, during, and after electrical stimulation (2 Hz, 1-4 mA). To explore the effect of electrical stimulation on the retrocochlear level (i.e. on the ascending auditory pathway beyond the cochlea), psychoacoustic experiments were carried out. Specifically, participants indicated whether electrical stimulation (4 Hz, 2 and 3 mA) induced amplitude modulations of the perception of a pure tone, and of auditory illusions after presentation of either an intense, low-frequency sound (Bounce tinnitus) or a faint band-stop noise (Zwicker tone). These three psychoacoustic measures revealed significant perceived amplitude modulations during electrical stimulation in the majority of participants. However, no significant changes of evoked and spontaneous otoacoustic emissions were observed.

  16. Method of securing filter elements

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Erik P.; Haslam, Jeffery L.; Mitchell, Mark A.

    2016-10-04

    A filter securing system including a filter unit body housing; at least one tubular filter element positioned in the filter unit body housing, the tubular filter element having a closed top and an open bottom; a dimple in either the filter unit body housing or the top of the tubular filter element; and a socket in either the filter unit body housing or the top of the tubular filter element that receives the dimple in either the filter unit body housing or the top of the tubular filter element to secure the tubular filter element to the filter unit body housing.

  17. Influence of different envelope maskers on signal recognition and neuronal representation in the auditory system of a grasshopper.

    Directory of Open Access Journals (Sweden)

    Daniela Neuhofer

    Full Text Available BACKGROUND: Animals that communicate by sound face the problem that the signals arriving at the receiver often are degraded and masked by noise. Frequency filters in the receiver's auditory system may improve the signal-to-noise ratio (SNR) by excluding parts of the spectrum which are not occupied by the species-specific signals. This solution, however, is hardly amenable to species that produce broad band signals or have ears with broad frequency tuning. In mammals auditory filters exist that work in the temporal domain of amplitude modulations (AM). Do insects also use this type of filtering? PRINCIPAL FINDINGS: Combining behavioural and neurophysiological experiments we investigated whether AM filters may improve the recognition of masked communication signals in grasshoppers. The AM pattern of the sound, its envelope, is crucial for signal recognition in these animals. We degraded the species-specific song by adding random fluctuations to its envelope. Six noise bands were used that differed in their overlap with the spectral content of the song envelope. If AM filters contribute to reduced masking, signal recognition should depend on the degree of overlap between the song envelope spectrum and the noise spectra. Contrary to this prediction, the resistance against signal degradation was the same for five of six masker bands. Most remarkably, the band with the strongest frequency overlap to the natural song envelope (0-100 Hz) impaired acceptance of degraded signals the least. To assess the noise filter capacities of single auditory neurons, the changes of spike trains as a function of the masking level were assessed. Increasing levels of signal degradation in different frequency bands led to similar changes in the spike trains in most neurons. CONCLUSIONS: There is no indication that auditory neurons of grasshoppers are specialized to improve the SNR with respect to the pattern of amplitude modulations.

  18. Spatiotemporal properties of the BOLD response in the songbirds' auditory circuit during a variety of listening tasks.

    Science.gov (United States)

    Van Meir, Vincent; Boumans, Tiny; De Groof, Geert; Van Audekerke, Johan; Smolders, Alain; Scheunders, Paul; Sijbers, Jan; Verhoye, Marleen; Balthazart, Jacques; Van der Linden, Annemie

    2005-05-01

    Auditory fMRI in humans has recently received increasing attention from cognitive neuroscientists as a tool to understand mental processing of learned acoustic sequences and analyzing speech recognition and development of musical skills. The present study introduces this tool in a well-documented animal model for vocal learning, the songbird, and provides fundamental insight into the main technical issues associated with auditory fMRI in these songbirds. Stimulation protocols with various listening tasks lead to appropriate activation of successive relays in the songbirds' auditory pathway. The elicited BOLD response is also region and stimulus specific, and its temporal aspects provide accurate measures of the changes in brain physiology induced by the acoustic stimuli. Extensive repetition of an identical stimulus does not lead to habituation of the response in the primary or secondary telencephalic auditory regions of anesthetized subjects. The BOLD signal intensity changes during a stimulation and subsequent rest period have a very specific time course which shows a remarkable resemblance to auditory evoked BOLD responses commonly observed in human subjects. This observation indicates that auditory fMRI in the songbird may establish a link between auditory related neuro-imaging studies done in humans and the large body of neuro-ethological research on song learning and neuro-plasticity performed in songbirds.

  19. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.
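    The selection principle stated in this abstract, choosing feature dimensions so that the Euclidean geometry of the feature domain matches that of the perceptual domain, could be sketched as a greedy, label-free search. The sketch below is purely illustrative: the function names, the greedy strategy, and the use of Pearson correlation between pairwise distances as the similarity score are our assumptions, not the authors' exact criterion.

```python
import numpy as np

def geometry_similarity(feats, ref_dists):
    """Correlation between pairwise Euclidean distances in a candidate
    feature subspace and a reference (e.g. perceptual) distance matrix."""
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    iu = np.triu_indices(len(feats), k=1)  # upper triangle: each pair counted once
    return np.corrcoef(d[iu], ref_dists[iu])[0, 1]

def select_features(X, ref_dists, k):
    """Greedy, unsupervised selection: repeatedly add the dimension that
    most improves the match between feature-space geometry and the
    reference geometry. No class labels are required."""
    chosen, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining,
                   key=lambda j: geometry_similarity(X[:, chosen + [j]], ref_dists))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because the score depends only on distances between unlabeled samples, such a scheme needs no labeled training data, which is the property the abstract highlights.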

  20. Nanoparticle optical notch filters

    Science.gov (United States)

    Kasinadhuni, Pradeep Kumar

    Developing novel light blocking products involves the design of a nanoparticle optical notch filter, working on the principle of localized surface plasmon resonance (LSPR). These light blocking products can be used in many applications. One such application is to naturally reduce migraine headaches and light sensitivity. Melanopsin ganglion cells, present in the retina of the human eye, connect to the suprachiasmatic nucleus (SCN, the body's clock) in the brain, where they participate in the entrainment of the circadian rhythms. As the melanopsin ganglion cells are involved in triggering migraine headaches in photophobic patients, it is necessary to block the part of the visible spectrum that activates these cells. It is observed from the action potential spectrum of the ganglion cells that they absorb light ranging from 450-500 nm (the blue-green part of the visible spectrum) with a λmax (peak sensitivity) of around 480 nm. Currently prescribed for migraine patients is the FL-41 coating, which blocks a broad range of wavelengths, including wavelengths associated with melanopsin absorption. The nanoparticle optical notch filter is designed to block light only at 480 nm, hence offering an effective prescription for the treatment of migraine headaches.

  1. Functionally Specific Oscillatory Activity Correlates between Visual and Auditory Cortex in the Blind

    Science.gov (United States)

    Schepers, Inga M.; Hipp, Joerg F.; Schneider, Till R.; Roder, Brigitte; Engel, Andreas K.

    2012-01-01

    Many studies have shown that the visual cortex of blind humans is activated in non-visual tasks. However, the electrophysiological signals underlying this cross-modal plasticity are largely unknown. Here, we characterize the neuronal population activity in the visual and auditory cortex of congenitally blind humans and sighted controls in a…

  3. Auditory hallucinations in nonverbal quadriplegics.

    Science.gov (United States)

    Hamilton, J

    1985-11-01

    When a system for communicating with nonverbal, quadriplegic, institutionalized residents was developed, it was discovered that many were experiencing auditory hallucinations. Nine cases are presented in this study. The "voices" described have many similar characteristics, the primary one being that they give authoritarian commands that tell the residents how to behave and to which the residents feel compelled to respond. Both the relationship of this phenomenon to the theoretical work of Julian Jaynes and its effect on the lives of the residents are discussed.

  4. Autosomal recessive hereditary auditory neuropathy

    Institute of Scientific and Technical Information of China (English)

    王秋菊; 顾瑞; 曹菊阳

    2003-01-01

    Objectives: Auditory neuropathy (AN) is a sensorineural hearing disorder characterized by absent or abnormal auditory brainstem responses (ABRs) and normal cochlear outer hair cell function as measured by otoacoustic emissions (OAEs). Many risk factors are thought to be involved in its etiology and pathophysiology. Three Chinese pedigrees with familial AN are presented herein to demonstrate involvement of genetic factors in AN etiology. Methods: Probands of the above-mentioned pedigrees, who had been diagnosed with AN, were evaluated and followed up in the Department of Otolaryngology Head and Neck Surgery, China PLA General Hospital. Their family members were studied and the pedigree diagrams were established. History of illness, physical examination, pure-tone audiometry, acoustic reflex, ABRs, and transient-evoked and distortion-product otoacoustic emissions (TEOAEs and DPOAEs) were obtained from members of these families. DPOAE changes under the influence of contralateral sound stimuli were observed by presenting continuous white noise to the non-recording ear to examine the function of the auditory efferent system. Some subjects received a vestibular caloric test, computed tomography (CT) scan of the temporal bone, and electrocardiography (ECG) to exclude other possible neuropathy disorders. Results: In most affected subjects, hearing loss of various degrees and speech discrimination difficulties started at 10 to 16 years of age. Their audiological evaluation showed absence of acoustic reflex and ABRs. As expected in AN, these subjects exhibited near-normal cochlear outer hair cell function as shown in TEOAE and DPOAE recordings. Pure-tone audiometry revealed hearing loss ranging from mild to severe in these patients. Autosomal recessive inheritance patterns were observed in the three families. In Pedigrees I and II, two affected brothers were found respectively, while in Pedigree III, two sisters were affected. All the patients were otherwise normal without

  5. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  6. Dissection of the Auditory Bulla in Postnatal Mice: Isolation of the Middle Ear Bones and Histological Analysis.

    Science.gov (United States)

    Sakamoto, Ayako; Kuroda, Yukiko; Kanzaki, Sho; Matsuo, Koichi

    2017-01-04

    In most mammals, auditory ossicles in the middle ear, including the malleus, incus and stapes, are the smallest bones. In mice, a bony structure called the auditory bulla houses the ossicles, whereas the auditory capsule encloses the inner ear, namely the cochlea and semicircular canals. Murine ossicles are essential for hearing and thus of great interest to researchers in the field of otolaryngology, but their metabolism, development, and evolution are highly relevant to other fields. Altered bone metabolism can affect hearing function in adult mice, and various gene-deficient mice show changes in morphogenesis of auditory ossicles in utero. Although murine auditory ossicles are tiny, their manipulation is feasible if one understands their anatomical orientation and 3D structure. Here, we describe how to dissect the auditory bulla and capsule of postnatal mice and then isolate individual ossicles by removing part of the bulla. We also discuss how to embed the bulla and capsule in different orientations to generate paraffin or frozen sections suitable for preparation of longitudinal, horizontal, or frontal sections of the malleus. Finally, we enumerate anatomical differences between mouse and human auditory ossicles. These methods would be useful in analyzing pathological, developmental and evolutionary aspects of auditory ossicles and the middle ear in mice.

  7. Summary of Martian Dust Filtering Challenges and Current Filter Development

    Science.gov (United States)

    O'Hara, William J., IV

    2017-01-01

    Traditional air particulate filtering in manned spaceflight (Apollo, Shuttle, ISS, etc.) has used cleanable or replaceable catch filters such as screens and High-Efficiency Particulate Arrestance (HEPA) filters. However, the human mission to Mars architecture will require a new approach. It is Martian dust that is the particulate of concern but the need also applies to particulates generated by crew. The Mars Exploration Program Analysis Group (MEPAG) highlighted this concern in its Mars Science, Goals, Objectives, Investigations and Priorities document [7], by saying specifically that one high priority investigation will be to "Test ISRU atmospheric processing systems to measure resilience with respect to dust and other environmental challenge performance parameters that are critical to the design of a full-scale system." By stating this as high priority the MEPAG is acknowledging that developing and adequately verifying this capability is critical to success of a human mission to Mars. This architecture will require filtering capabilities that are highly reliable, will not restrict the flow path with clogging, and require little to no maintenance. This paper will summarize why this is the case, the general requirements for developing the technology, and the status of the progress made in this area.

  8. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research ( Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M[subscript age] = 53.19 years)…

  9. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  10. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  12. Passive auditory stimulation improves vision in hemianopia.

    Directory of Open Access Journals (Sweden)

    Jörg Lewald

    Full Text Available UNLABELLED: Techniques employed in rehabilitation of visual field disorders such as hemianopia are usually based on either visual or audio-visual stimulation and patients have to perform a training task. Here we present results from a completely different, novel approach that was based on passive unimodal auditory stimulation. Ten patients with either left- or right-sided pure hemianopia (without neglect) received one hour of unilateral passive auditory stimulation on either their anopic or their intact side by application of repetitive trains of sound pulses emitted simultaneously via two loudspeakers. Immediately before and after passive auditory stimulation as well as after a period of recovery, patients completed a simple visual task requiring detection of light flashes presented along the horizontal plane in total darkness. The results showed that one-time passive auditory stimulation on the side of the blind, but not of the intact, hemifield of patients with hemianopia induced an improvement in visual detections by almost 100% within 30 min after passive auditory stimulation. This enhancement in performance was reversible and was reduced to baseline 1.5 h later. A non-significant trend of a shift of the visual field border toward the blind hemifield was obtained after passive auditory stimulation. These results are compatible with the view that passive auditory stimulation elicited some activation of the residual visual pathways, which are known to be multisensory and may also be sensitive to unimodal auditory stimuli as were used here. TRIAL REGISTRATION: DRKS00003577.

  13. Bilateral duplication of the internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu [Seoul National University College of Medicine, Department of Radiology, Seoul National University Bundang Hospital, Seongnam-si (Korea); Koo, Ja-Won [Seoul National University College of Medicine, Department of Otolaryngology, Seoul National University Bundang Hospital, Seongnam-si (Korea)

    2007-10-15

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  15. Generalized Hampel Filters

    Science.gov (United States)

    Pearson, Ronald K.; Neuvo, Yrjö; Astola, Jaakko; Gabbouj, Moncef

    2016-12-01

    The standard median filter based on a symmetric moving window has only one tuning parameter: the window width. Despite this limitation, this filter has proven extremely useful and has motivated a number of extensions: weighted median filters, recursive median filters, and various cascade structures. The Hampel filter is a member of the class of decision filters that replaces the central value in the data window with the median if it lies far enough from the median to be deemed an outlier. This filter depends on both the window width and an additional tuning parameter t, reducing to the median filter when t=0, so it may be regarded as another median filter extension. This paper adopts this view, defining and exploring the class of generalized Hampel filters obtained by applying the median filter extensions listed above: weighted Hampel filters, recursive Hampel filters, and their cascades. An important concept introduced here is that of an implosion sequence, a signal for which generalized Hampel filter performance is independent of the threshold parameter t. These sequences are important because the added flexibility of the generalized Hampel filters offers no practical advantage for implosion sequences. Partial characterization results are presented for these sequences, as are useful relationships between root sequences for generalized Hampel filters and their median-based counterparts. To illustrate the performance of this filter class, two examples are considered: one is simulation-based, providing a basis for quantitative evaluation of signal recovery performance as a function of t, while the other is a sequence of monthly Italian industrial production index values that exhibits glaring outliers.
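    The decision rule described here, keep the central sample unless it lies too far from the window median, in which case substitute the median, can be sketched in a few lines. This is a minimal illustration only: the function name, the MAD-based scale estimate, and the truncated windows at the sequence edges are our assumptions, not details taken from the paper.

```python
import numpy as np

def hampel_filter(x, half_width=3, t=3.0):
    """Replace x[i] with its window median when x[i] deviates from that
    median by more than t robust standard deviations (MAD-based).
    With t=0, any sample differing from the window median is replaced,
    recovering the standard median filter."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    k = 1.4826  # makes the MAD consistent with the std. dev. for Gaussian data
    n = len(x)
    for i in range(n):
        lo, hi = max(0, i - half_width), min(n, i + half_width + 1)
        window = x[lo:hi]          # window is truncated at the edges
        med = np.median(window)
        mad = k * np.median(np.abs(window - med))
        if abs(x[i] - med) > t * mad:
            y[i] = med             # declared an outlier: substitute the median
    return y
```

For example, `hampel_filter([1, 1, 1, 10, 1, 1, 1])` replaces the spike at index 3 with the window median 1.0 and leaves the remaining samples untouched, while a pure median filter would also modify inlier samples near the edges.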

  16. Speech perception as complex auditory categorization

    Science.gov (United States)

    Holt, Lori L.

    2002-05-01

    Despite a long and rich history of categorization research in cognitive psychology, very little work has addressed the issue of complex auditory category formation. This is especially unfortunate because the general underlying cognitive and perceptual mechanisms that guide auditory category formation are of great importance to understanding speech perception. I will discuss a new methodological approach to examining complex auditory category formation that specifically addresses issues relevant to speech perception. This approach utilizes novel nonspeech sound stimuli to gain full experimental control over listeners' history of experience. As such, the course of learning is readily measurable. Results from this methodology indicate that the structure and formation of auditory categories are a function of the statistical input distributions of sound that listeners hear, aspects of the operating characteristics of the auditory system, and characteristics of the perceptual categorization system. These results have important implications for phonetic acquisition and speech perception.

  17. Ceramic water filters impregnated with silver nanoparticles as a point-of-use water-treatment intervention for HIV-positive individuals in Limpopo Province, South Africa: a pilot study of technological performance and human health benefits.

    Science.gov (United States)

    Abebe, Lydia Shawel; Smith, James A; Narkiewicz, Sophia; Oyanedel-Craver, Vinka; Conaway, Mark; Singo, Alukhethi; Amidou, Samie; Mojapelo, Paul; Brant, Julia; Dillingham, Rebecca

    2014-06-01

    Waterborne pathogens present a significant threat to people living with the human immunodeficiency virus (PLWH). This study presents a randomized, controlled trial that evaluates whether a household-level ceramic water filter (CWF) intervention can improve drinking water quality and decrease days of diarrhea in PLWH in rural South Africa. Seventy-four participants were randomized in an intervention group with CWFs and a control group without filters. Participants in the CWF arm received CWFs impregnated with silver nanoparticles and associated safe-storage containers. Water and stool samples were collected at baseline and 12 months. Diarrhea incidence was self-reported weekly for 12 months. The average diarrhea rate in the control group was 0.064 days/week compared to 0.015 days/week in the intervention group. The results suggest that CWFs can improve drinking water quality and decrease days of diarrhea for PLWH in rural South Africa.

  18. THE EFFECTS OF SALICYLATE ON AUDITORY EVOKED POTENTIAL AMPLITUDE FROM THE AUDITORY CORTEX AND AUDITORY BRAINSTEM

    Institute of Scientific and Technical Information of China (English)

    Brian Sawka; SUN Wei

    2014-01-01

    Tinnitus has often been studied using salicylate in animal models, as it is capable of inducing temporary hearing loss and tinnitus. Studies have recently observed enhancement of auditory evoked responses of the auditory cortex (AC) post salicylate treatment, which has also been shown to be related to tinnitus-like behavior in rats. The aim of this study was to observe whether enhancements of the AC post salicylate treatment are also present at structures in the brainstem. Four male Sprague Dawley rats with AC-implanted electrodes were tested for both AC and auditory brainstem response (ABR) recordings pre and post 250 mg/kg intraperitoneal injections of salicylate. The responses were recorded as the peak-to-trough amplitudes of P1-N1 (AC), ABR wave V, and ABR wave II. AC responses resulted in statistically significant enhancement of amplitude at 2 hours post salicylate with 90 dB stimuli tone bursts of 4, 8, 12, and 20 kHz. Wave V of ABR responses at 90 dB resulted in a statistically significant reduction of amplitude 2 hours post salicylate and a mean decrease of amplitude of 31% for 16 kHz. Wave II amplitudes at 2 hours post treatment were significantly reduced for 4, 12, and 20 kHz stimuli at 90 dB SPL. Our results suggest that the enhancement changes of the AC related to salicylate-induced tinnitus are generated superior to the level of the inferior colliculus and may originate in the AC.

  19. MST Filterability Tests

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Burket, P. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Duignan, M. R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-03-12

    The Savannah River Site (SRS) is currently treating radioactive liquid waste with the Actinide Removal Process (ARP) and the Modular Caustic Side Solvent Extraction Unit (MCU). The low filter flux through the ARP has limited the rate at which radioactive liquid waste can be treated. Recent filter flux has averaged approximately 5 gallons per minute (gpm). Salt Batch 6 has had a lower processing rate and required frequent filter cleaning. Savannah River Remediation (SRR) has a desire to understand the causes of the low filter flux and to increase ARP/MCU throughput. In addition, at the time the testing started, SRR was assessing the impact of replacing the 0.1 micron filter with a 0.5 micron filter. This report describes testing of MST filterability to investigate the impact of filter pore size and MST particle size on filter flux and testing of filter enhancers to attempt to increase filter flux. The authors constructed a laboratory-scale crossflow filter apparatus with two crossflow filters operating in parallel. One filter was a 0.1 micron Mott sintered SS filter and the other was a 0.5 micron Mott sintered SS filter. The authors also constructed a dead-end filtration apparatus to conduct screening tests with potential filter aids and body feeds, referred to as filter enhancers. The original baseline for ARP was 5.6 M sodium salt solution with a free hydroxide concentration of approximately 1.7 M. ARP has been operating with a sodium concentration of approximately 6.4 M and a free hydroxide concentration of approximately 2.5 M. SRNL conducted tests varying the concentration of sodium and free hydroxide to determine whether those changes had a significant effect on filter flux. The feed slurries for the MST filterability tests were composed of simple salts (NaOH, NaNO2, and NaNO3) and MST (0.2-4.8 g/L). The feed slurry for the filter enhancer tests contained simulated salt batch 6 supernate, MST, and filter enhancers.

  20. Acquired auditory-visual synesthesia: A window to early cross-modal sensory interactions

    Directory of Open Access Journals (Sweden)

    Pegah Afra

    2009-01-01

    Full Text Available Pegah Afra, Michael Funke, Fumisuke Matsuo, Department of Neurology, University of Utah, Salt Lake City, UT, USA. Abstract: Synesthesia is experienced when sensory stimulation of one sensory modality elicits an involuntary sensation in another sensory modality. Auditory-visual synesthesia occurs when auditory stimuli elicit visual sensations. It has developmental, induced and acquired varieties. The acquired variety has been reported in association with deafferentation of the visual system as well as temporal lobe pathology with intact visual pathways. The induced variety has been reported in experimental and post-surgical blindfolding, as well as intake of hallucinogens or psychedelics. Although in humans there is no known anatomical pathway connecting auditory areas to primary and/or early visual association areas, there is imaging and neurophysiologic evidence for the presence of early cross-modal interactions between the auditory and visual sensory pathways. Synesthesia may be a window of opportunity to study these cross-modal interactions. Here we review the existing literature on the acquired and induced auditory-visual synesthesias and discuss the possible neural mechanisms. Keywords: synesthesia, auditory-visual, cross-modal

  1. Mechanisms of enhancing visual-speech recognition by prior auditory information.

    Science.gov (United States)

    Blank, Helen; von Kriegstein, Katharina

    2013-01-15

    Speech recognition from visual-only faces is difficult, but can be improved by prior information about what is said. Here, we investigated how the human brain uses prior information from auditory speech to improve visual-speech recognition. In a functional magnetic resonance imaging study, participants performed a visual-speech recognition task, indicating whether the word spoken in visual-only videos matched the preceding auditory-only speech, and a control task (face-identity recognition) containing exactly the same stimuli. We localized a visual-speech processing network by contrasting activity during visual-speech recognition with the control task. Within this network, the left posterior superior temporal sulcus (STS) showed increased activity and interacted with auditory-speech areas if prior information from auditory speech did not match the visual speech. This mismatch-related activity and the functional connectivity to auditory-speech areas were specific for speech, i.e., they were not present in the control task. The mismatch-related activity correlated positively with performance, indicating that posterior STS was behaviorally relevant for visual-speech recognition. In line with predictive coding frameworks, these findings suggest that prediction error signals are produced if visually presented speech does not match the prediction from preceding auditory speech, and that this mechanism plays a role in optimizing visual-speech recognition by prior information.

  2. Attention fine-tunes auditory-motor processing of speech sounds.

    Science.gov (United States)

    Möttönen, Riikka; van de Ven, Gido M; Watkins, Kate E

    2014-03-12

    The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex also contributes to speech processing. For example, stimulation of the motor lip representation specifically influences discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on the specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation specifically modulated the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.

  3. Diminished Auditory Responses during NREM Sleep Correlate with the Hierarchy of Language Processing.

    Directory of Open Access Journals (Sweden)

    Meytal Wilf

    Natural sleep provides a powerful model system for studying the neuronal correlates of awareness and state changes in the human brain. To quantitatively map the nature of sleep-induced modulations in sensory responses, we presented participants with auditory stimuli possessing different levels of linguistic complexity. Ten participants were scanned using functional magnetic resonance imaging (fMRI) during the waking state and after falling asleep. Sleep staging was based on heart rate measures, validated independently on 20 participants using concurrent EEG and heart rate measurements, and the results were confirmed using permutation analysis. Participants were exposed to three types of auditory stimuli: scrambled sounds, meaningless word sentences, and comprehensible sentences. During non-rapid eye movement (NREM) sleep, we found diminishing brain activation along the hierarchy of language processing, more pronounced in higher processing regions. Specifically, the auditory thalamus showed similar activation levels during sleep and waking states; primary auditory cortex remained activated but showed a significant reduction in auditory responses during sleep; and the high-order language-related representation in inferior frontal gyrus (IFG) cortex showed a complete abolishment of responses during NREM sleep. In addition to an overall activation decrease in language-processing regions in superior temporal gyrus and IFG, those areas manifested a loss of semantic selectivity during NREM sleep. Our results suggest that the decreased awareness of linguistic auditory stimuli during NREM sleep is linked to diminished activity in high-order processing stations.

  4. Relating binaural pitch perception to the individual listener's auditory profile

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

    The ability of eight normal-hearing listeners and fourteen listeners with sensorineural hearing loss to detect and identify pitch contours was measured for binaural-pitch stimuli and salience-matched monaurally detectable pitches. In an effort to determine whether impaired binaural pitch perception...... were found not to perceive binaural pitch at all, despite a clear detection of monaural pitch. While both binaural and monaural pitches were detectable by all other listeners, identification scores were significantly lower for binaural than for monaural pitch. A total absence of binaural pitch...... sensation coexisted with a loss of a binaural signal-detection advantage in noise, without implying reduced cognitive function. Auditory filter bandwidths did not correlate with the difference in pitch identification scores between binaural and monaural pitches. However, subjects with impaired binaural...
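    The auditory filter bandwidths mentioned in the record above are conventionally expressed as equivalent rectangular bandwidths (ERBs); the standard Glasberg and Moore (1990) approximation for normal-hearing listeners is ERB(f) = 24.7(4.37 f/1000 + 1) Hz. A minimal sketch of that standard formula (not code from this study):

    ```python
    import math

    def erb_hz(f_hz):
        """Equivalent rectangular bandwidth (Hz) of the normal-hearing
        auditory filter centred at f_hz, per Glasberg & Moore (1990)."""
        return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

    def erb_number(f_hz):
        """ERB-number (Cam) scale: cumulative count of ERBs below f_hz."""
        return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

    for f in (250, 1000, 4000):
        print(f"{f:5d} Hz: ERB = {erb_hz(f):6.1f} Hz, ERB-number = {erb_number(f):5.2f}")
    ```

    For sensorineural hearing loss of the kind studied here, measured filters are typically broader than this normal-hearing prediction, which is why individual bandwidth estimates are compared against it.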

  5. Human visual system: psychophysical evidence for low angular frequency filters

    Directory of Open Access Journals (Sweden)

    Natanael Antonio dos Santos

    2005-04-01

    The aim of this work was to measure narrow-band frequency response curves for four angular frequency filters with test frequencies of 1, 2, 3 and 4 cycles/360º. Nine curves were estimated for each filter using a supra-threshold response summation psychophysical method combined with a forced-choice method; five adult observers with normal or corrected visual acuity took part. The results showed maximum contrast-threshold summation at the test frequency of each of the 1, 2, 3 and 4 cycles/360º filters, flanked on both sides by inhibition at the neighbouring angular frequencies. These results are consistent with narrow-band angular frequency filters operating in the human visual system through summation or inhibition within specific low angular frequency ranges.

  6. Matching Pursuit Analysis of Auditory Receptive Fields' Spectro-Temporal Properties

    Science.gov (United States)

    Bach, Jörg-Hendrik; Kollmeier, Birger; Anemüller, Jörn

    2017-01-01

    Gabor filters have long been proposed as models for spectro-temporal receptive fields (STRFs), with their specific spectral and temporal rate of modulation qualitatively replicating characteristics of STRF filters estimated from responses to auditory stimuli in physiological data. The present study builds on the Gabor-STRF model by proposing a methodology to quantitatively decompose STRFs into a set of optimally matched Gabor filters through matching pursuit, and by quantitatively evaluating spectral and temporal characteristics of STRFs in terms of the derived optimal Gabor-parameters. To summarize a neuron's spectro-temporal characteristics, we introduce a measure for the “diagonality,” i.e., the extent to which an STRF exhibits spectro-temporal transients which cannot be factorized into a product of a spectral and a temporal modulation. With this methodology, it is shown that approximately half of 52 analyzed zebra finch STRFs can each be well approximated by a single Gabor or a linear combination of two Gabor filters. Moreover, the dominant Gabor functions tend to be oriented either in the spectral or in the temporal direction, with truly “diagonal” Gabor functions rarely being necessary for reconstruction of an STRF's main characteristics. As a toy example for the applicability of STRF and Gabor-STRF filters to auditory detection tasks, we use STRF filters as features in an automatic event detection task and compare them to idealized Gabor filters and mel-frequency cepstral coefficients (MFCCs). STRFs classify a set of six everyday sounds with an accuracy similar to reference Gabor features (94% recognition rate). Spectro-temporal STRF and Gabor features outperform reference spectral MFCCs in quiet and in low noise conditions (down to 0 dB signal to noise ratio). PMID:28232791
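    The matching-pursuit decomposition described above can be illustrated with a toy 1-D version: greedily select, at each iteration, the dictionary atom with the largest inner product with the current residual, then subtract its projection. The sketch below uses generic Gaussian-windowed cosine (Gabor) atoms; the grid of centers, frequencies, and widths is illustrative, not the study's dictionary.

    ```python
    import numpy as np

    def gabor_atom(n, center, freq, width):
        """Unit-norm 1-D Gabor atom: a Gaussian-windowed cosine."""
        t = np.arange(n)
        g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
        return g / np.linalg.norm(g)

    def matching_pursuit(signal, dictionary, n_iter):
        """Greedy matching pursuit: repeatedly subtract the projection onto
        the atom most correlated with the current residual."""
        residual = signal.astype(float).copy()
        picks = []
        for _ in range(n_iter):
            corr = dictionary @ residual            # inner product with every atom
            k = int(np.argmax(np.abs(corr)))
            picks.append((k, corr[k]))
            residual = residual - corr[k] * dictionary[k]
        return picks, residual

    # Small dictionary: Gabor atoms on a grid of centers and modulation frequencies.
    n = 256
    params = [(c, f, 12.0) for c in range(16, n, 32) for f in (0.05, 0.1, 0.2)]
    D = np.array([gabor_atom(n, c, f, w) for c, f, w in params])

    # Decompose a signal built from two well-separated atoms of the dictionary.
    sig = 2.0 * D[5] + 1.0 * D[17]
    picks, res = matching_pursuit(sig, D, 5)
    print("first atoms selected:", sorted(k for k, _ in picks[:2]))
    print("residual energy:", float(np.linalg.norm(res)))
    ```

    Because the two planted atoms are nearly orthogonal, the first two iterations recover them and the residual energy collapses; with correlated atoms, more iterations and smaller correction coefficients are typical.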

  7. Functional dissociation between regularity encoding and deviance detection along the auditory hierarchy.

    Science.gov (United States)

    Aghamolaei, Maryam; Zarnowiec, Katarzyna; Grimm, Sabine; Escera, Carles

    2016-02-01

    Auditory deviance detection based on regularity encoding appears as one of the basic functional properties of the auditory system. It has traditionally been assessed with the mismatch negativity (MMN) long-latency component of the auditory evoked potential (AEP). Recent studies have found earlier correlates of deviance detection based on regularity encoding. They occur in humans in the first 50 ms after sound onset, at the level of the middle-latency response of the AEP, and parallel findings of stimulus-specific adaptation observed in animal studies. However, the functional relationship between these different levels of regularity encoding and deviance detection along the auditory hierarchy has not yet been clarified. Here we addressed this issue by examining deviant-related responses at different levels of the auditory hierarchy to stimulus changes varying in their degree of deviation regarding the spatial location of a repeated standard stimulus. Auditory stimuli were presented randomly from five loudspeakers at azimuthal angles of 0°, 12°, 24°, 36° and 48° during oddball and reversed-oddball conditions. Middle-latency responses and MMN were measured. Our results revealed that middle-latency responses were sensitive to deviance but not the degree of deviation, wher