WorldWideScience

Sample records for auditory stream formation

  1. Animal models for auditory streaming

    Science.gov (United States)

    Itatani, Naoya

    2017-01-01

    Sounds in the natural environment need to be assigned to acoustic sources to evaluate complex auditory scenes. Separating sources will affect the analysis of auditory features of sounds. As the benefits of assigning sounds to specific sources accrue to all species communicating acoustically, the ability for auditory scene analysis is widespread among different animals. Animal studies allow for a deeper insight into the neuronal mechanisms underlying auditory scene analysis. Here, we will review the paradigms applied in the study of auditory scene analysis and streaming of sequential sounds in animal models. We will compare the psychophysical results from the animal studies to the evidence obtained in human psychophysics of auditory streaming, i.e. in a task commonly used for measuring the capability for auditory scene analysis. Furthermore, the neuronal correlates of auditory streaming will be reviewed in different animal models and the observations of the neurons’ response measures will be related to perception. The across-species comparison will reveal whether similar demands in the analysis of acoustic scenes have resulted in similar perceptual and neuronal processing mechanisms in the wide range of species being capable of auditory scene analysis. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044022

  2. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation…, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models… of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent...
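
    To make the temporal-coherence idea concrete, the following minimal Python sketch compares the envelopes of two peripheral channels; the idealized square-wave envelopes are illustrative assumptions, and the mapping from correlation to one versus two streams is only indicated qualitatively in the comments. The thesis' full model includes auditory preprocessing that is omitted here.

```python
import numpy as np

def channel_coherence(env_a, env_b):
    """Pearson correlation between the envelopes of two auditory channels.

    In a temporal-coherence account, channels driven coherently (high
    correlation) are grouped into one stream; channels driven
    incoherently (low or negative correlation) are segregated.
    """
    a = env_a - env_a.mean()
    b = env_b - env_b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Toy envelopes for two frequency channels responding to an ABAB sequence:
# synchronous bursts -> coherent (one stream); alternating bursts -> incoherent.
t = np.arange(0, 2.0, 0.001)
bursts = (np.sin(2 * np.pi * 5 * t) > 0).astype(float)        # channel A
synchronous = bursts.copy()                                    # channel B, in step
alternating = (np.sin(2 * np.pi * 5 * t + np.pi) > 0).astype(float)

print(channel_coherence(bursts, synchronous))   # ~ +1 -> integrated
print(channel_coherence(bursts, alternating))   # ~ -1 -> segregated
```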

  3. The Effect of Working Memory Training on Auditory Stream Segregation in Auditory Processing Disorders Children

    OpenAIRE

    Abdollah Moossavi; Saeideh Mehrkian; Yones Lotfi; Soghrat Faghih zadeh; Hamed Adjedi

    2015-01-01

    Objectives: This study investigated the efficacy of working memory training for improving working memory capacity and related auditory stream segregation in children with auditory processing disorder. Methods: Fifteen subjects (9-11 years), clinically diagnosed with auditory processing disorder, participated in this non-randomized case-controlled trial. Working memory abilities and auditory stream segregation were evaluated prior to beginning and six weeks after completing the training program...

  4. Auditory Streaming as an Online Classification Process with Evidence Accumulation

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report on-going switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named “auditory streaming”. Auditory streaming is believed to be a manifestation of human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally. PMID:26671774
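
    The following toy Python sketch illustrates the evidence-accumulation account summarized in this abstract; the drift, noise, bound distribution, and carry-over factor are arbitrary illustrative values rather than the authors' fitted model, but the mechanism reproduces the two key qualitative features: stochastic switching (bistability) and a positive correlation between successive phase durations.

```python
import numpy as np

def simulate_phase_durations(n_phases=5000, drift=1.0, noise=1.0, carry=0.5,
                             rng=None):
    """Toy evidence-accumulation account of bistable streaming.

    In each phase, evidence for the opposite percept accumulates with a
    constant drift plus noise until it exceeds a stochastic bound.  The
    evidence present at the switch becomes support for the new percept,
    which the next phase's accumulation must overcome first, so a long
    phase tends to be followed by another long phase.
    """
    rng = rng or np.random.default_rng(0)
    durations, support = [], 0.0
    for _ in range(n_phases):
        bound = rng.gamma(shape=4.0, scale=2.0)   # noisy switching bound
        evidence, t = 0.0, 0
        while evidence < support + bound:
            t += 1
            evidence += drift + noise * rng.standard_normal()
        durations.append(t)
        support = carry * evidence                # carried-over evidence
    return np.asarray(durations, dtype=float)

d = simulate_phase_durations()
print("lag-1 correlation of successive phase durations:",
      round(np.corrcoef(d[:-1], d[1:])[0, 1], 2))
```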

  5. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences

    Directory of Open Access Journals (Sweden)

    Stanislava Knyazeva

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman’s population separation model of auditory streaming.

  6. Neuronal Correlates of Auditory Streaming in Monkey Auditory Cortex for Tone Sequences without Spectral Differences.

    Science.gov (United States)

    Knyazeva, Stanislava; Selezneva, Elena; Gorkin, Alexander; Aggelopoulos, Nikolaos C; Brosch, Michael

    2018-01-01

    This study finds a neuronal correlate of auditory perceptual streaming in the primary auditory cortex for sequences of tone complexes that have the same amplitude spectrum but a different phase spectrum. Our finding is based on microelectrode recordings of multiunit activity from 270 cortical sites in three awake macaque monkeys. The monkeys were presented with repeated sequences of a tone triplet that consisted of an A tone, a B tone, another A tone and then a pause. The A and B tones were composed of unresolved harmonics formed by adding the harmonics in cosine phase, in alternating phase, or in random phase. A previous psychophysical study on humans revealed that when the A and B tones are similar, humans integrate them into a single auditory stream; when the A and B tones are dissimilar, humans segregate them into separate auditory streams. We found that the similarity of neuronal rate responses to the triplets was highest when all A and B tones had cosine phase. Similarity was intermediate when the A tones had cosine phase and the B tones had alternating phase. Similarity was lowest when the A tones had cosine phase and the B tones had random phase. The present study corroborates and extends previous reports, showing similar correspondences between neuronal activity in the primary auditory cortex and auditory streaming of sound sequences. It also is consistent with Fishman's population separation model of auditory streaming.

  7. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR in the stream conditions; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in EEG data using MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in CMR experiments. Dipole source analysis indicated different cortical regions involved in processing auditory streaming and modulation detection. In particular, neural sources for processing auditory streaming include cortical regions involved in decision-making. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder

    Directory of Open Access Journals (Sweden)

    Yones Lotfi

    2016-03-01

    Background: This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). Methods: The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9–11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. Results: The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. Conclusion: The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.

  9. Relation between Working Memory Capacity and Auditory Stream Segregation in Children with Auditory Processing Disorder.

    Science.gov (United States)

    Lotfi, Yones; Mehrkian, Saiedeh; Moossavi, Abdollah; Zadeh, Soghrat Faghih; Sadjedi, Hamed

    2016-03-01

    This study assessed the relationship between working memory capacity and auditory stream segregation by using the concurrent minimum audible angle in children with a diagnosed auditory processing disorder (APD). The participants in this cross-sectional, comparative study were 20 typically developing children and 15 children with a diagnosed APD (age, 9-11 years) according to the subtests of multiple-processing auditory assessment. Auditory stream segregation was investigated using the concurrent minimum audible angle. Working memory capacity was evaluated using the non-word repetition and forward and backward digit span tasks. Nonparametric statistics were utilized to compare the between-group differences. The Pearson correlation was employed to measure the degree of association between working memory capacity and the localization tests between the 2 groups. The group with APD had significantly lower scores than did the typically developing subjects in auditory stream segregation and working memory capacity. There were significant negative correlations between working memory capacity and the concurrent minimum audible angle in the most frontal reference location (0° azimuth) and lower negative correlations in the most lateral reference location (60° azimuth) in the children with APD. The study revealed a relationship between working memory capacity and auditory stream segregation in children with APD. The research suggests that lower working memory capacity in children with APD may be the possible cause of the inability to segregate and group incoming information.

  10. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  11. Auditory evoked potentials to abrupt pitch and timbre change of complex tones: electrophysiological evidence of 'streaming'?

    Science.gov (United States)

    Jones, S J; Longe, O; Vaz Pato, M

    1998-03-01

    Examination of the cortical auditory evoked potentials to complex tones changing in pitch and timbre suggests a useful new method for investigating higher auditory processes, in particular those concerned with 'streaming' and auditory object formation. The main conclusions were: (i) the N1 evoked by a sudden change in pitch or timbre was more posteriorly distributed than the N1 at the onset of the tone, indicating at least partial segregation of the neuronal populations responsive to sound onset and spectral change; (ii) the T-complex was consistently larger over the right hemisphere, consistent with clinical and PET evidence for particular involvement of the right temporal lobe in the processing of timbral and musical material; (iii) responses to timbral change were relatively unaffected by increasing the rate of interspersed changes in pitch, suggesting a mechanism for detecting the onset of a new voice in a constantly modulated sound stream; (iv) responses to onset, offset and pitch change of complex tones were relatively unaffected by interfering tones when the latter were of a different timbre, suggesting these responses must be generated subsequent to auditory stream segregation.

  12. Time course of auditory streaming: Do CI users differ from normal-hearing listeners?

    Directory of Open Access Journals (Sweden)

    Martin Böckmann-Barthel

    2014-07-01

    In a complex acoustical environment with multiple sound sources, the auditory system uses streaming as a tool to organize the incoming sounds in one or more streams depending on the stimulus parameters. Streaming is commonly studied by alternating sequences of signals. These are often tones with different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, where hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s long sequences of alternating A and B harmonic complexes at four different fundamental frequency separations, ranging from 2 to 14 semitones. They had to indicate as promptly as possible after sequence onset whether they perceived one stream or two streams and, in addition, any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a new analysis of data of normal-hearing listeners which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of the normal-hearing listeners: (i) the probability of the first decision to be a one-stream percept decreased and that of a two-stream percept increased as Δf increased, and (ii) a build-up was only found for 6 semitones. Only the time elapsed before the listeners made their first decision about the percept was prolonged as compared to normal-hearing listeners. The similarity in the data of the CI users and the normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.

  13. Auditory and phonetic category formation

    NARCIS (Netherlands)

    Goudbeek, Martijn; Cutler, A.; Smits, R.; Swingley, D.; Cohen, Henri; Lefebvre, Claire

    2017-01-01

    Among infants' first steps in language acquisition is learning the relevant contrasts of the language-specific phonemic repertoire. This learning is viewed as the formation of categories in a multidimensional psychophysical space. Research in the visual modality has shown that for adults, some kinds

  14. Auditory stream segregation using amplitude modulated bandpass noise

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2015-08-01

    The purpose of this study was to investigate the roles of spectral overlap and amplitude modulation (AM) rate for stream segregation for noise signals, as well as to test the build-up effect based on these two cues. Segregation ability was evaluated using an objective paradigm with listeners’ attention focused on stream segregation. Stimulus sequences consisted of two interleaved sets of bandpass noise bursts (A and B bursts). The A and B bursts differed in spectrum, AM-rate, or both. The amount of the difference between the two sets of noise bursts was varied. Long and short sequences were studied to investigate the build-up effect for segregation based on spectral and AM-rate differences. Results showed the following: (1) stream segregation ability increased with greater spectral separation; (2) larger AM-rate separations were associated with stronger segregation abilities; (3) spectral separation was found to elicit the build-up effect for the range of spectral differences assessed in the current study; (4) AM-rate separation interacted with spectral separation, suggesting an additive effect of spectral separation and AM-rate separation on segregation build-up. The findings suggest that, when normal-hearing listeners direct their attention toward segregation, they are able to segregate auditory streams based on reduced spectral contrast cues that vary by the amount of spectral overlap. Further, regardless of the spectral separation, they were able to use AM-rate difference as a secondary/weaker cue. Based on the spectral differences, listeners can segregate auditory streams better as the listening duration is prolonged—i.e. sparse spectral cues elicit build-up segregation; however, AM-rate differences only appear to elicit build-up when in combination with spectral difference cues.

  15. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence, ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, D_KL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of D_KL provided that the statistical properties of sequences are symmetric. Results of experiments are reported supporting this prediction.
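
    As a small worked example of the central quantity in this analysis, the sketch below evaluates the Kullback-Leibler divergence D_KL between two Gaussian distributions of tone parameters; the Gaussian form and the particular means and standard deviations are illustrative assumptions, not the article's actual stimulus statistics.

```python
import numpy as np

def kl_gaussian(mu_p, sigma_p, mu_q, sigma_q):
    """D_KL( N(mu_p, sigma_p^2) || N(mu_q, sigma_q^2) ), in nats."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2.0 * sigma_q**2)
            - 0.5)

# Statistical separation of two tone sequences whose frequencies (Hz) are
# drawn from Gaussians; larger D_KL -> easier streaming / change detection.
print(kl_gaussian(1000.0, 50.0, 1000.0, 50.0))  # identical -> 0.0
print(kl_gaussian(1000.0, 50.0, 1100.0, 50.0))  # separated -> 2.0
```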

  16. Automatic phoneme category selectivity in the dorsal auditory stream.

    Science.gov (United States)

    Chevillet, Mark A; Jiang, Xiong; Rauschecker, Josef P; Riesenhuber, Maximilian

    2013-03-20

    Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.

  17. Auditory Stream Segregation in Autism Spectrum Disorder: Benefits and Downsides of Superior Perceptual Processes

    Science.gov (United States)

    Bouvet, Lucie; Mottron, Laurent; Valdois, Sylviane; Donnadieu, Sophie

    2016-01-01

    Auditory stream segregation allows us to organize our sound environment, by focusing on specific information and ignoring what is unimportant. One previous study reported difficulty in stream segregation ability in children with Asperger syndrome. In order to investigate this question further, we used an interleaved melody recognition task with…

  18. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Karsten Specht

    2013-01-01

    Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus...

  19. Mapping a lateralization gradient within the ventral stream for auditory speech perception

    OpenAIRE

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus....

  20. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives—a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the
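
    A drastically simplified sketch of the pattern-discovery idea is given below: candidate organisations of an ABA_ sequence are scored by their predictive success on the incoming symbols. It omits the model's on-the-fly fragment creation, competition, and stochastic switching, but it shows why competition is needed: the integrated and segregated readings both predict the input perfectly.

```python
# Candidate organisations of a repeating ABA_ triplet stream, expressed as
# the symbol each predicts at every slot of the 4-slot cycle (None = the
# organisation makes no claim about that slot).
PATTERNS = {
    "integrated (ABA_)":   ["A", "B", "A", "_"],
    "segregated A (A_A_)": ["A", None, "A", None],
    "segregated B (_B__)": [None, "B", None, None],
}

def predictive_success(sequence, pattern):
    """Fraction of the slots a pattern predicts that match the input."""
    hits = total = 0
    for i, symbol in enumerate(sequence):
        predicted = pattern[i % len(pattern)]
        if predicted is not None:
            total += 1
            hits += int(predicted == symbol)
    return hits / total

stream = ["A", "B", "A", "_"] * 50
for name, pattern in PATTERNS.items():
    print(f"{name}: predictive success = {predictive_success(stream, pattern):.2f}")
# All organisations predict the sequence perfectly, so they must compete;
# in the full model this competition produces switching (bistability).
```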

  1. Comparison of perceptual properties of auditory streaming between spectral and amplitude modulation domains.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2017-07-01

    The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency ("spectral cue": ΔF_TONE, TONE condition) but also in the amplitude modulation rate ("AM cue": ΔF_AM, AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming for the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the property of the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔF_AM and ΔF_TONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming are different between or sensitive to the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest a possibility that the neural substrates for spectral and AM processes share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
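
    The sketch below synthesizes ABA_ sequences for the two cue types compared in this study; the sampling rate, tone duration, carrier frequencies, and modulation rates are illustrative choices, not the stimulus parameters actually used.

```python
import numpy as np

FS = 16_000  # sampling rate in Hz (illustrative)

def tone(freq, dur=0.1, am_rate=None, fs=FS):
    """Pure tone of `dur` seconds, optionally 100%-amplitude-modulated."""
    t = np.arange(int(dur * fs)) / fs
    y = np.sin(2 * np.pi * freq * t)
    if am_rate is not None:
        y *= 0.5 * (1.0 + np.sin(2 * np.pi * am_rate * t))
    n = int(0.005 * fs)                       # 5-ms raised-cosine ramps
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(n) / n))
    y[:n] *= ramp
    y[-n:] *= ramp[::-1]
    return y

def aba_sequence(a, b, dur=0.1, n_triplets=20):
    """Repeating A-B-A-pause triplets; `a` and `b` are tone() keyword dicts."""
    a_sig, b_sig = tone(dur=dur, **a), tone(dur=dur, **b)
    pause = np.zeros(int(dur * FS))
    return np.tile(np.concatenate([a_sig, b_sig, a_sig, pause]), n_triplets)

# TONE condition: A and B differ in carrier frequency (spectral cue).
seq_tone = aba_sequence({"freq": 500.0}, {"freq": 707.0})
# AM condition: same carrier, A and B differ in modulation rate (AM cue).
seq_am = aba_sequence({"freq": 4000.0, "am_rate": 30.0},
                      {"freq": 4000.0, "am_rate": 120.0})
print(seq_tone.shape, seq_am.shape)
```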

  2. Segregation and integration of auditory streams when listening to multi-part music.

    Science.gov (United States)

    Ragert, Marie; Fairhurst, Merle T; Keller, Peter E

    2014-01-01

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams

  3. Segregation and integration of auditory streams when listening to multi-part music.

    Directory of Open Access Journals (Sweden)

    Marie Ragert

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of…

  4. Auditory Stream Segregation Improves Infants' Selective Attention to Target Tones Amid Distracters

    Science.gov (United States)

    Smith, Nicholas A.; Trainor, Laurel J.

    2011-01-01

    This study examined the role of auditory stream segregation in the selective attention to target tones in infancy. Using a task adapted from Bregman and Rudnicky's 1975 study and implemented in a conditioned head-turn procedure, infant and adult listeners had to discriminate the temporal order of 2,200 and 2,400 Hz target tones presented alone,…

  5. Membrane potential dynamics of populations of cortical neurons during auditory streaming

    Science.gov (United States)

    Farley, Brandon J.

    2015-01-01

    How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts. PMID:26269558

  6. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  7. Effects of tonotopicity, adaptation, modulation tuning, and temporal coherence in “primitive” auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2014-01-01

    […, Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) Stream segregation as a function of tone repetition time (TRT) and frequency separation (Δf) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present… asynchrony of spectral components, facilitating the listeners’ ability to segregate temporally overlapping sounds into separate auditory objects. Overall, the modeling framework may be useful to study the contributions of bottom-up auditory features on “primitive” grouping, also in more complex acoustic...

  8. Automaticity and primacy of auditory streaming: Concurrent subjective and objective measures.

    Science.gov (United States)

    Billig, Alexander J; Carlyon, Robert P

    2016-03-01

    Two experiments used subjective and objective measures to study the automaticity and primacy of auditory streaming. Listeners heard sequences of "ABA-" triplets, where "A" and "B" were tones of different frequencies and "-" was a silent gap. Segregation was more frequently reported, and rhythmically deviant triplets less well detected, for a greater between-tone frequency separation and later in the sequence. In Experiment 1, performing a competing auditory task for the first part of the sequence led to a reduction in subsequent streaming compared to when the tones were attended throughout. This is consistent with focused attention promoting streaming, and/or with attention switches resetting it. However, the proportion of segregated reports increased more rapidly following a switch than at the start of a sequence, indicating that some streaming occurred automatically. Modeling ruled out a simple "covert attention" account of this finding. Experiment 2 required listeners to perform subjective and objective tasks concurrently. It revealed superior performance during integrated compared to segregated reports, beyond that explained by the codependence of the two measures on stimulus parameters. We argue that listeners have limited access to low-level stimulus representations once perceptual organization has occurred, and that subjective and objective streaming measures partly index the same processes. (c) 2016 APA, all rights reserved).

  9. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2014-09-01

    The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation with manipulations introduced on the last stimulus. The last B burst was either delayed for 50% of the sequences or not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies – as spectral information available through a CI device or simulation is substantially degraded, it may require more attention to achieve stream segregation.

  10. Mapping a lateralisation gradient within the ventral stream for auditory speech perception

    Directory of Open Access Journals (Sweden)

    Karsten Specht

    2013-10-01

    Recent models on speech perception propose a dual stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend towards the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging (fMRI) studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesised, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a lateralisation gradient. This increasing leftward lateralisation was particularly evident for the left superior temporal sulcus (STS) and more anterior parts of the temporal lobe.

  11. Mapping a lateralization gradient within the ventral stream for auditory speech perception.

    Science.gov (United States)

    Specht, Karsten

    2013-01-01

    Recent models on speech perception propose a dual-stream processing network, with a dorsal stream, extending from the posterior temporal lobe of the left hemisphere through inferior parietal areas into the left inferior frontal gyrus, and a ventral stream that is assumed to originate in the primary auditory cortex in the upper posterior part of the temporal lobe and to extend toward the anterior part of the temporal lobe, where it may connect to the ventral part of the inferior frontal gyrus. This article describes and reviews the results from a series of complementary functional magnetic resonance imaging studies that aimed to trace the hierarchical processing network for speech comprehension within the left and right hemisphere with a particular focus on the temporal lobe and the ventral stream. As hypothesized, the results demonstrate a bilateral involvement of the temporal lobes in the processing of speech signals. However, an increasing leftward asymmetry was detected from auditory-phonetic to lexico-semantic processing and along the posterior-anterior axis, thus forming a "lateralization" gradient. This increasing leftward lateralization was particularly evident for the left superior temporal sulcus and more anterior parts of the temporal lobe.

  12. Syntactic processing in music and language: Effects of interrupting auditory streams with alternating timbres.

    Science.gov (United States)

    Fiveash, Anna; Thompson, William Forde; Badcock, Nicholas A; McArthur, Genevieve

    2018-07-01

    Music and language both rely on the processing of spectral (pitch, timbre) and temporal (rhythm) information to create structure and meaning from incoming auditory streams. Behavioral results have shown that interrupting a melodic stream with unexpected changes in timbre leads to reduced syntactic processing. Such findings suggest that syntactic processing is conditional on successful streaming of incoming sequential information. The current study used event-related potentials (ERPs) to investigate whether (1) the effect of alternating timbres on syntactic processing is reflected in a reduced brain response to syntactic violations, and (2) the phenomenon is similar for music and language. Participants listened to melodies and sentences with either one timbre (piano or one voice) or three timbres (piano, guitar, and vibraphone, or three different voices). Half the stimuli contained syntactic violations: an out-of-key note in the melodies, and a phrase-structure violation in the sentences. We found smaller ERPs to syntactic violations in music in the three-timbre compared to the one-timbre condition, reflected in a reduced early right anterior negativity (ERAN). A similar but non-significant pattern was observed for language stimuli in both the early left anterior negativity (ELAN) and the left anterior negativity (LAN) ERPs. The results suggest that disruptions to auditory streaming may interfere with syntactic processing, especially for melodic sequences. Copyright © 2018 Elsevier B.V. All rights reserved.

  13. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Auditory scene analysis describes the ability to segregate relevant sounds out from the environment and to integrate them into a single sound stream using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performances in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming, inter-stimulus-interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was altered in a staircase procedure. By using this paradigm, it is possible to test within-subject across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, as well as being significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements particularly in groups for which there is little prior knowledge on the relevant parameter space for streaming perception.
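
    As an illustration of the adaptive procedure referred to above, the sketch below runs a generic 2-down/1-up staircase on the frequency difference Δf with ISI held fixed; the tracking rule, step size, and simulated listener are assumptions for demonstration rather than the exact procedure of the study.

```python
import numpy as np

def staircase_threshold(respond, start=12.0, step=1.0, n_reversals=8):
    """2-down/1-up staircase on the frequency difference df (semitones).

    `respond(df)` returns True on a correct trial.  df is decreased after
    two consecutive correct responses and increased after one error, so
    the track converges near the 70.7%-correct point; the threshold is
    the mean of the last reversal values.
    """
    df, run, direction, reversals = start, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(df):
            run += 1
            if run == 2:
                run = 0
                if direction == +1:
                    reversals.append(df)
                direction = -1
                df = max(df - step, 0.0)
        else:
            run = 0
            if direction == -1:
                reversals.append(df)
            direction = +1
            df += step
    return float(np.mean(reversals[-6:]))

# Simulated listener whose performance improves with larger df (a toy
# psychometric function standing in for one of the real tasks).
rng = np.random.default_rng(1)
simulated_listener = lambda df: rng.random() < 1.0 / (1.0 + np.exp(-(df - 5.0)))
print(staircase_threshold(simulated_listener))
```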

  14. Probing neural mechanisms underlying auditory stream segregation in humans by transcranial direct current stimulation (tDCS).

    Science.gov (United States)

    Deike, Susann; Deliano, Matthias; Brechmann, André

    2016-10-01

    One hypothesis concerning the neural underpinnings of auditory streaming states that frequency tuning of tonotopically organized neurons in primary auditory fields in combination with physiological forward suppression is necessary for the separation of representations of high-frequency A and low-frequency B tones. The extent of spatial overlap between the tonotopic activations of A and B tones is thought to underlie the perceptual organization of streaming sequences into one coherent or two separate streams. The present study attempts to interfere with these mechanisms by transcranial direct current stimulation (tDCS) and to probe behavioral outcomes reflecting the perception of ABAB streaming sequences. We hypothesized that tDCS by modulating cortical excitability causes a change in the separateness of the representations of A and B tones, which leads to a change in the proportions of one-stream and two-stream percepts. To test this, 22 subjects were presented with ambiguous ABAB sequences of three different frequency separations (∆F) and had to decide on their current percept after receiving sham, anodal, or cathodal tDCS over the left auditory cortex. We could confirm our hypothesis at the most ambiguous ∆F condition of 6 semitones. For anodal compared with sham and cathodal stimulation, we found a significant decrease in the proportion of two-stream perception and an increase in the proportion of one-stream perception. The results demonstrate the feasibility of using tDCS to probe mechanisms underlying auditory streaming through the use of various behavioral measures. Moreover, this approach allows one to probe the functions of auditory regions and their interactions with other processing stages. Copyright © 2016 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Binding and unbinding the auditory and visual streams in the McGurk effect.

    Science.gov (United States)

    Nahorna, Olha; Berthommier, Frédéric; Schwartz, Jean-Luc

    2012-08-01

    Subjects presented with coherent auditory and visual streams generally fuse them into a single percept. This results in enhanced intelligibility in noise, or in visual modification of the auditory percept in the McGurk effect. It is classically considered that processing is done independently in the auditory and visual systems before interaction occurs at a certain representational stage, resulting in an integrated percept. However, some behavioral and neurophysiological data suggest the existence of a two-stage process. A first stage would involve binding together the appropriate pieces of audio and video information before fusion per se in a second stage. Then it should be possible to design experiments leading to unbinding. It is shown here that if a given McGurk stimulus is preceded by an incoherent audiovisual context, the amount of McGurk effect is largely reduced. Various kinds of incoherent contexts (acoustic syllables dubbed on video sentences or phonetic or temporal modifications of the acoustic content of a regular sequence of audiovisual syllables) can significantly reduce the McGurk effect even when they are short (less than 4 s). The data are interpreted in the framework of a two-stage "binding and fusion" model for audiovisual speech perception.

  16. Auditory stream formation affects comodulation masking release retroactively

    DEFF Research Database (Denmark)

    Dau, Torsten; Ewert, Stephan; Oxenham, A. J.

    2009-01-01

    … Detection thresholds for a 1-kHz sinusoidal signal were measured in the presence of a narrowband (20-Hz-wide) on-frequency masker with or without four comodulated or independent flanking bands that were spaced apart by either 1/6 (narrow spacing) or 1 octave (wide spacing). As expected, CMR was observed…

  17. Selective Entrainment of Theta Oscillations in the Dorsal Stream Causally Enhances Auditory Working Memory Performance.

    Science.gov (United States)

    Albouy, Philippe; Weiss, Aurélien; Baillet, Sylvain; Zatorre, Robert J

    2017-04-05

    The implication of the dorsal stream in manipulating auditory information in working memory has been recently established. However, the oscillatory dynamics within this network and its causal relationship with behavior remain undefined. Using simultaneous MEG/EEG, we show that theta oscillations in the dorsal stream predict participants' manipulation abilities during memory retention in a task requiring the comparison of two patterns differing in temporal order. We investigated the causal relationship between brain oscillations and behavior by applying theta-rhythmic TMS combined with EEG over the MEG-identified target (left intraparietal sulcus) during the silent interval between the two stimuli. Rhythmic TMS entrained theta oscillation and boosted participants' accuracy. TMS-induced oscillatory entrainment scaled with behavioral enhancement, and both gains varied with participants' baseline abilities. These effects were not seen for a melody-comparison control task and were not observed for arrhythmic TMS. These data establish theta activity in the dorsal stream as causally related to memory manipulation. VIDEO ABSTRACT. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. An Objective Measurement of the Build-Up of Auditory Streaming and of Its Modulation by Attention

    Science.gov (United States)

    Thompson, Sarah K.; Carlyon, Robert P.; Cusack, Rhodri

    2011-01-01

    Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by [delta]f semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better…

  19. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
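
    The classification principle described above, contrasting responses to attended versus unattended stimuli rather than standards versus oddballs, can be sketched in a few lines. This is a generic illustration rather than the study's pipeline; the epoch window, feature layout, and classifier settings are assumptions.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def trial_feature(eeg, left_onsets, right_onsets, fs, win_s=0.6):
          # average epochs locked to left- and right-stream stimuli and take their
          # difference as one feature vector per trial (channels x samples, flattened)
          n = int(win_s * fs)
          def erp(onsets):
              return np.mean([eeg[:, o:o + n] for o in onsets if o + n <= eeg.shape[1]], axis=0)
          return (erp(left_onsets) - erp(right_onsets)).ravel()

      # X: one feature vector per 5-s trial, y: attended side (0 = left, 1 = right)
      # clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      # choice = clf.predict(X_test)      # trial-by-trial yes/no decision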

  20. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  1. Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface

    Directory of Open Access Journals (Sweden)

    N. Jeremy Hill

    2012-12-01

    Full Text Available Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two dichotically presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously-published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80% and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  2. Communication and control by listening: toward optimal design of a two-class auditory streaming brain-computer interface.

    Science.gov (United States)

    Hill, N Jeremy; Moinuddin, Aisha; Häuser, Ann-Katrin; Kienzle, Stephan; Schalk, Gerwin

    2012-01-01

    Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two simultaneously presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80 and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one's eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely paralyzed users.

  3. Wireless network interface energy consumption implications of popular streaming formats

    Science.gov (United States)

    Chandra, Surendar

    2001-12-01

    With the proliferation of mobile streaming multimedia, available battery capacity constrains the end-user experience. Since streaming applications tend to be long running, the wireless network interface card's (WNIC) energy consumption is a particularly acute problem. In this work, we explore the WNIC energy consumption implications of popular multimedia streaming formats from Microsoft (Windows media), Real (Real media) and Apple (QuickTime). We investigate the energy consumption under varying stream bandwidth and network loss rates. We also explore history-based client-side strategies to reduce the energy consumed by transitioning the WNICs to a lower power consuming sleep state. We show that Microsoft media tends to transmit packets at regular intervals; streams optimized for 28.8 Kbps can save over 80% in energy consumption with 2% data loss. A high bandwidth stream (768 Kbps) can still save 57% in energy consumption with less than 0.3% data loss. For high bandwidth streams, Microsoft media exploits network-level packet fragmentation, which can lead to excessive packet loss (and wasted energy) in a lossy network. Real stream packets tend to be sent closer to each other, especially at higher bandwidths. QuickTime packets sometimes arrive in quick succession; most likely an application-level fragmentation mechanism. Such packets are harder to predict at the network level without understanding the packet semantics.
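
    A history-based client-side strategy of the kind mentioned above can be illustrated with a small sketch: the client predicts the next packet arrival from recent inter-arrival times and sleeps the WNIC for a conservative fraction of the expected idle gap. The class name, parameter values, and averaging rule below are illustrative assumptions, not the paper's algorithm.

      from collections import deque

      class HistorySleepPolicy:
          def __init__(self, history=8, wake_margin_s=0.005, min_sleep_s=0.010):
              self.intervals = deque(maxlen=history)   # recent inter-arrival times (s)
              self.last_arrival = None
              self.wake_margin_s = wake_margin_s       # wake this much before the prediction
              self.min_sleep_s = min_sleep_s           # skip sleeps too short to pay off

          def on_packet(self, t_arrival_s):
              # call on every received packet; returns how long to sleep the WNIC (s)
              if self.last_arrival is not None:
                  self.intervals.append(t_arrival_s - self.last_arrival)
              self.last_arrival = t_arrival_s
              if not self.intervals:
                  return 0.0
              predicted_gap = sum(self.intervals) / len(self.intervals)
              sleep = predicted_gap - self.wake_margin_s
              return sleep if sleep >= self.min_sleep_s else 0.0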

  4. Event-related brain potential correlates of human auditory sensory memory-trace formation.

    Science.gov (United States)

    Haenschel, Corinna; Vernon, David J; Dwivedi, Prabuddh; Gruzelier, John H; Baldeweg, Torsten

    2005-11-09

    The event-related potential (ERP) component mismatch negativity (MMN) is a neural marker of human echoic memory. MMN is elicited by deviant sounds embedded in a stream of frequent standards, reflecting the deviation from an inferred memory trace of the standard stimulus. The strength of this memory trace is thought to be proportional to the number of repetitions of the standard tone, visible as the progressive enhancement of MMN with number of repetitions (MMN memory-trace effect). However, no direct ERP correlates of the formation of echoic memory traces are currently known. This study set out to investigate changes in ERPs to different numbers of repetitions of standards, delivered in a roving-stimulus paradigm in which the frequency of the standard stimulus changed randomly between stimulus trains. Normal healthy volunteers (n = 40) were engaged in two experimental conditions: during passive listening and while actively discriminating changes in tone frequency. As predicted, MMN increased with increasing number of standards. However, this MMN memory-trace effect was caused mainly by enhancement with stimulus repetition of a slow positive wave from 50 to 250 ms poststimulus in the standard ERP, which is termed here "repetition positivity" (RP). This RP was recorded from frontocentral electrodes when participants were passively listening to or actively discriminating changes in tone frequency. RP may represent a human ERP correlate of rapid and stimulus-specific adaptation, a candidate neuronal mechanism underlying sensory memory formation in the auditory cortex.
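
    The structure of a roving-stimulus sequence can be conveyed with a short sketch (a schematic illustration with assumed frequency set and train lengths, not the study's stimulus script): each train draws a new frequency at random, its first tone acts as a deviant, and each later tone is a standard whose repetition count indexes the strength of the emerging memory trace.

      import random

      def roving_sequence(n_trains=20, freqs_hz=(500, 630, 800, 1000, 1250),
                          train_lengths=(2, 6, 12, 36)):
          # returns a list of (frequency, repetition_index); index 0 marks the deviant
          events, prev_f = [], None
          for _ in range(n_trains):
              f = random.choice([f for f in freqs_hz if f != prev_f])
              for rep in range(random.choice(train_lengths)):
                  events.append((f, rep))
              prev_f = f
          return events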

  5. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N J; Schölkopf, B

    2012-01-01

    We report on the development and online testing of an EEG-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects’ modulation of N1 and P3 ERP components measured during single 5-second stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare “oddball” stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly-known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention-modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject’s attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology. PMID:22333135

  6. Dynamics of dendritic spines in the mouse auditory cortex during memory formation and memory recall.

    Science.gov (United States)

    Moczulska, Kaja Ewa; Tinter-Thiede, Juliane; Peter, Manuel; Ushakova, Lyubov; Wernle, Tanja; Bathellier, Brice; Rumpel, Simon

    2013-11-05

    Long-lasting changes in synaptic connections induced by relevant experiences are believed to represent the physical correlate of memories. Here, we combined chronic in vivo two-photon imaging of dendritic spines with auditory-cued classical conditioning to test if the formation of a fear memory is associated with structural changes of synapses in the mouse auditory cortex. We find that paired conditioning and unpaired conditioning induce a transient increase in spine formation or spine elimination, respectively. A fraction of spines formed during paired conditioning persists and leaves a long-lasting trace in the network. Memory recall triggered by the reexposure of mice to the sound cue did not lead to changes in spine dynamics. Our findings provide a synaptic mechanism for plasticity in sound responses of auditory cortex neurons induced by auditory-cued fear conditioning; they also show that retrieval of an auditory fear memory does not lead to a recapitulation of structural plasticity in the auditory cortex as observed during initial memory consolidation.

  7. Planetesimal Formation by the Streaming Instability in a Photoevaporating Disk

    Energy Technology Data Exchange (ETDEWEB)

    Carrera, Daniel; Johansen, Anders; Davies, Melvyn B. [Lund Observatory, Dept. of Astronomy and Theoretical Physics, Lund University, Box 43, SE-221 00 Lund (Sweden); Gorti, Uma [NASA Ames Research Center, Moffett Field, CA (United States)

    2017-04-10

    Recent years have seen growing interest in the streaming instability as a candidate mechanism to produce planetesimals. However, these investigations have been limited to small-scale simulations. We now present the results of a global protoplanetary disk evolution model that incorporates planetesimal formation by the streaming instability, along with viscous accretion, photoevaporation by EUV, FUV, and X-ray photons, dust evolution, the water ice line, and stratified turbulence. Our simulations produce massive (60–130 M⊕) planetesimal belts beyond 100 au and up to ∼20 M⊕ of planetesimals in the middle regions (3–100 au). Our most comprehensive model forms 8 M⊕ of planetesimals inside 3 au, where they can give rise to terrestrial planets. The planetesimal mass formed in the inner disk depends critically on the timing of the formation of an inner cavity in the disk by high-energy photons. Our results show that the combination of photoevaporation and the streaming instability is efficient at converting the solid component of protoplanetary disks into planetesimals. Our model, however, does not form enough early planetesimals in the inner and middle regions of the disk to give rise to giant planets and super-Earths with gaseous envelopes. Additional processes such as particle pileups and mass loss driven by MHD winds may be needed to drive the formation of early planetesimal generations in the planet-forming regions of protoplanetary disks.

  8. Human dorsal and ventral auditory streams subserve rehearsal-based and echoic processes during verbal working memory.

    Science.gov (United States)

    Buchsbaum, Bradley R; Olsen, Rosanna K; Koch, Paul; Berman, Karen Faith

    2005-11-23

    To hear a sequence of words and repeat them requires sensory-motor processing and something more: temporary storage. We investigated neural mechanisms of verbal memory by using fMRI and a task designed to tease apart perceptually based ("echoic") memory from phonological-articulatory memory. Sets of two- or three-word pairs were presented bimodally, followed by a cue indicating from which modality (auditory or visual) items were to be retrieved and rehearsed over a delay. Although delay-period activation in the planum temporale (PT) was insensitive to the source modality and showed sustained delay-period activity, the superior temporal gyrus (STG) activated more vigorously when the retrieved items had arrived in the auditory modality and showed transient delay-period activity. Functional connectivity analysis revealed two topographically distinct fronto-temporal circuits, with STG co-activating more strongly with ventrolateral prefrontal cortex and PT co-activating more strongly with dorsolateral prefrontal cortex. These findings argue for separate contributions of ventral and dorsal auditory streams in verbal working memory.

  9. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music

    Directory of Open Access Journals (Sweden)

    Niels R. Disbergen

    2018-03-01

    Full Text Available Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects (N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen

  10. Assessing Top-Down and Bottom-Up Contributions to Auditory Stream Segregation and Integration With Polyphonic Music.

    Science.gov (United States)

    Disbergen, Niels R; Valente, Giancarlo; Formisano, Elia; Zatorre, Robert J

    2018-01-01

    Polyphonic music listening well exemplifies processes typically involved in daily auditory scene analysis situations, relying on an interactive interplay between bottom-up and top-down processes. Most studies investigating scene analysis have used elementary auditory scenes, however real-world scene analysis is far more complex. In particular, music, contrary to most other natural auditory scenes, can be perceived by either integrating or, under attentive control, segregating sound streams, often carried by different instruments. One of the prominent bottom-up cues contributing to multi-instrument music perception is their timbre difference. In this work, we introduce and validate a novel paradigm designed to investigate, within naturalistic musical auditory scenes, attentive modulation as well as its interaction with bottom-up processes. Two psychophysical experiments are described, employing custom-composed two-voice polyphonic music pieces within a framework implementing a behavioral performance metric to validate listener instructions requiring either integration or segregation of scene elements. In Experiment 1, the listeners' locus of attention was switched between individual instruments or the aggregate (i.e., both instruments together), via a task requiring the detection of temporal modulations (i.e., triplets) incorporated within or across instruments. Subjects responded post-stimulus whether triplets were present in the to-be-attended instrument(s). Experiment 2 introduced the bottom-up manipulation by adding a three-level morphing of instrument timbre distance to the attentional framework. The task was designed to be used within neuroimaging paradigms; Experiment 2 was additionally validated behaviorally in the functional Magnetic Resonance Imaging (fMRI) environment. Experiment 1 subjects ( N = 29, non-musicians) completed the task at high levels of accuracy, showing no group differences between any experimental conditions. Nineteen listeners also

  11. Differential coding of conspecific vocalizations in the ventral auditory cortical stream.

    Science.gov (United States)

    Fukushima, Makoto; Saunders, Richard C; Leopold, David A; Mishkin, Mortimer; Averbeck, Bruno B

    2014-03-26

    The mammalian auditory cortex integrates spectral and temporal acoustic features to support the perception of complex sounds, including conspecific vocalizations. Here we investigate coding of vocal stimuli in different subfields in macaque auditory cortex. We simultaneously measured auditory evoked potentials over a large swath of primary and higher order auditory cortex along the supratemporal plane in three animals chronically using high-density microelectrocorticographic arrays. To evaluate the capacity of neural activity to discriminate individual stimuli in these high-dimensional datasets, we applied a regularized multivariate classifier to evoked potentials to conspecific vocalizations. We found a gradual decrease in the level of overall classification performance along the caudal to rostral axis. Furthermore, the performance in the caudal sectors was similar across individual stimuli, whereas the performance in the rostral sectors significantly differed for different stimuli. Moreover, the information about vocalizations in the caudal sectors was similar to the information about synthetic stimuli that contained only the spectral or temporal features of the original vocalizations. In the rostral sectors, however, the classification for vocalizations was significantly better than that for the synthetic stimuli, suggesting that conjoined spectral and temporal features were necessary to explain differential coding of vocalizations in the rostral areas. We also found that this coding in the rostral sector was carried primarily in the theta frequency band of the response. These findings illustrate a progression in neural coding of conspecific vocalizations along the ventral auditory pathway.
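
    A generic form of the regularized multivariate classification used to quantify how well neural responses discriminate individual stimuli could look like the sketch below. This is a plain L2-regularized logistic regression with cross-validation; the study's actual regularization scheme and feature extraction are not reproduced here.

      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      def decoding_accuracy(X, y, C=0.1, folds=5):
          # X: trials x features (e.g., evoked potentials, electrodes x time, flattened)
          # y: stimulus identity per trial; returns mean cross-validated accuracy
          clf = make_pipeline(StandardScaler(),
                              LogisticRegression(penalty="l2", C=C, max_iter=2000))
          return cross_val_score(clf, X, y, cv=folds).mean()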

  12. Neural Activity During The Formation Of A Giant Auditory Synapse

    NARCIS (Netherlands)

    M.C. Sierksma (Martijn)

    2018-01-01

    The formation of synapses is a critical step in the development of the brain. During this developmental stage neural activity propagates across the brain from synapse to synapse. This activity is thought to instruct the precise, topological connectivity found in the sensory central

  13. The auditory dorsal stream plays a crucial role in projecting hallucinated voices into external space

    NARCIS (Netherlands)

    Looijestijn, Jasper; Diederen, Kelly M. J.; Goekoop, Rutger; Sommer, Iris E. C.; Daalman, Kirstin; Kahn, Rene S.; Hoek, Hans W.; Blom, Jan Dirk

    Introduction: Verbal auditory hallucinations (VAHs) are experienced as spoken voices which seem to originate in the extracorporeal environment or inside the head. Animal and human research has identified a 'where' pathway for sound processing comprising the planum temporale, the middle frontal gyrus

  14. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    Full Text Available A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don’t yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process and that the complexity of the signal and additional sensory information have limited effect on this.

  15. Glycinergic Pathways of the Central Auditory System and Adjacent Reticular Formation of the Rat.

    Science.gov (United States)

    Hunter, Chyren

    The development of techniques to visualize and identify specific transmitters of neuronal circuits has stimulated work on the characterization of pathways in the rat central nervous system that utilize the inhibitory amino acid glycine as its neurotransmitter. Glycine is a major inhibitory transmitter in the spinal cord and brainstem of vertebrates where it satisfies the major criteria for neurotransmitter action. Some of these characteristics are: uneven distribution in brain, high affinity reuptake mechanisms, inhibitory neurophysiological actions on certain neuronal populations, uneven receptor distribution and the specific antagonism of its actions by the convulsant alkaloid strychnine. Behaviorally, antagonism of glycinergic neurotransmission in the medullary reticular formation is linked to the development of myoclonus and seizures which may be initiated by auditory as well as other stimuli. In the present study, decreases in the concentration of glycine as well as the density of glycine receptors in the medulla with aging were found and may be responsible for the lowered threshold for strychnine seizures observed in older rats. Neuroanatomical pathways in the central auditory system and medullary and pontine reticular formation (RF) were investigated using retrograde transport of tritiated glycine to identify glycinergic pathways; immunohistochemical techniques were used to corroborate the location of glycine neurons. Within the central auditory system, retrograde transport studies using tritiated glycine demonstrated an ipsilateral glycinergic pathway linking nuclei of the ascending auditory system. This pathway has its cell bodies in the medial nucleus of the trapezoid body (MNTB) and projects to the ventrocaudal division of the ventral nucleus of the lateral lemniscus (VLL). Collaterals of this glycinergic projection terminate in the ipsilateral lateral superior olive (LSO). Other glycinergic pathways found were afferent to the VLL and have their origin

  16. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    Science.gov (United States)

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex.
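
    Lateralizing a tone by an interaural time difference amounts to delaying one ear's copy of the waveform relative to the other. The following minimal sketch illustrates this; the sample rate, tone parameters, and function name are assumptions, not the study's stimulus code.

      import numpy as np

      def itd_tone(freq_hz, dur_s, itd_s, fs=48000):
          # returns a (samples, 2) stereo tone; positive itd_s delays the right ear
          n_delay = int(round(abs(itd_s) * fs))
          t = np.arange(int(dur_s * fs)) / fs
          s = np.sin(2 * np.pi * freq_hz * t)
          pad = np.zeros(n_delay)
          lead, lag = np.concatenate([s, pad]), np.concatenate([pad, s])
          left, right = (lead, lag) if itd_s >= 0 else (lag, lead)
          return np.stack([left, right], axis=1)

      a = itd_tone(500.0, 0.1, +62.5e-6)   # A tones lateralized to one side
      b = itd_tone(500.0, 0.1, -62.5e-6)   # B tones to the other: delta-ITD of 125 microseconds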

  17. Music Radio as a Format Remediated for the Stream-Based Music Use

    DEFF Research Database (Denmark)

    Ægidius, Andreas Lenander

    What do music radio and music streaming have in common? The curated flow of music. Radio is featured in the main section of the Spotify user interface. Apple employs radio hosts for its streaming service, Apple Music. Music streaming and music radio seem closely related. Even in their use... this theoretical contribution with reference to several empirical studies of everyday music streaming use and the fact that radio holds a significant position as both a stand-alone medium and as a contributing format within streaming music use. Why else does Spotify provide radio(s) and Apple Music likewise employ...

  18. Loss and persistence of implicit memory for sound: evidence from auditory stream segregation context effects.

    Science.gov (United States)

    Snyder, Joel S; Weintraub, David M

    2013-07-01

    An important question is the extent to which declines in memory over time are due to passive loss or active interference from other stimuli. The purpose of the present study was to determine the extent to which implicit memory effects in the perceptual organization of sound sequences are subject to loss and interference. Toward this aim, we took advantage of two recently discovered context effects in the perceptual judgments of sound patterns, one that depends on stimulus features of previous sounds and one that depends on the previous perceptual organization of these sounds. The experiments measured how listeners' perceptual organization of a tone sequence (test) was influenced by the frequency separation, or the perceptual organization, of the two preceding sequences (context1 and context2). The results demonstrated clear evidence for loss of context effects over time but little evidence for interference. However, they also revealed that context effects can be surprisingly persistent. The robust effects of loss, followed by persistence, were similar for the two types of context effects. We discuss whether the same auditory memories might contain information about basic stimulus features of sounds (i.e., frequency separation), as well as the perceptual organization of these sounds.

  19. Active versus passive listening to auditory streaming stimuli: a near-infrared spectroscopy study

    Science.gov (United States)

    Remijn, Gerard B.; Kojima, Haruyuki

    2010-05-01

    We use near-infrared spectroscopy (NIRS) to assess listeners' cortical responses to a 10-s series of pure tones separated in frequency. Listeners are instructed to either judge the rhythm of these "streaming" stimuli (active-response listening) or to listen to the stimuli passively. Experiment 1 shows that active-response listening causes increases in oxygenated hemoglobin (oxy-Hb) in response to all stimuli, generally over the (pre)motor cortices. The oxy-Hb increases are significantly larger over the right hemisphere than over the left for the final 5 s of the stimulus. Hemodynamic levels do not vary with changes in the frequency separation between the tones and corresponding changes in perceived rhythm ("gallop," "streaming," or "ambiguous"). Experiment 2 shows that hemodynamic levels are strongly influenced by listening mode. For the majority of time windows, active-response listening causes significantly larger oxy-Hb increases than passive listening, significantly over the left hemisphere during the stimulus and over both hemispheres after the stimulus. This difference cannot be attributed to physical motor activity and preparation related to button pressing after stimulus end, because this is required in both listening modes.

  20. Auditory working memory load impairs visual ventral stream processing: toward a unified model of attentional load.

    Science.gov (United States)

    Klemen, Jane; Büchel, Christian; Bühler, Mira; Menz, Mareike M; Rose, Michael

    2010-03-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences that disruptions to attention can have. According to the load theory of cognitive control, processing of task-irrelevant stimuli is increased by attending in parallel to a relevant task with high cognitive demands. This is due to the relevant task engaging cognitive control resources that are, hence, unavailable to inhibit the processing of task-irrelevant stimuli. However, it has also been demonstrated that a variety of types of load (perceptual and emotional) can result in a reduction of the processing of task-irrelevant stimuli, suggesting a uniform effect of increased load irrespective of the type of load. In the present study, we concurrently presented a relevant auditory matching task [n-back working memory (WM)] of low or high cognitive load (1-back or 2-back WM) and task-irrelevant images at one of three object visibility levels (0%, 50%, or 100%). fMRI activation during the processing of the task-irrelevant visual stimuli was measured in the lateral occipital cortex and found to be reduced under high, compared to low, WM load. In combination with previous findings, this result is suggestive of a more generalized load theory, whereby cognitive load, as well as other types of load (e.g., perceptual), can result in a reduction of the processing of task-irrelevant stimuli, in line with a uniform effect of increased load irrespective of the type of load.

  1. Auditory stream segregation with multi-tonal complexes in hearing-impaired listeners

    Science.gov (United States)

    Rogers, Deanna S.; Lentz, Jennifer J.

    2004-05-01

    The ability to segregate sounds into different streams was investigated in normally hearing and hearing-impaired listeners. Fusion and fission boundaries were measured using 6-tone complexes with tones equally spaced in log frequency. An ABA-ABA- sequence was used in which A represents a multitone complex ranging from either 250-1000 Hz (low-frequency region) or 1000-4000 Hz (high-frequency region). B also represents a multitone complex with the same log spacing as A. Multitonal complexes were 100 ms in duration with 20-ms ramps, and "-" represents a silent interval of 100 ms. To measure the fusion boundary, the first tone of the B stimulus was either 375 Hz (low) or 1500 Hz (high) and shifted downward in frequency with each progressive ABA triplet until the listener pressed a button indicating that a "galloping" rhythm was heard. When measuring the fission boundary, the first tone of the B stimulus was 252 or 1030 Hz and shifted upward with each triplet. Listeners then pressed a button when the "galloping" rhythm ended. Data suggest that hearing-impaired subjects have different fission and fusion boundaries than normal-hearing listeners. These data will be discussed in terms of both peripheral and central factors.

  2. Subglacial hydrology and the formation of ice streams.

    Science.gov (United States)

    Kyrke-Smith, T M; Katz, R F; Fowler, A C

    2014-01-08

    Antarctic ice streams are associated with pressurized subglacial meltwater but the role this water plays in the dynamics of the streams is not known. To address this, we present a model of subglacial water flow below ice sheets, and particularly below ice streams. The base-level flow is fed by subglacial melting and is presumed to take the form of a rough-bedded film, in which the ice is supported by larger clasts, but there is a millimetric water film which submerges the smaller particles. A model for the film is given by two coupled partial differential equations, representing mass conservation of water and ice closure. We assume that there is no sediment transport and solve for water film depth and effective pressure. This is coupled to a vertically integrated, higher order model for ice-sheet dynamics. If there is a sufficiently small amount of meltwater produced (e.g. if ice flux is low), the distributed film and ice sheet are stable, whereas for larger amounts of melt the ice-water system can become unstable, and ice streams form spontaneously as a consequence. We show that this can be explained in terms of a multi-valued sliding law, which arises from a simplified, one-dimensional analysis of the coupled model.

  3. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from the enhanced responses to its deviations. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study intends to investigate the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, where trains of different frequencies and different lengths are presented randomly. Source generators of repetition enhanced (RE) and suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95–150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230–270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, like the medial parietal cortex and frontal areas. The different timing and location of neural generators involved in RS and RE points to the existence of functionally separated mechanisms devoted to acoustic memory-trace formation in different auditory processing stages of the human brain.

  4. Ectopic external auditory canal and ossicular formation in the oculo-auriculo-vertebral spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Supakul, Nucharin [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ramathibodi Hospital, Mahidol University, Department of Diagnostic and Therapeutic Radiology, Bangkok (Thailand); Kralik, Stephen F. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ho, Chang Y. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Riley Children's Hospital, MRI Department, Indianapolis, IN (United States)]

    2015-07-15

    Ear abnormalities in the oculo-auriculo-vertebral spectrum commonly present with varying degrees of external and middle ear atresias, usually in the expected locations of the temporal bone and associated soft tissues, without ectopia of the external auditory canal. We present the unique imaging of a 4-year-old girl with right hemifacial microsomia and ectopic location of an atretic external auditory canal, terminating in a hypoplastic temporomandibular joint containing bony structures with the appearance of auditory ossicles. This finding suggests an early embryological dysfunction involving Meckel's cartilage of the first branchial arch. (orig.)

  5. FORMATION OF MASSIVE GALAXIES AT HIGH REDSHIFT: COLD STREAMS, CLUMPY DISKS, AND COMPACT SPHEROIDS

    International Nuclear Information System (INIS)

    Dekel, Avishai; Sari, Re'em; Ceverino, Daniel

    2009-01-01

    We present a simple theoretical framework for massive galaxies at high redshift, where the main assembly and star formation occurred, and report on the first cosmological simulations that reveal clumpy disks consistent with our analysis. The evolution is governed by the interplay between smooth and clumpy cold streams, disk instability, and bulge formation. Intense, relatively smooth streams maintain an unstable dense gas-rich disk. Instability with high turbulence and giant clumps, each a few percent of the disk mass, is self-regulated by gravitational interactions within the disk. The clumps migrate into a bulge in ∼ sun yr -1 , and each clump converts into stars in ∼0.5 Gyr. While the clumps coalesce dissipatively to a compact bulge, the star-forming disk is extended because the incoming streams keep the outer disk dense and susceptible to instability and because of angular momentum transport. Passive spheroid-dominated galaxies form when the streams are more clumpy: the external clumps merge into a massive bulge and stir up disk turbulence that stabilize the disk and suppress in situ clump and star formation. We predict a bimodality in galaxy type by z ∼ 3, involving giant-clump star-forming disks and spheroid-dominated galaxies of suppressed star formation. After z ∼ 1, the disks tend to be stabilized by the dominant stellar disks and bulges. Most of the high-z massive disks are likely to end up as today's early-type galaxies.

  6. STREAM

    DEFF Research Database (Denmark)

    Godsk, Mikkel

    This paper presents a flexible model, ‘STREAM’, for transforming higher science education into blended and online learning. The model is inspired by ideas of active and collaborative learning and builds on feedback strategies well-known from Just-in-Time Teaching, Flipped Classroom, and Peer...... Instruction. The aim of the model is to provide both a concrete and comprehensible design toolkit for adopting and implementing educational technologies in higher science teaching practice and at the same time comply with diverse ambitions. As opposed to the above-mentioned feedback strategies, the STREAM...... model supports a relatively diverse use of educational technologies and may also be used to transform teaching into completely online learning. So far both teachers and educational developers have positively received the model and the initial design experiences show promise....

  7. ROTATION OF THE MILKY WAY AND THE FORMATION OF THE MAGELLANIC STREAM

    International Nuclear Information System (INIS)

    Ruzicka, Adam; Theis, Christian; Palous, Jan

    2010-01-01

    We studied the impact of the revisited values for the local standard of rest (LSR) circular velocity of the Milky Way on the formation of the Magellanic Stream. The LSR circular velocity was varied within its observational uncertainties as a free parameter of the interaction between the Large (LMC) and the Small Magellanic Clouds (SMC) and the Galaxy. We have shown that the large-scale morphology and kinematics of the Magellanic Stream may be reproduced as tidal features, assuming the recent values of the proper motions of the Magellanic Clouds. Automated exploration of the entire parameter space for the interaction was performed to identify all parameter combinations that allow for modeling the Magellanic Stream. Satisfactory models exist for the dynamical mass of the Milky Way within a wide range of 0.6 × 10^12 to 3.0 × 10^12 M_sun and over the entire 1σ errors of the proper motions of the Clouds. However, the successful models share a common interaction scenario. The Magellanic Clouds are satellites of the Milky Way, and in all cases two close LMC-SMC encounters occurred within the last 4 Gyr at t < -2.5 Gyr and t ∼ -150 Myr, triggering the formation of the Stream and of the Magellanic Bridge, respectively. The latter encounter is encoded in the observed proper motions and inevitable in any model of the interaction. We conclude that the tidal origin of the Magellanic Stream implies the previously introduced LMC/SMC orbital history, unless the parameters of the interaction are revised substantially.

  8. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    Full Text Available In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While the taps applied to the same location were streamed over time, the taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory stimuli and tactile stimuli were not streamed over time, either. This observation suggests the limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  9. X-ray photoevaporation's limited success in the formation of planetesimals by the streaming instability

    Science.gov (United States)

    Ercolano, Barbara; Jennings, Jeff; Rosotti, Giovanni; Birnstiel, Tilman

    2017-12-01

    The streaming instability is often invoked as a solution to the fragmentation and drift barriers in planetesimal formation, catalysing the aggregation of dust on kyr time-scales to grow km-sized cores. However, there remains a lack of consensus on the physical mechanism(s) responsible for initiating it. One potential avenue is disc photoevaporation, wherein the preferential removal of relatively dust-free gas increases the disc metallicity. Late in the disc lifetime, photoevaporation dominates viscous accretion, creating a gradient in the depleted gas surface density near the location of the gap. This induces a local pressure maximum that collects drifting dust particles, which may then become susceptible to the streaming instability. Using a one-dimensional viscous evolution model of a disc subject to internal X-ray photoevaporation, we explore the efficacy of this process to build planetesimals. Over a range of parameters, we find that the amount of dust mass converted into planetesimals is often too small to build planetary cores. Our results are in contrast to a recent, similar investigation that considered a far-ultraviolet (FUV)-driven photoevaporation model and reported the formation of tens of M⊕ at large (>100 au) disc radii. The discrepancies are primarily a consequence of the different photoevaporation profiles assumed. Until observations more tightly constrain photoevaporation models, the relevance of this process to the formation of planets remains uncertain.

  10. Finding your mate at a cocktail party: frequency separation promotes auditory stream segregation of concurrent voices in multi-species frog choruses.

    Directory of Open Access Journals (Sweden)

    Vivek Nityananda

    Full Text Available Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is that of frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate "auditory streams" that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly more responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6-12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate
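
    For reference, a separation of ΔF semitones corresponds to a simple frequency ratio; this is the standard music-acoustics relation, not a result specific to this study:

      \[
        \frac{f_2}{f_1} = 2^{\Delta F / 12},
        \qquad \Delta F = 3 \ \text{semitones} \Rightarrow \frac{f_2}{f_1} \approx 1.19,
        \qquad \Delta F = 12 \ \text{semitones} \Rightarrow \frac{f_2}{f_1} = 2 .
      \]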

  11. Modeling wood dynamics, jam formation, and sediment storage in a gravel-bed stream

    Science.gov (United States)

    Eaton, B. C.; Hassan, M. A.; Davidson, S. L.

    2012-12-01

    In small and intermediate sized streams, the interaction between wood and bed material transport often determines the nature of the physical habitat, which in turn influences the health of the stream's ecosystem. We present a stochastic model that can be used to simulate the effects on physical habitat of forest fires, climate change, and other environmental disturbances that alter wood recruitment. The model predicts large wood (LW) loads in a stream as well as the volume of sediment stored by the wood; while it is parameterized to describe gravel bed streams similar to a well-studied field prototype, Fishtrap Creek, British Columbia, it can be calibrated to other systems as well. In the model, LW pieces are produced and modified over time as a result of random tree-fall, LW breakage, LW movement, and piece interaction to form LW jams. Each LW piece traps a portion of the annual bed material transport entering the reach and releases the stored sediment when the LW piece is entrained and moved. The equations governing sediment storage are based on a set of flume experiments also scaled to the field prototype. The model predicts wood loads ranging from 70 m3/ha to more than 300 m3/ha, with a mean value of 178 m3/ha: both the range and the mean value are consistent with field data from streams with similar riparian forest types and climate. The model also predicts an LW jam spacing that is consistent with field data. Furthermore, our modeling results demonstrate that the high spatial and temporal variability in sediment storage, sediment transport, and channel morphology associated with LW-dominated streams occurs only when LW pieces interact and form jams. Model runs that do not include jam formation are much less variable. These results suggest that river restoration efforts using engineered LW pieces that are fixed in place and not permitted to interact will be less successful at restoring the geomorphic processes responsible for producing diverse, productive
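
    The structure of such a stochastic wood-and-sediment budget can be conveyed with a toy simulation. The parameter values and distributions below are illustrative placeholders, not the calibrated model for Fishtrap Creek, and jam formation and breakage are omitted for brevity.

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate(years=200, recruit_mean=2.0, p_move=0.15,
                   trap_fraction=0.02, annual_load=100.0):
          # pieces holds the sediment volume currently stored behind each in-place LW piece
          pieces, wood_load, stored = [], [], []
          for _ in range(years):
              pieces.extend([0.0] * rng.poisson(recruit_mean))       # random tree-fall recruits wood
              pieces = [s for s in pieces if rng.random() > p_move]  # entrained pieces release sediment
              pieces = [s + trap_fraction * annual_load for s in pieces]  # remaining pieces trap input
              wood_load.append(len(pieces))
              stored.append(sum(pieces))
          return wood_load, stored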

  12. Hierarchical formation of dark matter halos and the free streaming scale

    International Nuclear Information System (INIS)

    Ishiyama, Tomoaki

    2014-01-01

    The smallest dark matter halos are formed first in the early universe. According to recent studies, the central density cusp is much steeper in these halos than in larger halos and scales as ρ ∝ r^–(1.5–1.3). We present the results of very large cosmological N-body simulations of the hierarchical formation and evolution of halos over a wide mass range, beginning from the formation of the smallest halos. We confirmed early studies that the inner density cusps are steeper in halos at the free streaming scale. The cusp slope gradually becomes shallower as the halo mass increases. The slope of halos 50 times more massive than the smallest halo is approximately –1.3. No strong correlation exists between the inner slope and the collapse epoch. The cusp slope of halos above the free streaming scale seems to be reduced primarily due to major merger processes. The concentration, estimated at the present universe, is predicted to be 60-70, consistent with theoretical models and earlier simulations, and ruling out simple power law mass-concentration relations. Microhalos could still exist in the present universe with the same steep density profiles.

  13. Formation of transformation products from wastewater-derived pharmaceuticals in an urban lowland stream

    Science.gov (United States)

    Jäger, A.; Posselt, M.; Schaper, J. L.; Lewandowski, J.

    2017-12-01

    Not only transport, but especially transformation of polar organic micropollutants in urban streams is of increasing concern for urban water management. While concentrations of pharmaceuticals might decrease down the river, concentrations of their more persistent metabolites potentially increase due to microbial transformation. The river Erpe, an urban lowland stream located in Berlin, Germany, receives high loads of treated waste water. A Lagrangian sampling scheme was applied to follow water parcels 4.7 km down the river using the diurnal fluctuations of electrical conductivity as an intrinsic conservative tracer. Each experiment comprised hourly sample collection over two days, accompanied by discharge measurements and continuous data logging of electrical conductivity. The fate of pharmaceuticals and their transformation products was compared between seasons (April and June) and before and after a stretch of the river had been cleared of macrophytes. The set of micropollutants was analysed by a newly developed direct injection-UHPLC-MS/MS method. The behaviour of individual micropollutants was compound-specific. Valsartan and metoprolol were attenuated by up to 18% of their original concentration. At the same time the transformation products valsartan acid and metoprolol acid increased in concentration by up to 24%. Their formation along the reach varied between seasons and was influenced by macrophyte removal. The findings indicate that the self-purification capacity of urban rivers is variable in time and sensitive to changes in the river's hydrological regime and emphasize the relevance of formation of transformation products in urban rivers.
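
    To make figures such as the 18% attenuation reported here concrete, the sketch below shows one common way Lagrangian tracer data are used: the downstream concentration is first corrected for dilution with a conservative tracer (here electrical conductivity) and then compared with the upstream value. All numbers are hypothetical and are not the study's measurements.

        # Percent attenuation of a compound along a reach, corrected for dilution
        # using a conservative tracer (electrical conductivity, EC). Values invented.
        def percent_attenuation(c_up, c_down, ec_up, ec_down):
            # Scale the downstream concentration by the tracer ratio so that pure
            # dilution (which lowers tracer and compound alike) is not counted as loss.
            c_down_corrected = c_down * (ec_up / ec_down)
            return 100.0 * (c_up - c_down_corrected) / c_up

        # Hypothetical example: 1.00 ug/L upstream, 0.85 ug/L 4.7 km downstream.
        print(round(percent_attenuation(1.00, 0.85, ec_up=980.0, ec_down=950.0), 1))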

  14. Single-channel in-ear-EEG detects the focus of auditory attention to concurrent tone streams and mixed speech

    Science.gov (United States)

    Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas

    2017-06-01

    Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information for complementing hearing aids and for installing a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participant’s attentional focus could be detected from the single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.
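
    A forward encoding model of the kind mentioned here is often implemented as a regularized linear mapping from time-lagged stimulus features to the EEG sample at a given channel. The sketch below shows such a ridge-regression estimate on synthetic data; the feature, number of lags, and regularization are illustrative assumptions, not the authors' implementation.

        import numpy as np

        # Sketch of a forward (encoding) model: predict a single-channel EEG response
        # as a lagged linear function of a stimulus feature (e.g., the speech envelope).
        # Shapes and data are synthetic; this is not the study's code.
        rng = np.random.default_rng(0)
        n_samples, n_lags, lam = 5000, 30, 1.0

        stimulus = rng.standard_normal(n_samples)   # stimulus feature time series
        eeg = rng.standard_normal(n_samples)        # single EEG channel

        # Design matrix of time-lagged copies of the stimulus.
        X = np.stack([np.roll(stimulus, lag) for lag in range(n_lags)], axis=1)
        X[:n_lags, :] = 0.0                         # discard wrap-around samples

        # Ridge regression: w = (X'X + lambda * I)^-1 X'y
        w = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)
        predicted = X @ w                           # predicted EEG response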

  15. RECENT STAR FORMATION IN THE LEADING ARM OF THE MAGELLANIC STREAM

    Energy Technology Data Exchange (ETDEWEB)

    Casetti-Dinescu, Dana I. [Department of Physics, Southern Connecticut State University, 501 Crescent Street, New Haven, CT 06515 (United States); Bidin, Christian Moni [Instituto de Astronomía, Universidad Católica del Norte, Avenue Angamos 0610, Antofagasta (Chile); Girard, Terrence M.; Van Altena, William F. [Astronomy Department, Yale University, 260 Whitney Avenue, New Haven, CT 06511 (United States); Méndez, Réne A. [Departmento de Astronomía, Universidad de Chile, Casilla 36-D, Santiago (Chile); Vieira, Katherine [Centro de Investigaciones de Astronomía, Apartado Postal 264, Mérida 5101-A (Venezuela, Bolivarian Republic of); Korchagin, Vladimir I., E-mail: casettid1@southernct.edu, E-mail: dana.casetti@yale.edu, E-mail: terry.girard@yale.edu, E-mail: william.vanaltena@yale.edu, E-mail: chr.moni.bidin@gmail.com, E-mail: ramendez.uchile@gmail.com, E-mail: kvieira@cida.ve, E-mail: vkorchagin@sfedu.ru [Institute of Physics, Southern Federal University, Stachki Street 124, 344090, Rostov-on-Don (Russian Federation)

    2014-04-01

    Strongly interacting galaxies undergo a short-lived but dramatic phase of evolution characterized by enhanced star formation, tidal tails, bridges, and other morphological peculiarities. The nearest example of a pair of interacting galaxies is the Magellanic Clouds, whose dynamical interaction produced the gaseous features known as the Magellanic Stream trailing the pair's orbit about the Galaxy, the bridge between the Clouds, and the leading arm (LA), a wide and irregular feature leading the orbit. Young, newly formed stars in the bridge are known to exist, giving witness to the recent interaction between the Clouds. However, the interaction of the Clouds with the Milky Way (MW) is less well understood. In particular, the LA must have a tidal origin; however, no purely gravitational model is able to reproduce its morphology and kinematics. A hydrodynamical interaction with the gaseous hot halo and disk of the Galaxy is plausible as suggested by some models and supporting neutral hydrogen (H I) observations. Here we show for the first time that young, recently formed stars exist in the LA, indicating that the interaction between the Clouds and our Galaxy is strong enough to trigger star formation in certain regions of the LA—regions in the outskirts of the MW disk (R ∼ 18 kpc), far away from the Clouds and the bridge.

  16. RECENT STAR FORMATION IN THE LEADING ARM OF THE MAGELLANIC STREAM

    International Nuclear Information System (INIS)

    Casetti-Dinescu, Dana I.; Bidin, Christian Moni; Girard, Terrence M.; Van Altena, William F.; Méndez, Réne A.; Vieira, Katherine; Korchagin, Vladimir I.

    2014-01-01

    Strongly interacting galaxies undergo a short-lived but dramatic phase of evolution characterized by enhanced star formation, tidal tails, bridges, and other morphological peculiarities. The nearest example of a pair of interacting galaxies is the Magellanic Clouds, whose dynamical interaction produced the gaseous features known as the Magellanic Stream trailing the pair's orbit about the Galaxy, the bridge between the Clouds, and the leading arm (LA), a wide and irregular feature leading the orbit. Young, newly formed stars in the bridge are known to exist, giving witness to the recent interaction between the Clouds. However, the interaction of the Clouds with the Milky Way (MW) is less well understood. In particular, the LA must have a tidal origin; however, no purely gravitational model is able to reproduce its morphology and kinematics. A hydrodynamical interaction with the gaseous hot halo and disk of the Galaxy is plausible as suggested by some models and supporting neutral hydrogen (H I) observations. Here we show for the first time that young, recently formed stars exist in the LA, indicating that the interaction between the Clouds and our Galaxy is strong enough to trigger star formation in certain regions of the LA—regions in the outskirts of the MW disk (R ∼ 18 kpc), far away from the Clouds and the bridge

  17. Shock formation and structure in magnetic reconnection with a streaming flow.

    Science.gov (United States)

    Wu, Liangneng; Ma, Zhiwei; Zhang, Haowei

    2017-08-18

    The features of magnetic reconnection with a streaming flow have been investigated on the basis of a compressible resistive magnetohydrodynamic (MHD) model. The super-Alfvénic streaming flow largely enhances magnetic reconnection. The maximum reconnection rate is almost four times larger with super-Alfvénic streaming flow than with sub-Alfvénic streaming flow. In the nonlinear stage, a pair of shocks is observed in the inflow region; these are slow shocks for sub-Alfvénic streaming flow and fast shocks for super-Alfvénic streaming flow. The quasi-periodic oscillation of reconnection rates in the decaying phase for super-Alfvénic streaming flow results from the different drifting velocities of the shock and the X point.

  18. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  19. Multi-format music use at the intersection of downloading and streaming practices

    DEFF Research Database (Denmark)

    Ægidius, Andreas Lenander

    2017-01-01

    This paper will investigate the restructuring of digital online music use related to the remediation of the music download as a music stream. The paper draws on the empirical findings of my PhD-study based on qualitative interviews with young listeners (n16), professional musicians (n10) and distributors from Spotify, TDC Play, Tidal, and 24/7 Entertainment (n4). Interviewing three different social groups (n30 total) represents a unique approach with which to answer the question how music files are understood and used in the intersection between download-based and stream-based music practices ... in accordance with the licensed agreements between the music industry and the IT-industry. The listeners seem unaffected by the technological changes and even the less tech-savvy listeners can easily stream-rip to produce music files from stream, which they then re-commodify by adjusting the metadata inscribed...

  20. Formation of magnetized plasma stream in the CTCC-I experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ikegami, K.; Ozaki, A.; Uyama, T.; Satomi, N.; Watanabe, K. (Osaka Univ., Suita (Japan). Faculty of Engineering)

    1981-10-01

    Magnetized plasma stream with the kinetic energy of more than 500 eV was produced successfully using a coaxial plasma gun with the subsidiary coils for providing the radial magnetic field at its muzzle. It was injected into the drift tube and the characteristics were investigated experimentally using the streak photographs, magnetic probes and flux loops. It was confirmed that this plasma stream had really both toroidal and poloidal magnetic fields.

  1. Formation of magnetized plasma stream in the CTCC-I experiment

    International Nuclear Information System (INIS)

    Ikegami, Kazunori; Ozaki, Atsuhiko; Uyama, Tadao; Satomi, Norio; Watanabe, Kenji

    1981-01-01

    Magnetized plasma stream with the kinetic energy of more than 500 eV was produced successfully using a coaxial plasma gun with the subsidiary coils for providing the radial magnetic field at its muzzle. It was injected into the drift tube and the characteristics were investigated experimentally using the streak photographs, magnetic probes and flux loops. It was confirmed that this plasma stream had really both toroidal and poloidal magnetic fields. (author)

  2. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  3. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  4. Ultrashort electromagnetic clusters formation by two-stream superheterodyne free electron lasers

    DEFF Research Database (Denmark)

    Kulish, Viktor V.; Lysenko, Alexander V.; Volk, Iurii I.

    2016-01-01

    A cubic nonlinear self-consistent theory of multiharmonic two-stream superheterodyne free electron lasers (TSFEL) of a klystron type, intended to form powerful ultrashort clusters of an electromagnetic field is constructed. Plural three-wave parametric resonant interactions of wave harmonics have...... been taken into account. An amplitude, phase and spectral analyses of the processes occurring in such devices have been carried out. The conditions necessary for the forming of the ultrashort clusters of an electromagnetic field have been found out. The possibility of the ultrashort electromagnetic...

  5. Auditory short-term memory trace formation for nonspeech and speech in SLI and dyslexia as indexed by the N100 and mismatch negativity electrophysiological responses.

    Science.gov (United States)

    Tuomainen, Outi T

    2015-04-15

    This study investigates nonspeech and speech processing in specific language impairment (SLI) and dyslexia. We used a passive mismatch negativity (MMN) task to tap automatic brain responses and an active behavioural task to tap attended discrimination of nonspeech and speech sounds. Using the roving standard MMN paradigm, we varied the number of standards ('few' vs. 'many') to investigate the effect of sound repetition on N100 and MMN responses. The results revealed that the SLI group needed more repetitions than dyslexics and controls to create a strong enough sensory trace to elicit MMN. In contrast, in the behavioural task, we observed good discrimination of speech and nonspeech in all groups. The findings indicate that auditory processing deficits in SLI and dyslexia are dissociable and that memory trace formation may be implicated in SLI.

  6. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Rapid-rate transcranial magnetic stimulation of animal auditory cortex impairs short-term but not long-term memory formation.

    Science.gov (United States)

    Wang, Hong; Wang, Xu; Wetzel, Wolfram; Scheich, Henning

    2006-04-01

    Bilateral rapid-rate transcranial magnetic stimulation (rTMS) of gerbil auditory cortex with a miniature coil device was used to study short-term and long-term effects on discrimination learning of frequency-modulated tones. We found previously that directional discrimination of frequency modulation (rising vs. falling) relies on auditory cortex processing and that formation of its memory depends on local protein synthesis. Here we show that, during training over 5 days, certain rTMS regimes contingent on training had differential effects on the time course of learning. When rTMS was applied several times per day, i.e. four blocks of 5 min rTMS each followed 5 min later by a 3-min training block and 15-min intervals between these blocks (experiment A), animals reached a high discrimination performance more slowly over 5 days than did controls. When rTMS preceded only the first two of four training blocks (experiment B), or when prolonged rTMS (20 min) preceded only the first block, or when blocks of experiment A had longer intervals (experiments C and D), no significant day-to-day effects were found. However, in experiment A, and to some extent in experiment B, rTMS reduced the within-session discrimination performance. Nevertheless the animals learned, as demonstrated by a higher performance the next day. Thus, our results indicate that rTMS treatments accumulate over a day but not strongly over successive days. We suggest that rTMS of sensory cortex, as used in our study, affects short-term memory but not long-term memory formation.

  8. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory

  9. Auditory Neuropathy

    Science.gov (United States)

    ... children and adults with auditory neuropathy. Cochlear implants (electronic devices that compensate for damaged or nonworking parts ...

  10. Supersonic gas streams enhance the formation of massive black holes in the early universe.

    Science.gov (United States)

    Hirano, Shingo; Hosokawa, Takashi; Yoshida, Naoki; Kuiper, Rolf

    2017-09-29

    The origin of super-massive black holes in the early universe remains poorly understood. Gravitational collapse of a massive primordial gas cloud is a promising initial process, but theoretical studies have difficulty growing the black hole fast enough. We report numerical simulations of early black hole formation starting from realistic cosmological conditions. Supersonic gas motions left over from the Big Bang prevent early gas cloud formation until rapid gas condensation is triggered in a protogalactic halo. A protostar is formed in the dense, turbulent gas cloud, and it grows by sporadic mass accretion until it acquires 34,000 solar masses. The massive star ends its life with a catastrophic collapse to leave a black hole, a promising seed for the formation of a monstrous black hole. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  11. Auditory hallucinations.

    Science.gov (United States)

    Blom, Jan Dirk

    2015-01-01

    Auditory hallucinations constitute a phenomenologically rich group of endogenously mediated percepts which are associated with psychiatric, neurologic, otologic, and other medical conditions, but which are also experienced by 10-15% of all healthy individuals in the general population. The group of phenomena is probably best known for its verbal auditory subtype, but it also includes musical hallucinations, echo of reading, exploding-head syndrome, and many other types. The subgroup of verbal auditory hallucinations has been studied extensively with the aid of neuroimaging techniques, and from those studies emerges an outline of a functional as well as a structural network of widely distributed brain areas involved in their mediation. The present chapter provides an overview of the various types of auditory hallucination described in the literature, summarizes our current knowledge of the auditory networks involved in their mediation, and draws on ideas from the philosophy of science and network science to reconceptualize the auditory hallucinatory experience, and point out directions for future research into its neurobiologic substrates. In addition, it provides an overview of known associations with various clinical conditions and of the existing evidence for pharmacologic and non-pharmacologic treatments. © 2015 Elsevier B.V. All rights reserved.

  12. Highly efficient star formation in NGC 5253 possibly from stream-fed accretion.

    Science.gov (United States)

    Turner, J L; Beck, S C; Benford, D J; Consiglio, S M; Ho, P T P; Kovács, A; Meier, D S; Zhao, J-H

    2015-03-19

    Gas clouds in present-day galaxies are inefficient at forming stars. Low star-formation efficiency is a critical parameter in galaxy evolution: it is why stars are still forming nearly 14 billion years after the Big Bang and why star clusters generally do not survive their births, instead dispersing to form galactic disks or bulges. Yet the existence of ancient massive bound star clusters (globular clusters) in the Milky Way suggests that efficiencies were higher when they formed ten billion years ago. A local dwarf galaxy, NGC 5253, has a young star cluster that provides an example of highly efficient star formation. Here we report the detection of the J = 3→2 rotational transition of CO at the location of the massive cluster. The gas cloud is hot, dense, quiescent and extremely dusty. Its gas-to-dust ratio is lower than the Galactic value, which we attribute to dust enrichment by the embedded star cluster. Its star-formation efficiency exceeds 50 per cent, tenfold that of clouds in the Milky Way. We suggest that high efficiency results from the force-feeding of star formation by a streamer of gas falling into the galaxy.

  13. Post-spike hyperpolarization participates in the formation of auditory behavior-related response patterns of inferior collicular neurons in Hipposideros pratti.

    Science.gov (United States)

    Li, Y-L; Fu, Z-Y; Yang, M-J; Wang, J; Peng, K; Yang, L-J; Tang, J; Chen, Q-C

    2015-03-19

    To probe the mechanism underlying the auditory behavior-related response patterns of inferior collicular neurons to constant frequency-frequency modulation (CF-FM) stimulus in Hipposideros pratti, we studied the role of post-spike hyperpolarization (PSH) in the formation of response patterns. Neurons obtained by in vivo extracellular (N=145) and intracellular (N=171) recordings could be consistently classified into single-on (SO) and double-on (DO) neurons. Using intracellular recording, we found that both SO and DO neurons have a PSH with different durations. Statistical analysis showed that most SO neurons had a longer PSH duration than DO neurons (p<0.01). These data suggested that the PSH directly participated in the formation of SO and DO neurons, and the PSH elicited by the CF component was the main synaptic mechanism underlying the SO and DO response patterns. The possible biological significance of these findings relevant to bat echolocation is discussed. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  14. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  15. Dual-stream modulation failure: a novel hypothesis for the formation and maintenance of delusions in schizophrenia.

    Science.gov (United States)

    Speechley, William J; Ngan, Elton T C

    2008-01-01

    Delusions, a cardinal feature of schizophrenia, are characterized by the development and preservation of false beliefs despite reason and evidence to the contrary. A number of cognitive models have made important contributions to our understanding of delusions, though it remains unclear which core cognitive processes are malfunctioning to enable individuals with delusions to form and maintain erroneous beliefs. We propose a modified dual-stream processing model that provides a viable and testable mechanism that can account for this debilitating symptom. Dual-stream models divide decision-making into two streams: a fast, intuitive and automatic form of processing (Stream 1); and a slower, conscious and deliberative process (Stream 2). Our novel model proposes two key influences on the way these streams interact in everyday decision-making: conflict and emotion. Conflict: in most decision-making scenarios one obvious answer presents itself and the two streams converge onto the same conclusion. However, in instances where there are competing alternative possibilities, an individual often experiences dissonance, or a sense of conflict. The detection of this conflict biases processing towards the more deliberative Stream 2. Emotion: highly emotional states can result in behavior that is reflexive and action-oriented. This may be due to the power of emotionally valenced stimuli to bias reasoning towards Stream 1. We propose that in schizophrenia, an abnormal response to these two influences results in a pathological schism between Stream 1 and Stream 2, enabling erroneous intuitive explanations to coexist with contrary logical explanations of the same event. Specifically, we suggest that delusions are the result of a failure to reconcile the two streams due to both a failure of conflict to bias decision-making towards Stream 2 and an accentuated emotional bias towards Stream 1.

  16. Auditory motion-specific mechanisms in the primate brain.

    Directory of Open Access Journals (Sweden)

    Colline Poirier

    2017-05-01

    Full Text Available This work examined the mechanisms underlying auditory motion processing in the auditory cortex of awake monkeys using functional magnetic resonance imaging (fMRI). We tested to what extent auditory motion analysis can be explained by the linear combination of static spatial mechanisms, spectrotemporal processes, and their interaction. We found that the posterior auditory cortex, including A1 and the surrounding caudal belt and parabelt, is involved in auditory motion analysis. Static spatial and spectrotemporal processes were able to fully explain motion-induced activation in most parts of the auditory cortex, including A1, but not in circumscribed regions of the posterior belt and parabelt cortex. We show that in these regions motion-specific processes contribute to the activation, providing the first demonstration that auditory motion is not simply deduced from changes in static spatial location. These results demonstrate that parallel mechanisms for motion and static spatial analysis coexist within the auditory dorsal stream.

  17. Diminished auditory sensory gating during active auditory verbal hallucinations.

    Science.gov (United States)

    Thoma, Robert J; Meier, Andrew; Houck, Jon; Clark, Vincent P; Lewine, Jeffrey D; Turner, Jessica; Calhoun, Vince; Stephen, Julia

    2017-10-01

    Auditory sensory gating, assessed in a paired-click paradigm, indicates the extent to which incoming stimuli are filtered, or "gated", in auditory cortex. Gating is typically computed as the ratio of the peak amplitude of the event related potential (ERP) to a second click (S2) divided by the peak amplitude of the ERP to a first click (S1). Higher gating ratios are purportedly indicative of incomplete suppression of S2 and considered to represent sensory processing dysfunction. In schizophrenia, hallucination severity is positively correlated with gating ratios, and it was hypothesized that a failure of sensory control processes early in auditory sensation (gating) may represent a larger system failure within the auditory data stream; resulting in auditory verbal hallucinations (AVH). EEG data were collected while patients (N=12) with treatment-resistant AVH pressed a button to indicate the beginning (AVH-on) and end (AVH-off) of each AVH during a paired click protocol. For each participant, separate gating ratios were computed for the P50, N100, and P200 components for each of the AVH-off and AVH-on states. AVH trait severity was assessed using the Psychotic Symptoms Rating Scales AVH Total score (PSYRATS). The results of a mixed model ANOVA revealed an overall effect for AVH state, such that gating ratios were significantly higher during the AVH-on state than during AVH-off for all three components. PSYRATS score was significantly and negatively correlated with N100 gating ratio only in the AVH-off state. These findings link onset of AVH with a failure of an empirically-defined auditory inhibition system, auditory sensory gating, and pave the way for a sensory gating model of AVH. Copyright © 2017 Elsevier B.V. All rights reserved.
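
    Because the gating ratio is defined explicitly above (the S2 peak amplitude divided by the S1 peak amplitude), a minimal computation is easy to write down; the amplitudes used below are invented for illustration and are not the study's values.

        # Auditory sensory gating ratio: peak amplitude of the response to the second
        # click (S2) divided by the peak amplitude of the response to the first click (S1).
        def gating_ratio(s1_peak_uv: float, s2_peak_uv: float) -> float:
            return s2_peak_uv / s1_peak_uv

        # Hypothetical P50 peaks during AVH-off and AVH-on states (values invented).
        print(gating_ratio(s1_peak_uv=3.2, s2_peak_uv=1.1))   # lower ratio = stronger gating
        print(gating_ratio(s1_peak_uv=3.0, s2_peak_uv=2.4))   # higher ratio = weaker gating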

  18. Multichannel auditory search: toward understanding control processes in polychotic auditory listening.

    Science.gov (United States)

    Lee, M D

    2001-01-01

    Two experiments are presented that serve as a framework for exploring auditory information processing. The framework is referred to as polychotic listening or auditory search, and it requires a listener to scan multiple simultaneous auditory streams for the appearance of a target word (the name of a letter such as A or M). Participants' ability to scan between two and six simultaneous auditory streams of letter and digit names for the name of a target letter was examined using six loudspeakers. The main independent variable was auditory load, or the number of active audio streams on a given trial. The primary dependent variables were target localization accuracy and reaction time. Results showed that as load increased, performance decreased. The performance decrease was evident in reaction time, accuracy, and sensitivity measures. The second study required participants to practice the same task for 10 sessions, for a total of 1800 trials. Results indicated that even with extensive practice, performance was still affected by auditory load. The present results are compared with findings in the visual search literature. The implications for the use of multiple auditory displays are discussed. Potential applications include cockpit and automobile warning displays, virtual reality systems, and training systems.

  19. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory

    Directory of Open Access Journals (Sweden)

    Paul Miller

    2010-06-01

    Full Text Available Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells, does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.
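
    The associative learning rule named in this abstract, spike-timing dependent plasticity, is commonly written as an exponentially weighted function of the difference between post- and presynaptic spike times. The sketch below implements that textbook pair-based form; the amplitudes and time constants are arbitrary illustrations, not the parameters of the published model.

        import math

        # Textbook pair-based STDP rule: a synapse is potentiated when the presynaptic
        # spike precedes the postsynaptic spike, and depressed when it follows it,
        # with exponential time windows. Parameters are illustrative assumptions.
        A_PLUS, A_MINUS = 0.01, 0.012
        TAU_PLUS_MS, TAU_MINUS_MS = 20.0, 20.0

        def stdp_update(delta_t_ms: float) -> float:
            """Weight change for a spike-time difference t_post - t_pre (ms)."""
            if delta_t_ms > 0:     # pre before post: potentiation
                return A_PLUS * math.exp(-delta_t_ms / TAU_PLUS_MS)
            return -A_MINUS * math.exp(delta_t_ms / TAU_MINUS_MS)   # depression

        for dt in (-40, -10, 10, 40):
            print(dt, round(stdp_update(dt), 5))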

  20. Objective assessment of stream segregation abilities of CI users as a function of electrode separation

    DEFF Research Database (Denmark)

    Paredes Gallardo, Andreu; Madsen, Sara Miay Kim; Dau, Torsten

    Auditory streaming is a perceptual process by which the human auditory system organizes sounds from different sources into perceptually meaningful elements. Segregation of sound sources is important, among other things, for understanding speech in noisy environments, which is especially challenging

  1. Dynamics of auditory working memory

    Directory of Open Access Journals (Sweden)

    Jochen Kaiser

    2015-05-01

    Full Text Available Working memory denotes the ability to retain stimuli in mind that are no longer physically present and to perform mental operations on them. Electro- and magnetoencephalography allow investigating the short-term maintenance of acoustic stimuli at a high temporal resolution. Studies investigating working memory for non-spatial and spatial auditory information have suggested differential roles of regions along the putative auditory ventral and dorsal streams, respectively, in the processing of the different sound properties. Analyses of event-related potentials have shown sustained, memory load-dependent deflections over the retention periods. The topography of these waves suggested an involvement of modality-specific sensory storage regions. Spectral analysis has yielded information about the temporal dynamics of auditory working memory processing of individual stimuli, showing activation peaks during the delay phase whose timing was related to task performance. Coherence at different frequencies was enhanced between frontal and sensory cortex. In summary, auditory working memory seems to rely on the dynamic interplay between frontal executive systems and sensory representation regions.

  2. The spectrotemporal filter mechanism of auditory selective attention

    Science.gov (United States)

    Lakatos, Peter; Musacchia, Gabriella; O’Connell, Monica N.; Falchier, Arnaud Y.; Javitt, Daniel C.; Schroeder, Charles E.

    2013-01-01

    SUMMARY While we have convincing evidence that attention to auditory stimuli modulates neuronal responses at or before the level of primary auditory cortex (A1), the underlying physiological mechanisms are unknown. We found that attending to rhythmic auditory streams resulted in the entrainment of ongoing oscillatory activity reflecting rhythmic excitability fluctuations in A1. Strikingly, while the rhythm of the entrained oscillations in A1 neuronal ensembles reflected the temporal structure of the attended stream, the phase depended on the attended frequency content. Counter-phase entrainment across differently tuned A1 regions resulted in both the amplification and sharpening of responses at attended time points, in essence acting as a spectrotemporal filter mechanism. Our data suggest that selective attention generates a dynamically evolving model of attended auditory stimulus streams in the form of modulatory subthreshold oscillations across tonotopically organized neuronal ensembles in A1 that enhances the representation of attended stimuli. PMID:23439126

  3. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Full Text Available Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
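
    At the signal level, the frequency-tagging logic described above amounts to measuring response power at each stream's modulation frequency. A minimal spectral estimate of that kind on synthetic data is sketched below; the sampling rate, modulation rates, and amplitudes are invented for illustration.

        import numpy as np

        # Minimal frequency-tagging analysis: power of a synthetic "neural" signal at
        # the modulation frequencies of two competing streams. All parameters assumed.
        fs, dur = 1000.0, 10.0                      # sampling rate (Hz), duration (s)
        f_attended, f_ignored = 37.0, 43.0          # tagging (modulation) frequencies (Hz)
        t = np.arange(0, dur, 1.0 / fs)

        rng = np.random.default_rng(1)
        signal = (1.0 * np.sin(2 * np.pi * f_attended * t)    # stronger locking to attended
                  + 0.3 * np.sin(2 * np.pi * f_ignored * t)   # weaker locking to ignored
                  + rng.standard_normal(t.size))

        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
        for f in (f_attended, f_ignored):
            print(f, spectrum[np.argmin(np.abs(freqs - f))])  # power at each tag frequency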

  4. Dynamics of dissolved organic matter during four storm events in two forest streams: source, export, and implications for harmful disinfection byproduct formation.

    Science.gov (United States)

    Yang, Liyang; Hur, Jin; Lee, Sonmin; Chang, Soon-Woong; Shin, Hyun-Sang

    2015-06-01

    Dynamics of river dissolved organic matter (DOM) during storm events have profound influences on the downstream aquatic ecosystem and drinking water safety. This study investigated temporal variations in DOM during four storm events in two forest headwater streams (the EH and JH brooks, South Korea) and the impacts on disinfection byproduct (DBP) formation potential. The within-event variations of most DOM quantity parameters were similar to the flow rate in the EH but not in the larger JH brook. The dissolved organic carbon (DOC) showed clockwise and counterclockwise hysteresis with the flow rate in the EH and JH brooks, respectively, indicating the importance of both flow path and DOM source pool size in determining the effects of storm events. The stream DOM became less aromatic/humified from the first to the last event in both brooks, probably due to the increasing fresh plant pool and the decreasing leaf litter pool during the course of the rainy season. The DOC export during each event increased 1.3-2.7- and 1.1-7.0-fold by stormflows in the EH and JH brooks, respectively. The leaf litter and soil together were the major DOM source, particularly during early events. The enhanced DOM export probably increases the risk of DBP formation during disinfection, as indicated by a strong correlation observed between DOC and trihalomethanes formation potential (THMFP). High correlations between two humic-like fluorescent components and THMFP further suggested the potential of assessing THMFP with in situ fluorescence sensors during storms.

  5. Influence of ultrasound power on acoustic streaming and micro-bubbles formations in a low frequency sono-reactor: mathematical and 3D computational simulation.

    Science.gov (United States)

    Sajjadi, Baharak; Raman, Abdul Aziz Abdul; Ibrahim, Shaliza

    2015-05-01

    This paper aims at investigating the influence of ultrasound power amplitude on liquid behaviour in a low-frequency (24 kHz) sono-reactor. Three types of analysis were employed: (i) mechanical analysis of micro-bubble formation and their activities/characteristics using mathematical modelling; (ii) numerical analysis of acoustic streaming, fluid flow pattern, volume fraction of micro-bubbles and turbulence using 3D CFD simulation; and (iii) practical analysis of fluid flow pattern and acoustic streaming under ultrasound irradiation using Particle Image Velocimetry (PIV). In the mathematical modelling, a lone micro-bubble generated under power ultrasound irradiation was mechanistically analysed. Its characteristics were illustrated as a function of bubble radius, internal temperature and pressure (hot spot conditions) and oscillation (pulsation) velocity. The results showed that ultrasound power significantly affected the conditions of hotspots and bubble oscillation velocity. From the CFD results, it was observed that the total volume of the micro-bubbles increased by about 4.95% with each 100 W increase in power amplitude. Furthermore, the velocity of acoustic streaming increased from 29 to 119 cm/s as power increased, which was in good agreement with the PIV analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
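
    The single-bubble mechanical analysis referred to above is typically based on a Rayleigh-Plesset-type equation for the bubble radius R(t); one standard form is given below for reference (the abstract does not state which variant the authors used, so this is an assumption).

        % Rayleigh-Plesset equation for a single bubble of radius R(t) in a liquid
        \rho_L \left( R \ddot{R} + \tfrac{3}{2} \dot{R}^{2} \right)
            = p_B(t) - p_\infty(t) - \frac{4 \mu \dot{R}}{R} - \frac{2 \sigma}{R}

    where ρ_L is the liquid density, μ its viscosity, σ the surface tension, p_B the pressure inside the bubble, and p_∞ the far-field (acoustic driving) pressure.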

  6. Influence of two-stream relativistic electron beam parameters on the space-charge wave with broad frequency spectrum formation

    Science.gov (United States)

    Alexander, LYSENKO; Iurii, VOLK

    2018-03-01

    We developed a cubic non-linear theory describing the dynamics of the multiharmonic space-charge wave (SCW), with harmonic frequencies smaller than the two-stream instability critical frequency, with different relativistic electron beam (REB) parameters. The self-consistent differential equation system for multiharmonic SCW harmonic amplitudes was elaborated in a cubic non-linear approximation. This system considers plural three-wave parametric resonant interactions between wave harmonics and the two-stream instability effect. Different REB parameters such as the input angle with respect to the focusing magnetic field, the average relativistic factor value, difference of partial relativistic factors, and plasma frequency of partial beams were investigated regarding their influence on the frequency spectrum width and multiharmonic SCW saturation levels. We suggested ways in which the multiharmonic SCW frequency spectrum widths could be increased in order to use them in multiharmonic two-stream superheterodyne free-electron lasers, with the main purpose of forming a powerful multiharmonic electromagnetic wave.

  7. Auditory Perspective Taking

    National Research Council Canada - National Science Library

    Martinson, Eric; Brock, Derek

    2006-01-01

    .... From this knowledge of another's auditory perspective, a conversational partner can then adapt his or her auditory output to overcome a variety of environmental challenges and insure that what is said is intelligible...

  8. DWARFS GOBBLING DWARFS: A STELLAR TIDAL STREAM AROUND NGC 4449 AND HIERARCHICAL GALAXY FORMATION ON SMALL SCALES

    International Nuclear Information System (INIS)

    Martínez-Delgado, David; Rix, Hans-Walter; Macciò, Andrea V.; Romanowsky, Aaron J.; Arnold, Jacob A.; Brodie, Jean P.; Jay Gabany, R.; Annibali, Francesca; Fliri, Jürgen; Zibetti, Stefano; Van der Marel, Roeland P.; Aloisi, Alessandra; Chonis, Taylor S.; Carballo-Bello, Julio A.; Gallego-Laborda, J.; Merrifield, Michael R.

    2012-01-01

    A candidate diffuse stellar substructure was previously reported in the halo of the nearby dwarf starburst galaxy NGC 4449 by Karachentsev et al. We map and analyze this feature using a unique combination of deep integrated-light images from the BlackBird 0.5 m telescope, and high-resolution wide-field images from the 8 m Subaru Telescope, which resolve the nebulosity into a stream of red giant branch stars, and confirm its physical association with NGC 4449. The properties of the stream imply a massive dwarf spheroidal progenitor, which after complete disruption will deposit an amount of stellar mass that is comparable to the existing stellar halo of the main galaxy. The stellar mass ratio between the two galaxies is ∼1:50, while the indirectly measured dynamical mass ratio, when including dark matter, may be ∼1:10-1:5. This system may thus represent a 'stealth' merger, where an infalling satellite galaxy is nearly undetectable by conventional means, yet has a substantial dynamical influence on its host galaxy. This singular discovery also suggests that satellite accretion can play a significant role in building up the stellar halos of low-mass galaxies, and possibly in triggering their starbursts.

  9. Streams with Strahler Stream Order

    Data.gov (United States)

    Minnesota Department of Natural Resources — Stream segments with Strahler stream order values assigned. As of 01/08/08 the linework is from the DNR24K stream coverages and will not match the updated...
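
    For readers unfamiliar with the attribute assigned in this dataset, Strahler order follows a simple recursive rule: a headwater segment has order 1; where two tributaries of equal order join, the order increases by one; otherwise the larger order carries downstream. The sketch below applies that rule to a small invented network (the example tree is not the DNR24K data).

        # Strahler stream order for a stream network represented as a tree:
        # each downstream segment lists its upstream tributaries. Example data invented.
        network = {
            "outlet": ["trib_a", "trib_b"],
            "trib_a": ["head_1", "head_2"],
            "trib_b": ["head_3"],
            "head_1": [], "head_2": [], "head_3": [],
        }

        def strahler(segment: str) -> int:
            upstream = network[segment]
            if not upstream:                       # headwater segment
                return 1
            orders = sorted((strahler(s) for s in upstream), reverse=True)
            # order increases only when the two highest tributary orders are equal
            if len(orders) > 1 and orders[0] == orders[1]:
                return orders[0] + 1
            return orders[0]

        print({seg: strahler(seg) for seg in network})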

  10. Neurophysiological evidence for context-dependent encoding of sensory input in human auditory cortex.

    Science.gov (United States)

    Sussman, Elyse; Steinschneider, Mitchell

    2006-02-23

    Attention biases the way in which sound information is stored in auditory memory. Little is known, however, about the contribution of stimulus-driven processes in forming and storing coherent sound events. An electrophysiological index of cortical auditory change detection (mismatch negativity [MMN]) was used to assess whether sensory memory representations could be biased toward one organization over another (one or two auditory streams) without attentional control. Results revealed that sound representations held in sensory memory biased the organization of subsequent auditory input. The results demonstrate that context-dependent sound representations modulate stimulus-dependent neural encoding at early stages of auditory cortical processing.

  11. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Full Text Available Sequences of higher frequency A and lower frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant a larger fraction of time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
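
    The competition mechanism named here (mutual inhibition plus slow adaptation and noise) can be illustrated with a generic two-population rate model of perceptual rivalry. The sketch below is such a generic model with arbitrary parameters; it is not the authors' network and omits the NMDA-mediated temporal linkage across triplets.

        import math, random

        # Generic two-population rivalry model: mutual inhibition, slow adaptation, noise.
        # All parameters are arbitrary illustrations, not the published model's values.
        def f(x):                                  # firing-rate nonlinearity
            return 1.0 / (1.0 + math.exp(-10.0 * (x - 0.2)))

        dt, beta, tau_a, phi, noise = 1e-3, 1.1, 2.0, 0.4, 0.03
        r1 = r2 = a1 = a2 = 0.0
        dominant = []
        for step in range(int(60 / dt)):           # simulate 60 s
            i1 = 0.6 - beta * r2 - phi * a1 + noise * random.gauss(0, 1)
            i2 = 0.6 - beta * r1 - phi * a2 + noise * random.gauss(0, 1)
            r1 += dt / 0.01 * (-r1 + f(i1))        # fast rate dynamics
            r2 += dt / 0.01 * (-r2 + f(i2))
            a1 += dt / tau_a * (-a1 + r1)          # slow adaptation
            a2 += dt / tau_a * (-a2 + r2)
            dominant.append(1 if r1 > r2 else 2)   # which percept currently dominates

        switches = sum(1 for a, b in zip(dominant, dominant[1:]) if a != b)
        print("perceptual switches in 60 s:", switches)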

  12. Late Quaternary stream piracy and strath terrace formation along the Belle Fourche and lower Cheyenne Rivers, South Dakota and Wyoming

    Science.gov (United States)

    Stamm, John F.; Hendricks, Robert R.; Sawyer, J. Foster; Mahan, Shannon A.; Zaprowski, Brent J.; Geibel, Nicholas M.; Azzolini, David C.

    2013-09-01

    Stream piracy substantially affected the geomorphic evolution of the Missouri River watershed and drainages within, including the Little Missouri, Cheyenne, Belle Fourche, Bad, and White Rivers. The ancestral Cheyenne River eroded headward in an annular pattern around the eastern and southern Black Hills and pirated the headwaters of the ancestral Bad and White Rivers after ~ 660 ka. The headwaters of the ancestral Little Missouri River were pirated by the ancestral Belle Fourche River, a tributary to the Cheyenne River that currently drains much of the northern Black Hills. Optically stimulated luminescence (OSL) dating techniques were used to estimate the timing of this piracy event at ~ 22-21 ka. The geomorphic evolution of the Cheyenne and Belle Fourche Rivers is also expressed by regionally recognized strath terraces that include (from oldest to youngest) the Sturgis, Bear Butte, and Farmingdale terraces. Radiocarbon and OSL dates from fluvial deposits on these terraces indicate incision to the level of the Bear Butte terrace by ~ 63 ka, incision to the level of the Farmingdale terrace at ~ 40 ka, and incision to the level of the modern channel after ~ 12-9 ka. Similar dates of terrace incision have been reported for the Laramie and Wind River Ranges. Hypothesized causes of incision are the onset of colder climate during the middle Wisconsinan and the transition to the full-glacial climate of the late-Wisconsinan/Pinedale glaciation. Incision during the Holocene of the lower Cheyenne River is as much as ~ 80 m and is 3 to 4 times the magnitude of incision at ~ 63 ka and ~ 40 ka. The magnitude of incision during the Holocene might be due to a combined effect of three geomorphic processes acting in concert: glacial isostatic rebound in lower reaches (~ 40 m), a change from glacial to interglacial climate, and adjustments to increased watershed area resulting from piracy of the ancestral headwaters of the Little Missouri River.

  13. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  14. Attending to auditory memory.

    Science.gov (United States)

    Zimmermann, Jacqueline F; Moscovitch, Morris; Alain, Claude

    2016-06-01

    Attention to memory describes the process of attending to memory traces when the object is no longer present. It has been studied primarily for representations of visual stimuli, with only a few studies examining attention to sound object representations in short-term memory. Here, we review the interplay of attention and auditory memory with an emphasis on 1) attending to auditory memory in the absence of related external stimuli (i.e., reflective attention) and 2) effects of existing memory on guiding attention. Attention to auditory memory is discussed in the context of change deafness, and we argue that failures to detect changes in our auditory environments are most likely the result of a faulty comparison system of incoming and stored information. Also, objects are the primary building blocks of auditory attention, but attention can also be directed to individual features (e.g., pitch). We review short-term and long-term memory-guided modulation of attention based on characteristic features, location, and/or semantic properties of auditory objects, and propose that auditory attention to memory pathways emerge after sensory memory. A neural model for auditory attention to memory is developed, which comprises two separate pathways in the parietal cortex, one involved in attention to higher-order features and the other involved in attention to sensory information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. LHCb trigger streams optimization

    Science.gov (United States)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) at the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied to data to be recorded in 2017.
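
    As a rough illustration of the stream-composition problem described above, the sketch below groups trigger lines whose selected-event sets overlap most into a fixed number of output streams, so that analysis jobs reading one stream skip fewer irrelevant events. The greedy Jaccard-overlap heuristic and all names are assumptions for illustration; this is not the LHCb optimization method.

      import numpy as np
      from itertools import combinations

      def group_lines_into_streams(decisions, n_streams):
          # decisions: boolean array (n_events, n_lines), True if a line selects an event.
          # Greedy heuristic: repeatedly merge the two groups whose selected-event
          # sets overlap the most (Jaccard index), until n_streams groups remain.
          groups = [{i} for i in range(decisions.shape[1])]
          masks = [decisions[:, i].copy() for i in range(decisions.shape[1])]
          while len(groups) > n_streams:
              best, best_score = None, -1.0
              for i, j in combinations(range(len(groups)), 2):
                  inter = np.logical_and(masks[i], masks[j]).sum()
                  union = np.logical_or(masks[i], masks[j]).sum()
                  score = inter / union if union else 0.0
                  if score > best_score:
                      best, best_score = (i, j), score
              i, j = best
              groups[i] |= groups.pop(j)                      # j > i, so index i is unaffected
              masks[i] = np.logical_or(masks[i], masks.pop(j))
          return groups

      # toy example: 1000 events, 6 preselection lines, grouped into 3 output streams
      rng = np.random.default_rng(1)
      toy_decisions = rng.random((1000, 6)) < 0.05
      print(group_lines_into_streams(toy_decisions, 3))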

  16. Temporal envelope processing in the human auditory cortex: response and interconnections of auditory cortical areas.

    Science.gov (United States)

    Gourévitch, Boris; Le Bouquin Jeannès, Régine; Faucon, Gérard; Liégeois-Chauvel, Catherine

    2008-03-01

    Temporal envelope processing in the human auditory cortex has an important role in language analysis. In this paper, depth recordings of local field potentials in response to amplitude modulated white noises were used to design maps of activation in primary, secondary and associative auditory areas and to study the propagation of the cortical activity between them. The comparison of activations between auditory areas was based on a signal-to-noise ratio associated with the response to amplitude modulation (AM). The functional connectivity between cortical areas was quantified by the directed coherence (DCOH) applied to auditory evoked potentials. This study shows the following reproducible results in twenty subjects: (1) the primary auditory cortex (PAC), the secondary cortices (secondary auditory cortex (SAC) and planum temporale (PT)), the insular gyrus, the Brodmann area (BA) 22 and the posterior part of T1 gyrus (T1Post) respond to AM in both hemispheres. (2) A stronger response to AM was observed in SAC and T1Post of the left hemisphere independent of the modulation frequency (MF), and in the left BA22 for MFs of 8 and 16 Hz, compared to those in the right. (3) The activation and propagation features emphasized at least four different types of temporal processing. (4) A sequential activation of PAC, SAC and BA22 areas was clearly visible at all MFs, while other auditory areas may be more involved in parallel processing upon a stream originating from the primary auditory area, which thus acts as a distribution hub. These results suggest that different psychological information is carried by the temporal envelope of sounds relative to the rate of amplitude modulation.

  17. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  18. Stream Crossings

    Data.gov (United States)

    Vermont Center for Geographic Information — Physical measurements and attributes of stream crossing structures and adjacent stream reaches which are used to provide a relative rating of aquatic organism...

  19. Formation and stability of manganese-doped ZnS quantum dot monolayers determined by QCM-D and streaming potential measurements.

    Science.gov (United States)

    Oćwieja, Magdalena; Matras-Postołek, Katarzyna; Maciejewska-Prończuk, Julia; Morga, Maria; Adamczyk, Zbigniew; Sovinska, Svitlana; Żaba, Adam; Gajewska, Marta; Król, Tomasz; Cupiał, Klaudia; Bredol, Michael

    2017-10-01

    Manganese-doped ZnS quantum dots (QDs) stabilized by cysteamine hydrochloride were successfully synthesized. Their thorough physicochemical characteristics were acquired using UV-Vis absorption and photoluminescence spectroscopy, X-ray diffraction, dynamic light scattering (DLS), transmission electron microscopy (HR-TEM), energy dispersive spectroscopy (EDS) and Fourier transform infrared (FT-IR) spectroscopy. The average particle size, derived from HR-TEM, was 3.1 nm, which agrees with the hydrodynamic diameter acquired by DLS, equal to 3-4 nm depending on ionic strength. The quantum dots also exhibited a large positive zeta potential varying between 75 and 36 mV for ionic strengths of 10^-4 and 10^-2 M, respectively (at pH 6.2), and an intense luminescent emission at 590 nm. The quantum yield was equal to 31% and the optical band gap energy was equal to 4.26 eV. The kinetics of QD monolayer formation on silica substrates (silica sensors and oxidized silicon wafers) under convection-controlled transport was quantitatively evaluated by quartz crystal microbalance (QCM) and streaming potential measurements. A high stability of the monolayer for ionic strengths of 10^-4 and 10^-2 M was confirmed in these measurements. The experimental data were adequately reflected by the extended random sequential adsorption model (eRSA). Additionally, thorough electrokinetic characteristics of the QD monolayers and their stability for various ionic strengths and pH were acquired by streaming potential measurements carried out under in situ conditions. These results were quantitatively interpreted in terms of a three-dimensional (3D) electrokinetic model that furnished the bulk zeta potential of particles at high ionic strengths, which is impractical to obtain by other experimental techniques. It is concluded that these results can be used for the design of biosensors of controlled monolayer structure capable of binding various ligands via covalent as well as electrostatic interactions.
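
    The extended random sequential adsorption (eRSA) picture invoked above can be illustrated, in its simplest form, by classical hard-disk RSA: particles attach irreversibly at random positions only if they do not overlap previously adsorbed particles, and coverage saturates at a jamming limit. The Python sketch below is such a minimal illustration; the eRSA model used in the paper additionally accounts for transport and electrostatic effects, which are omitted here.

      import numpy as np

      def rsa_coverage(box=50.0, radius=1.0, attempts=100000, seed=0):
          # 2D random sequential adsorption of hard disks: a trial position is accepted
          # only if it does not overlap any previously adsorbed disk (irreversible attachment).
          rng = np.random.default_rng(seed)
          centers = np.empty((0, 2))
          for _ in range(attempts):
              trial = rng.random(2) * box
              if centers.size == 0 or np.min(np.hypot(*(centers - trial).T)) >= 2 * radius:
                  centers = np.vstack([centers, trial])
          return centers.shape[0] * np.pi * radius**2 / box**2

      # coverage approaches the hard-disk RSA jamming limit (about 0.547) for many attempts
      print(f"surface coverage = {rsa_coverage():.3f}")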

  20. No disillusions in auditory extinction: perceiving a melody comprised of unperceived notes

    Directory of Open Access Journals (Sweden)

    Leon Y Deouell

    2008-03-01

    Full Text Available The formation of coherent percepts requires grouping together spatio-temporally disparate sensory inputs. Two major questions arise: (1) is awareness necessary for this process; and (2) can non-conscious elements of the sensory input be grouped into a conscious percept? To address these questions, we tested two patients suffering from severe left auditory extinction following right hemisphere damage. In extinction, patients are unaware of the presence of left side stimuli when they are presented simultaneously with right side stimuli. We used the ‘scale illusion’ to test whether extinguished tones on the left can be incorporated into the content of conscious awareness. In the scale illusion, healthy listeners obtain the illusion of distinct melodies, which are the result of grouping of information from both ears into illusory auditory streams. We show that the two patients were susceptible to the scale illusion while being consciously unaware of the stimuli presented on their left. This suggests that awareness is not necessary for auditory grouping and non-conscious elements can be incorporated into a conscious percept.

  1. Cross-modal attention influences auditory contrast sensitivity: Decreasing visual load improves auditory thresholds for amplitude- and frequency-modulated sounds.

    Science.gov (United States)

    Ciaramitaro, Vivian M; Chow, Hiu Mei; Eglington, Luke G

    2017-03-01

    We used a cross-modal dual task to examine how changing visual-task demands influenced auditory processing, namely auditory thresholds for amplitude- and frequency-modulated sounds. Observers had to attend to two consecutive intervals of sounds and report which interval contained the auditory stimulus that was modulated in amplitude (Experiment 1) or frequency (Experiment 2). During auditory-stimulus presentation, observers simultaneously attended to a rapid sequential visual presentation (two consecutive intervals of streams of visual letters) and had to report which interval contained a particular color (low load, demanding less attentional resources) or, in separate blocks of trials, which interval contained more of a target letter (high load, demanding more attentional resources). We hypothesized that if attention is a shared resource across vision and audition, an easier visual task should free up more attentional resources for auditory processing on an unrelated task, hence improving auditory thresholds. Auditory detection thresholds were lower (that is, auditory sensitivity was improved) for both amplitude- and frequency-modulated sounds when observers engaged in a less demanding (compared to a more demanding) visual task. In accord with previous work, our findings suggest that visual-task demands can influence the processing of auditory information on an unrelated concurrent task, providing support for shared attentional resources. More importantly, our results suggest that attending to information in a different modality, cross-modal attention, can influence basic auditory contrast sensitivity functions, highlighting potential similarities between basic mechanisms for visual and auditory attention.
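
    Detection thresholds of the kind reported here are commonly estimated with adaptive staircase procedures. The sketch below shows a generic 2-down/1-up staircase (converging near 70.7% correct) driven by a simulated listener; the psychometric function, step size and other values are illustrative assumptions, not parameters of this study.

      import numpy as np

      def two_down_one_up(true_threshold_db=-20.0, start_db=-5.0, step_db=2.0,
                          n_reversals=10, slope=1.5, seed=0):
          # Generic 2-down/1-up adaptive staircase for a two-interval detection task.
          # The simulated listener answers correctly with a probability given by a
          # logistic psychometric function centred on `true_threshold_db`.
          rng = np.random.default_rng(seed)
          level, correct_in_row, last_dir = start_db, 0, 0
          reversals = []
          while len(reversals) < n_reversals:
              p_correct = 0.5 + 0.5 / (1.0 + np.exp(-slope * (level - true_threshold_db)))
              if rng.random() < p_correct:
                  correct_in_row += 1
                  if correct_in_row < 2:
                      continue                       # need two correct before stepping down
                  correct_in_row, direction = 0, -1  # two correct in a row -> harder
              else:
                  correct_in_row, direction = 0, +1  # one error -> easier
              if last_dir and direction != last_dir:
                  reversals.append(level)            # record the level at each reversal
              last_dir = direction
              level += direction * step_db
          return np.mean(reversals[2:])              # discard early reversals, average the rest

      print(f"estimated modulation-detection threshold: {two_down_one_up():.1f} dB")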

  2. Akamai Streaming

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    Akamai offers world-class streaming media services that enable Internet content providers and enterprises to succeed in today's Web-centric marketplace. They deliver live event Webcasts (complete with video production, encoding, and signal acquisition services), streaming media on demand, 24/7 Webcasts and a variety of streaming application services based upon their EdgeAdvantage.

  3. HOW THE DENSITY ENVIRONMENT CHANGES THE INFLUENCE OF THE DARK MATTER–BARYON STREAMING VELOCITY ON COSMOLOGICAL STRUCTURE FORMATION

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Kyungjin, E-mail: kjahn@chosun.ac.kr [Department of Earth Sciences, Chosun University, Gwangju 61452 (Korea, Republic of)

    2016-10-20

    We study the dynamical effect of the relative velocity between dark matter and baryonic fluids, which remained supersonic after the epoch of recombination. The impact of this supersonic motion on the formation of cosmological structures was first formulated by Tseliakhovich and Hirata, in terms of the linear theory of small-scale fluctuations coupled to large-scale, relative velocities in mean-density regions. In their formalism, they limited the large-scale density environment to be that of the global mean density. We improve on their formulation by allowing variation in the density environment as well as the relative velocities. This leads to a new type of coupling between large-scale and small-scale modes. We find that the small-scale fluctuation grows in a biased way: faster in the overdense environment and slower in the underdense environment. We also find that the net effect on the global power spectrum of the density fluctuation is to boost its overall amplitude from the prediction by Tseliakhovich and Hirata. Correspondingly, the conditional mass function of cosmological halos and the halo bias parameter are both affected in a similar way. The discrepancy between our prediction and that of Tseliakhovich and Hirata is significant, and therefore, the related cosmology and high-redshift astrophysics should be revisited. The mathematical formalism of this study can be used for generating cosmological initial conditions of small-scale perturbations in generic, overdense (underdense) background patches.

  4. Differential Recruitment of Auditory Cortices in the Consolidation of Recent Auditory Fearful Memories.

    Science.gov (United States)

    Cambiaghi, Marco; Grosso, Anna; Renna, Annamaria; Sacchetti, Benedetto

    2016-08-17

    Memories of frightening events require a protracted consolidation process. Sensory cortex, such as the auditory cortex, is involved in the formation of fearful memories with a more complex sensory stimulus pattern. It remains controversial, however, whether the auditory cortex is also required for fearful memories related to simple sensory stimuli. In the present study, we found that, 1 d after training, the temporary inactivation of either the most anterior region of the auditory cortex, including the primary (Te1) cortex, or the most posterior region, which included the secondary (Te2) component, did not affect the retention of recent memories, which is consistent with the current literature. However, at this time point, the inactivation of the entire auditory cortices completely prevented the formation of new memories. Amnesia was site specific and was not due to auditory stimuli perception or processing and strictly related to the interference with memory consolidation processes. Strikingly, at a late time interval 4 d after training, blocking the posterior part (encompassing the Te2) alone impaired memory retention, whereas the inactivation of the anterior part (encompassing the Te1) left memory unaffected. Together, these data show that the auditory cortex is necessary for the consolidation of auditory fearful memories related to simple tones in rats. Moreover, these results suggest that, at early time intervals, memory information is processed in a distributed network composed of both the anterior and the posterior auditory cortical regions, whereas, at late time intervals, memory processing is concentrated in the most posterior part containing the Te2 region. Memories of threatening experiences undergo a prolonged process of "consolidation" to be maintained for a long time. The dynamic of fearful memory consolidation is poorly understood. Here, we show that 1 d after learning, memory is processed in a distributed network composed of both primary Te1 and

  5. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  6. Discovering Structure in Auditory Input: Evidence from Williams Syndrome

    Science.gov (United States)

    Elsabbagh, Mayada; Cohen, Henri; Karmiloff-Smith, Annette

    2010-01-01

    We examined auditory perception in Williams syndrome by investigating strategies used in organizing sound patterns into coherent units. In Experiment 1, we investigated the streaming of sound sequences into perceptual units, on the basis of pitch cues, in a group of children and adults with Williams syndrome compared to typical controls. We showed…

  7. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, which is represented first in the auditory brain.

  8. Hierarchical auditory processing directed rostrally along the monkey's supratemporal plane.

    Science.gov (United States)

    Kikuchi, Yukiko; Horwitz, Barry; Mishkin, Mortimer

    2010-09-29

    Connectional anatomical evidence suggests that the auditory core, containing the tonotopic areas A1, R, and RT, constitutes the first stage of auditory cortical processing, with feedforward projections from core outward, first to the surrounding auditory belt and then to the parabelt. Connectional evidence also raises the possibility that the core itself is serially organized, with feedforward projections from A1 to R and with additional projections, although of unknown feed direction, from R to RT. We hypothesized that area RT together with more rostral parts of the supratemporal plane (rSTP) form the anterior extension of a rostrally directed stimulus quality processing stream originating in the auditory core area A1. Here, we analyzed auditory responses of single neurons in three different sectors distributed caudorostrally along the supratemporal plane (STP): sector I, mainly area A1; sector II, mainly area RT; and sector III, principally RTp (the rostrotemporal polar area), including cortex located 3 mm from the temporal tip. Mean onset latency of excitation responses and stimulus selectivity to monkey calls and other sounds, both simple and complex, increased progressively from sector I to III. Also, whereas cells in sector I responded with significantly higher firing rates to the "other" sounds than to monkey calls, those in sectors II and III responded at the same rate to both stimulus types. The pattern of results supports the proposal that the STP contains a rostrally directed, hierarchically organized auditory processing stream, with gradually increasing stimulus selectivity, and that this stream extends from the primary auditory area to the temporal pole.

  9. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    Science.gov (United States)

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that with auditory sensory intelligence, the human brain can automatically extract the abstract auditory rules in the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by the syllables with the rising, dipping or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by the whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience. Copyright © 2018 IBRO. Published by Elsevier Ltd. All rights reserved.

  10. Multiple time scales of adaptation in auditory cortex neurons.

    Science.gov (United States)

    Ulanovsky, Nachum; Las, Liora; Farkas, Dina; Nelken, Israel

    2004-11-17

    Neurons in primary auditory cortex (A1) of cats show strong stimulus-specific adaptation (SSA). In probabilistic settings, in which one stimulus is common and another is rare, responses to common sounds adapt more strongly than responses to rare sounds. This SSA could be a correlate of auditory sensory memory at the level of single A1 neurons. Here we studied adaptation in A1 neurons, using three different probabilistic designs. We showed that SSA has several time scales concurrently, spanning many orders of magnitude, from hundreds of milliseconds to tens of seconds. Similar time scales are known for the auditory memory span of humans, as measured both psychophysically and using evoked potentials. A simple model, with linear dependence on both short-term and long-term stimulus history, provided a good fit to A1 responses. Auditory thalamus neurons did not show SSA, and their responses were poorly fitted by the same model. In addition, SSA increased the proportion of failures in the responses of A1 neurons to the adapting stimulus. Finally, SSA caused a bias in the neuronal responses to unbiased stimuli, enhancing the responses to eccentric stimuli. Therefore, we propose that a major function of SSA in A1 neurons is to encode auditory sensory memory on multiple time scales. This SSA might play a role in stream segregation and in binding of auditory objects over many time scales, a property that is crucial for processing of natural auditory scenes in cats and of speech and music in humans.
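
    The "simple model, with linear dependence on both short-term and long-term stimulus history" can be illustrated by a predictor in which the response to each tone is reduced in proportion to that tone's own recent history, tracked on a short and a long exponential time scale. The sketch below is such an illustration with assumed time constants and weights; it reproduces the qualitative common/rare asymmetry but is not the fitted model from the study.

      import numpy as np

      def adapted_responses(stim, tau_short=1.0, tau_long=30.0, w_short=0.5,
                            w_long=0.3, isi=0.5, r0=1.0):
          # Response to each tone (label 0 or 1) is reduced in proportion to that tone's
          # own recent presentation history, tracked on a short and a long time scale.
          h_short, h_long = np.zeros(2), np.zeros(2)
          out = []
          for s in stim:
              out.append(r0 * max(0.0, 1.0 - w_short * h_short[s] - w_long * h_long[s]))
              h_short *= np.exp(-isi / tau_short)   # decay between presentations
              h_long *= np.exp(-isi / tau_long)
              h_short[s] += 1.0                     # increment the presented tone's traces
              h_long[s] += 1.0
          return np.array(out)

      # oddball sequence: tone 0 is common (90%), tone 1 is rare (10%)
      rng = np.random.default_rng(0)
      seq = (rng.random(400) < 0.1).astype(int)
      resp = adapted_responses(seq)
      print("mean response, common vs rare:", resp[seq == 0].mean(), resp[seq == 1].mean())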

  11. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  12. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactive disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before and after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  13. The Overview and Appliance of some Streaming Video software solutions

    OpenAIRE

    Qin , Yan

    2010-01-01

    This paper introduces basic streaming media technology: the streaming media system structure, principles of streaming media technology, streaming media file formats and so on. After that, it discusses the use of streaming media in distance education, broadband video on demand, Internet broadcasting and video conferences, with a more detailed exposition of streaming media. As the existing technology has been unable to satisfy the increasing needs of Internet users, the streaming media technol...

  14. Stream systems.

    Science.gov (United States)

    Jack E. Williams; Gordon H. Reeves

    2006-01-01

    Restored, high-quality streams provide innumerable benefits to society. In the Pacific Northwest, high-quality stream habitat often is associated with an abundance of salmonid fishes such as chinook salmon (Oncorhynchus tshawytscha), coho salmon (O. kisutch), and steelhead (O. mykiss). Many other native...

  15. Evidence of functional connectivity between auditory cortical areas revealed by amplitude modulation sound processing.

    Science.gov (United States)

    Guéguin, Marie; Le Bouquin-Jeannès, Régine; Faucon, Gérard; Chauvel, Patrick; Liégeois-Chauvel, Catherine

    2007-02-01

    The human auditory cortex includes several interconnected areas. A better understanding of the mechanisms involved in auditory cortical functions requires a detailed knowledge of neuronal connectivity between functional cortical regions. In humans, it is difficult to track in vivo neuronal connectivity. We investigated the interarea connection in vivo in the auditory cortex using a method of directed coherence (DCOH) applied to depth auditory evoked potentials (AEPs). This paper presents simultaneous AEP recordings from insular gyrus (IG), primary and secondary cortices (Heschl's gyrus and planum temporale), and associative areas (Brodmann area [BA] 22) with multilead intracerebral electrodes in response to sinusoidally modulated white noises in 4 epileptic patients who underwent invasive monitoring with depth electrodes for epilepsy surgery. DCOH allowed estimation of the causality between 2 signals recorded from different cortical sites. The results showed 1) a predominant auditory stream within the primary auditory cortex from the most medial region to the most lateral one regardless of the modulation frequency, 2) a unidirectional functional connection from the primary to the secondary auditory cortex, 3) a major auditory propagation from the posterior areas to the anterior ones, particularly at 8, 16, and 32 Hz, and 4) a particular role of Heschl's sulcus in dispatching information to the different auditory areas. These findings suggest that cortical processing of auditory information is performed in serial and parallel streams. Our data showed that the auditory propagation could not be attributed to a unidirectional traveling wave but rather to a constant interaction between these areas that could reflect the large adaptive and plastic capacities of the auditory cortex. The role of the IG is discussed.

  16. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  17. Stream Evaluation

    Data.gov (United States)

    Kansas Data Access and Support Center — Digital representation of the map accompanying the "Kansas stream and river fishery resource evaluation" (R.E. Moss and K. Brunson, 1981.U.S. Fish and Wildlife...

  18. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  19. Modularity in Sensory Auditory Memory

    OpenAIRE

    Clement, Sylvain; Moroni, Christine; Samson, Séverine

    2004-01-01

    The goal of this paper was to review various experimental and neuropsychological studies that support the modular conception of auditory sensory memory or auditory short-term memory. Based on initial findings demonstrating that the verbal sensory memory system can be dissociated from a general auditory memory store at the functional and anatomical levels, we reported a series of studies that provided evidence in favor of multiple auditory sensory stores specialized in retaining eit...

  20. ADAPTIVE STREAMING OVER HTTP (DASH UNTUK APLIKASI VIDEO STREAMING

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper aims to analyze an Internet-based streaming video service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH) on an Internet network, adapting to the Hyper Text Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages that will be streamed. The initial DASH stage compresses the video source to lower bit rates using an H.26 video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packages are assembled and described in a streaming media format, the Media Presentation Description (MPD), known as MPEG-DASH. The MPEG-DASH video stream runs on a platform with the bitdash player integrated with bitcoin. With this scheme, the video has several bit-rate variants, which gives rise to the concept of scalability of streaming video services on the client side. The main target of the mechanism is a smooth display of the MPEG-DASH streaming video on the client. The simulation results show that the MPEG-DASH-based scalable video streaming scheme is able to improve the quality of the image displayed on the client side, where video buffering can be made constant and smooth for the duration of video viewing.
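
    The client-side scalability described above amounts to choosing, for each segment, the highest-bit-rate representation advertised in the MPD that the recently measured throughput can sustain while the playback buffer stays healthy. The sketch below is a generic throughput-based heuristic with illustrative names and values; it is not the algorithm of the bitdash player used in the paper.

      def choose_representation(bitrates_bps, recent_throughput_bps, buffer_s,
                                safety=0.8, min_buffer_s=5.0):
          # Pick a representation (bit rate) for the next segment from those listed in the MPD.
          # Falls back to the lowest bit rate when the playback buffer runs low, otherwise
          # selects the highest bit rate that fits within a safety margin of the throughput.
          if buffer_s < min_buffer_s:
              return min(bitrates_bps)
          budget = safety * recent_throughput_bps
          feasible = [b for b in sorted(bitrates_bps) if b <= budget]
          return feasible[-1] if feasible else min(bitrates_bps)

      # example: representations of 0.5, 1 and 3 Mbit/s, 2.2 Mbit/s measured throughput,
      # 12 s of buffered video -> the 1 Mbit/s representation is chosen
      print(choose_representation([500e3, 1e6, 3e6], 2.2e6, 12.0))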

  1. Integration of Visual Information in Auditory Cortex Promotes Auditory Scene Analysis through Multisensory Binding.

    Science.gov (United States)

    Atilgan, Huriye; Town, Stephen M; Wood, Katherine C; Jones, Gareth P; Maddox, Ross K; Lee, Adrian K C; Bizley, Jennifer K

    2018-02-07

    How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  2. Auditory Memory for Timbre

    Science.gov (United States)

    McKeown, Denis; Wellsted, David

    2009-01-01

    Psychophysical studies are reported examining how the context of recent auditory stimulation may modulate the processing of new sounds. The question posed is how recent tone stimulation may affect ongoing performance in a discrimination task. In the task, two complex sounds occurred in successive intervals. A single target component of one complex…

  3. Auditory evacuation beacons

    NARCIS (Netherlands)

    Wijngaarden, S.J. van; Bronkhorst, A.W.; Boer, L.C.

    2005-01-01

    Auditory evacuation beacons can be used to guide people to safe exits, even when vision is totally obscured by smoke. Conventional beacons make use of modulated noise signals. Controlled evacuation experiments show that such signals require explicit instructions and are often misunderstood. A new

  4. Auditory object formation affects modulation perception

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2005-01-01

    Most sounds in our environment, including speech, music, animal vocalizations and environmental noise, have fluctuations in intensity that are often highly correlated across different frequency regions. Because across-frequency modulation is so common, the ability to process such information...... is thought to be a powerful survival strategy in the natural world (Klump, 1996; Nelken et al., 1999). Coherent modulations in one sound can aid in the detection of another sound (Hall et al., 1984; Durlach, 1963). On the other hand, modulation in one frequency region can also impede the detection...... or discrimination of modulation in other frequency regions (Yost et al., 1989). Although the neural substrates for across-frequency modulation processing remain unclear, recent studies have concentrated on brainstem structures (Pressnitzer et al., 2001). In this study it is shown that sounds occurring after...

  5. Role of hydrous iron oxide formation in attenuation and diel cycling of dissolved trace metals in a stream affected by acid rock drainage

    Science.gov (United States)

    Parker, S.R.; Gammons, C.H.; Jones, Clain A.; Nimick, D.A.

    2007-01-01

    Mining-impacted streams have been shown to undergo diel (24-h) fluctuations in concentrations of major and trace elements. Fisher Creek in south-central Montana, USA receives acid rock drainage (ARD) from natural and mining-related sources. A previous diel field study found substantial changes in dissolved metal concentrations at three sites with differing pH regimes during a 24-h period in August 2002. The current work discusses follow-up field sampling of Fisher Creek as well as field and laboratory experiments that examine in greater detail the underlying processes involved in the observed diel concentration changes. The field experiments employed in-stream chambers that were either transparent or opaque to light, filled with stream water and sediment (cobbles coated with hydrous Fe and Al oxides), and placed in the stream to maintain the same temperature. Three sets of laboratory experiments were performed: (1) equilibration of a Cu(II) and Zn(II) containing solution with Fisher Creek stream sediment at pH 6.9 and different temperatures; (2) titration of Fisher Creek water from pH 3.1 to 7 under four different isothermal conditions; and (3) analysis of the effects of temperature on the interaction of an Fe(II) containing solution with Fisher Creek stream sediment under non-oxidizing conditions. Results of these studies are consistent with a model in which Cu, Fe(II), and to a lesser extent Zn, are adsorbed or co-precipitated with hydrous Fe and Al oxides as the pH of Fisher Creek increases from 5.3 to 7.0. The extent of metal attenuation is strongly temperature-dependent, being more pronounced in warm vs. cold water. Furthermore, the sorption/co-precipitation process is shown to be irreversible; once the Cu, Zn, and Fe(II) are removed from solution in warm water, a decrease in temperature does not release the metals back to the water column. © 2006 Springer Science+Business Media B.V.

  6. Temporal integration of sequential auditory events: silent period in sound pattern activates human planum temporale.

    Science.gov (United States)

    Mustovic, Henrietta; Scheffler, Klaus; Di Salle, Francesco; Esposito, Fabrizio; Neuhoff, John G; Hennig, Jürgen; Seifritz, Erich

    2003-09-01

    Temporal integration is a fundamental process that the brain carries out to construct coherent percepts from serial sensory events. This process critically depends on the formation of memory traces reconciling past with present events and is particularly important in the auditory domain where sensory information is received both serially and in parallel. It has been suggested that buffers for transient auditory memory traces reside in the auditory cortex. However, previous studies investigating "echoic memory" did not distinguish between brain response to novel auditory stimulus characteristics on the level of basic sound processing and a higher level involving matching of present with stored information. Here we used functional magnetic resonance imaging in combination with a regular pattern of sounds repeated every 100 ms and deviant interspersed stimuli of 100-ms duration, which were either brief presentations of louder sounds or brief periods of silence, to probe the formation of auditory memory traces. To avoid interaction with scanner noise, the auditory stimulation sequence was implemented into the image acquisition scheme. Compared to increased loudness events, silent periods produced specific neural activation in the right planum temporale and temporoparietal junction. Our findings suggest that this area posterior to the auditory cortex plays a critical role in integrating sequential auditory events and is involved in the formation of short-term auditory memory traces. This function of the planum temporale appears to be fundamental in the segregation of simultaneous sound sources.

  7. Development of the auditory system

    Science.gov (United States)

    Litovsky, Ruth

    2015-01-01

    Auditory development involves changes in the peripheral and central nervous system along the auditory pathways, and these occur naturally, and in response to stimulation. Human development occurs along a trajectory that can last decades, and is studied using behavioral psychophysics, as well as physiologic measurements with neural imaging. The auditory system constructs a perceptual space that takes information from objects and groups, segregates sounds, and provides meaning and access to communication tools such as language. Auditory signals are processed in a series of analysis stages, from peripheral to central. Coding of information has been studied for features of sound, including frequency, intensity, loudness, and location, in quiet and in the presence of maskers. In the latter case, the ability of the auditory system to perform an analysis of the scene becomes highly relevant. While some basic abilities are well developed at birth, there is a clear prolonged maturation of auditory development well into the teenage years. Maturation involves auditory pathways. However, non-auditory changes (attention, memory, cognition) play an important role in auditory development. The ability of the auditory system to adapt in response to novel stimuli is a key feature of development throughout the nervous system, known as neural plasticity. PMID:25726262

  8. Auditory interfaces: The human perceiver

    Science.gov (United States)

    Colburn, H. Steven

    1991-01-01

    A brief introduction to the basic auditory abilities of the human perceiver with particular attention toward issues that may be important for the design of auditory interfaces is presented. The importance of appropriate auditory inputs to observers with normal hearing is probably related to the role of hearing as an omnidirectional, early warning system and to its role as the primary vehicle for communication of strong personal feelings.

  9. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole
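
    Cortico-acoustic coupling of the kind reported here is typically quantified as the coherence between the temporal envelope of the attended speech and the neural recording, evaluated in the frequency bands of interest. The sketch below computes magnitude-squared coherence with scipy.signal on synthetic signals; the sampling rate, segment length and signal construction are illustrative assumptions, not the corticovocal coherence pipeline of the study.

      import numpy as np
      from scipy.signal import coherence

      fs = 200.0                             # sampling rate (Hz), illustrative
      t = np.arange(0, 120, 1 / fs)          # two minutes of data
      # synthetic speech envelope with slow (~0.5 Hz) and syllabic (~5 Hz) modulations
      envelope = 1 + 0.5 * np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 5 * t)
      # synthetic "cortical" signal that partially tracks the envelope, plus noise
      rng = np.random.default_rng(0)
      neural = 0.4 * envelope + rng.standard_normal(t.size)

      f, coh = coherence(envelope, neural, fs=fs, nperseg=int(20 * fs))
      for target in (0.5, 5.0):
          idx = np.argmin(np.abs(f - target))
          print(f"coherence at {target:.1f} Hz: {coh[idx]:.2f}")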

  10. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under constant debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience, by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks that were found superior for musicians. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task. This is the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  11. An investigation of reaction parameters on geochemical storage of non-pure CO2 streams in iron oxides-bearing formations

    Energy Technology Data Exchange (ETDEWEB)

    Garcia, Susana; Liu, Q.; Bacon, Diana H.; Maroto-Valer, M. M.

    2014-08-26

    Hematite, the main Fe(III)-bearing mineral in sedimentary red beds, was proposed as a potential host repository for converting CO2 into carbonate minerals such as siderite (FeCO3), when CO2-SO2 gas mixtures are co-injected. This work investigated CO2 mineral trapping using hematite and the sensitivity of the reactive systems to different parameters, including particle size, gas composition, temperature, pressure, and solid-to-liquid ratio. Experimental and modelling studies of hydrothermal experiments were conducted, which emulated a CO2 sequestration scenario by injecting CO2-SO2 gas streams into a NaCl-NaOH brine hosted in an iron oxide-containing aquifer. This study provides novel information on the mineralogical changes and fluid chemistry derived from the co-injection of CO2-SO2 gas mixtures into hematite deposits. It can be concluded that the amount of siderite precipitate depends primarily on the SO2 content of the gas stream. Increasing the SO2 content in the system could promote the reduction of Fe(III) from the hematite sample to Fe(II), which will be further available for precipitation as siderite. Moreover, siderite precipitation is enhanced at low temperatures and high pressures. The influence of the solid-to-liquid ratio on the overall carbonation reaction suggests that the conversion increases if the system becomes more diluted.

  12. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD), in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  13. Auditory Reserve and the Legacy of Auditory Experience

    Directory of Open Access Journals (Sweden)

    Erika Skoe

    2014-11-01

    Full Text Available Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence on sensory processing may be less long-lasting and may potentially fade over time if not repeated. This auditory reserve may help to explain individual differences in how individuals cope with auditory impoverishment or loss of sensorineural function.

  14. Auditory changes in acromegaly.

    Science.gov (United States)

    Tabur, S; Korkmaz, H; Baysal, E; Hatipoglu, E; Aytac, I; Akarsu, E

    2017-06-01

    The aim of this study is to determine the changes involving the auditory system in cases with acromegaly. Otological examinations of 41 cases with acromegaly (uncontrolled n = 22, controlled n = 19) were compared with those of 24 age- and gender-matched healthy subjects. Whereas the cases with acromegaly underwent examination with pure tone audiometry (PTA), speech audiometry for speech discrimination (SD), tympanometry, stapedius reflex evaluation and otoacoustic emission tests, the control group only had otological examination and PTA. Additionally, previously performed paranasal sinus computed tomography of all cases with acromegaly and control subjects was obtained to measure the length of the internal acoustic canal (IAC). PTA values were higher in the acromegaly group, and the IAC in the acromegaly group was narrower compared to that in the control group (p = 0.03 for right ears and p = 0.02 for left ears). When only cases with acromegaly were taken into consideration, PTA values in left ears had a positive correlation with growth hormone and insulin-like growth factor-1 levels (r = 0.4, p = 0.02 and r = 0.3, p = 0.03). Of all cases with acromegaly, 13 (32%) had hearing loss in at least one ear, 7 (54%) had sensorineural type and 6 (46%) had conductive type hearing loss. Acromegaly may cause certain changes in the auditory system. These changes may be multifactorial, causing both conductive and sensorineural defects.

  15. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such effect may not be limited to negative emotions but reflect a general depletion of attentional resources by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of the exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.

  16. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S cases of idiopathic partial epilepsy with auditory features (IPEAF were analyzed and compared to previously reported familial (F cases of autosomal dominant partial epilepsy with auditory features (ADPEAF in a study at the University of Bologna, Italy.

  17. Word Recognition in Auditory Cortex

    Science.gov (United States)

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  18. Prestimulus subsequent memory effects for auditory and visual events.

    Science.gov (United States)

    Otten, Leun J; Quayle, Angela H; Puvaneswaran, Bhamini

    2010-06-01

    It has been assumed that the effective encoding of information into memory primarily depends on neural activity elicited when an event is initially encountered. Recently, it has been shown that memory formation also relies on neural activity just before an event. The precise role of such activity in memory is currently unknown. Here, we address whether prestimulus activity affects the encoding of auditory and visual events, is set up on a trial-by-trial basis, and varies as a function of the type of recognition judgment an item later receives. Electrical brain activity was recorded from the scalps of 24 healthy young adults while they made semantic judgments on randomly intermixed series of visual and auditory words. Each word was preceded by a cue signaling the modality of the upcoming word. Auditory words were preceded by auditory cues and visual words by visual cues. A recognition memory test with remember/know judgments followed after a delay of about 45 min. As observed previously, a negative-going, frontally distributed modulation just before visual word onset predicted later recollection of the word. Crucially, the same effect was found for auditory words and observed on stay as well as switch trials. These findings emphasize the flexibility and general role of prestimulus activity in memory formation, and support a functional interpretation of the activity in terms of semantic preparation. At least with an unpredictable trial sequence, the activity is set up anew on each trial.

  19. Intrinsic Connections of the Core Auditory Cortical Regions and Rostral Supratemporal Plane in the Macaque Monkey.

    Science.gov (United States)

    Scott, Brian H; Leccese, Paul A; Saleem, Kadharbatcha S; Kikuchi, Yukiko; Mullarkey, Matthew P; Fukushima, Makoto; Mishkin, Mortimer; Saunders, Richard C

    2017-01-01

    In the ventral stream of the primate auditory cortex, cortico-cortical projections emanate from the primary auditory cortex (AI) along 2 principal axes: one mediolateral, the other caudorostral. Connections in the mediolateral direction from core, to belt, to parabelt, have been well described, but less is known about the flow of information along the supratemporal plane (STP) in the caudorostral dimension. Neuroanatomical tracers were injected throughout the caudorostral extent of the auditory core and rostral STP by direct visualization of the cortical surface. Auditory cortical areas were distinguished by SMI-32 immunostaining for neurofilament, in addition to established cytoarchitectonic criteria. The results describe a pathway comprising step-wise projections from AI through the rostral and rostrotemporal fields of the core (R and RT), continuing to the recently identified rostrotemporal polar field (RTp) and the dorsal temporal pole. Each area was strongly and reciprocally connected with the areas immediately caudal and rostral to it, though deviations from strictly serial connectivity were observed. In RTp, inputs converged from core, belt, parabelt, and the auditory thalamus, as well as higher order cortical regions. The results support a rostrally directed flow of auditory information with complex and recurrent connections, similar to the ventral stream of macaque visual cortex. Published by Oxford University Press 2015. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  20. Sustained selective attention to competing amplitude-modulations in human auditory cortex.

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control.

  1. Sustained Selective Attention to Competing Amplitude-Modulations in Human Auditory Cortex

    Science.gov (United States)

    Riecke, Lars; Scharke, Wolfgang; Valente, Giancarlo; Gutschalk, Alexander

    2014-01-01

    Auditory selective attention plays an essential role for identifying sounds of interest in a scene, but the neural underpinnings are still incompletely understood. Recent findings demonstrate that neural activity that is time-locked to a particular amplitude-modulation (AM) is enhanced in the auditory cortex when the modulated stream of sounds is selectively attended to under sensory competition with other streams. However, the target sounds used in the previous studies differed not only in their AM, but also in other sound features, such as carrier frequency or location. Thus, it remains uncertain whether the observed enhancements reflect AM-selective attention. The present study aims at dissociating the effect of AM frequency on response enhancement in auditory cortex by using an ongoing auditory stimulus that contains two competing targets differing exclusively in their AM frequency. Electroencephalography results showed a sustained response enhancement for auditory attention compared to visual attention, but not for AM-selective attention (attended AM frequency vs. ignored AM frequency). In contrast, the response to the ignored AM frequency was enhanced, although a brief trend toward response enhancement occurred during the initial 15 s. Together with the previous findings, these observations indicate that selective enhancement of attended AMs in auditory cortex is adaptive under sustained AM-selective attention. This finding has implications for our understanding of cortical mechanisms for feature-based attentional gain control. PMID:25259525

  2. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  3. A Dual-Stream Neuroanatomy of Singing.

    Science.gov (United States)

    Loui, Psyche

    2015-02-01

    Singing requires effortless and efficient use of auditory and motor systems that center around the perception and production of the human voice. Although perception and production are usually tightly coupled functions, occasional mismatches between the two systems inform us of dissociable pathways in the brain systems that enable singing. Here I review the literature on perception and production in the auditory modality, and propose a dual-stream neuroanatomical model that subserves singing. I will discuss studies surrounding the neural functions of feedforward, feedback, and efference systems that control vocal monitoring, as well as the white matter pathways that connect frontal and temporal regions that are involved in perception and production. I will also consider disruptions of the perception-production network that are evident in tone-deaf individuals and poor pitch singers. Finally, by comparing expert singers against other musicians and nonmusicians, I will evaluate the possibility that singing training might offer rehabilitation from these disruptions through neuroplasticity of the perception-production network. Taken together, the best available evidence supports a model of dorsal and ventral pathways in auditory-motor integration that enables singing and is shared with language, music, speech, and human interactions in the auditory environment.

  4. Infant auditory short-term memory for non-linguistic sounds.

    Science.gov (United States)

    Ross-Sheehy, Shannon; Newman, Rochelle S

    2015-04-01

    This research explores auditory short-term memory (STM) capacity for non-linguistic sounds in 10-month-old infants. Infants were presented with auditory streams composed of repeating sequences of either 2 or 4 unique instruments (e.g., flute, piano, cello; 350 or 700 ms in duration) followed by a 500-ms retention interval. These instrument sequences either stayed the same for every repetition (Constant) or changed by 1 instrument per sequence (Varying). Using the head-turn preference procedure, infant listening durations were recorded for each stream type (2- or 4-instrument sequences composed of 350- or 700-ms notes). Preference for the Varying stream was taken as evidence of auditory STM because detection of the novel instrument required memory for all of the instruments in a given sequence. Results demonstrate that infants listened longer to Varying streams for 2-instrument sequences, but not 4-instrument sequences, composed of 350-ms notes (Experiment 1), although this effect did not hold when note durations were increased to 700 ms (Experiment 2). Experiment 3 replicates and extends results from Experiments 1 and 2 and provides support for a duration account of capacity limits in infant auditory STM. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    Science.gov (United States)

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  6. Statistical learning of recurring sound patterns encodes auditory objects in songbird forebrain.

    Science.gov (United States)

    Lu, Kai; Vicario, David S

    2014-10-07

    Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.

  7. Modality effects in delayed free recall and recognition: visual is better than auditory.

    Science.gov (United States)

    Penney, C G

    1989-08-01

    During presentation of auditory and visual lists of words, different groups of subjects generated words that either rhymed with the presented words or that were associates. Immediately after list presentation, subjects recalled either the presented or the generated words. After presentation and test of all lists, a final free recall test and a recognition test were given. Visual presentation generally produced higher recall and recognition than did auditory presentation for both encoding conditions. The results are not consistent with explanations of modality effects in terms of echoic memory or greater temporal distinctiveness of auditory items. The results are more in line with the separate-streams hypothesis, which argues for different kinds of input processing for auditory and visual items.

  8. Developmental programming of auditory learning

    Directory of Open Access Journals (Sweden)

    Melania Puddu

    2012-10-01

    Full Text Available The basic structures involved in the development of auditory function, and consequently in language acquisition, are directed by the genetic code, but the expression of individual genes may be altered by exposure to environmental factors, which, if favorable, orient development in the proper direction, towards normality, and, if unfavorable, deviate it from its physiological course. Early sensorial experience during the foetal period (i.e. the intrauterine noise floor, sounds coming from the outside and attenuated by the uterine filter, particularly the mother's voice) and the modifications it induces at the cochlear level represent the first example of programming in one of the earliest critical periods in the development of the auditory system. This review will examine the factors that influence the developmental programming of auditory learning from the womb to infancy. In particular it focuses on the following points: the prenatal auditory experience and the plastic phenomena presumably induced by it in the auditory system, from the basilar membrane to the cortex; the involvement of these phenomena in language acquisition and in the perception of communicative intention after birth; and the consequences of auditory deprivation in critical periods of auditory development (i.e. premature interruption of foetal life).

  9. Neural entrainment to rhythmically-presented auditory, visual and audio-visual speech in children

    Directory of Open Access Journals (Sweden)

    Alan James Power

    2012-07-01

    Full Text Available Auditory cortical oscillations have been proposed to play an important role in speech perception. It is suggested that the brain may take temporal ‘samples’ of information from the speech stream at different rates, phase-resetting ongoing oscillations so that they are aligned with similar frequency bands in the input (‘phase locking’). Information from these frequency bands is then bound together for speech perception. To date, there are no explorations of neural phase-locking and entrainment to speech input in children. However, it is clear from studies of language acquisition that infants use both visual speech information and auditory speech information in learning. In order to study neural entrainment to speech in typically-developing children, we use a rhythmic entrainment paradigm (underlying 2 Hz or delta rate) based on repetition of the syllable ba, presented in either the auditory modality alone, the visual modality alone, or as auditory-visual speech (via a talking head). To ensure attention to the task, children aged 13 years were asked to press a button as fast as possible when the ba stimulus violated the rhythm for each stream type. Rhythmic violation depended on delaying the occurrence of a ba in the isochronous stream. Neural entrainment was demonstrated for all stream types, and individual differences in standardized measures of language processing were related to auditory entrainment at the theta rate. Further, there was significant modulation of the preferred phase of auditory entrainment in the theta band when visual speech cues were present, indicating cross-modal phase resetting. The rhythmic entrainment paradigm developed here offers a method for exploring individual differences in oscillatory phase locking during development. In particular, a method for assessing neural entrainment and cross-modal phase resetting would be useful for exploring developmental learning difficulties thought to involve temporal sampling.

  10. Electrophysiological evidence for altered visual, but not auditory, selective attention in adolescent cochlear implant users.

    Science.gov (United States)

    Harris, Jill; Kamke, Marc R

    2014-11-01

    Selective attention fundamentally alters sensory perception, but little is known about the functioning of attention in individuals who use a cochlear implant. This study aimed to investigate visual and auditory attention in adolescent cochlear implant users. Event related potentials were used to investigate the influence of attention on visual and auditory evoked potentials in six cochlear implant users and age-matched normally-hearing children. Participants were presented with streams of alternating visual and auditory stimuli in an oddball paradigm: each modality contained frequently presented 'standard' and infrequent 'deviant' stimuli. Across different blocks attention was directed to either the visual or auditory modality. For the visual stimuli attention boosted the early N1 potential, but this effect was larger for cochlear implant users. Attention was also associated with a later P3 component for the visual deviant stimulus, but there was no difference between groups in the later attention effects. For the auditory stimuli, attention was associated with a decrease in N1 latency as well as a robust P3 for the deviant tone. Importantly, there was no difference between groups in these auditory attention effects. The results suggest that basic mechanisms of auditory attention are largely normal in children who are proficient cochlear implant users, but that visual attention may be altered. Ultimately, a better understanding of how selective attention influences sensory perception in cochlear implant users will be important for optimising habilitation strategies. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  12. Auditory short-term memory in the primate auditory cortex

    OpenAIRE

    Scott, Brian H.; Mishkin, Mortimer

    2015-01-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive sho...

  13. Defining Auditory-Visual Objects: Behavioral Tests and Physiological Mechanisms.

    Science.gov (United States)

    Bizley, Jennifer K; Maddox, Ross K; Lee, Adrian K C

    2016-02-01

    Crossmodal integration is a term applicable to many phenomena in which one sensory modality influences task performance or perception in another sensory modality. We distinguish the term binding as one that should be reserved specifically for the process that underpins perceptual object formation. To unambiguously differentiate binding from other types of integration, behavioral and neural studies must investigate perception of a feature orthogonal to the features that link the auditory and visual stimuli. We argue that supporting true perceptual binding (as opposed to other processes such as decision-making) is one role for cross-sensory influences in early sensory cortex. These early multisensory interactions may therefore form a physiological substrate for the bottom-up grouping of auditory and visual stimuli into auditory-visual (AV) objects. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Maps of the Auditory Cortex.

    Science.gov (United States)

    Brewer, Alyssa A; Barton, Brian

    2016-07-08

    One of the fundamental properties of the mammalian brain is that sensory regions of cortex are formed of multiple, functionally specialized cortical field maps (CFMs). Each CFM comprises two orthogonal topographical representations, reflecting two essential aspects of sensory space. In auditory cortex, auditory field maps (AFMs) are defined by the combination of tonotopic gradients, representing the spectral aspects of sound (i.e., tones), with orthogonal periodotopic gradients, representing the temporal aspects of sound (i.e., period or temporal envelope). Converging evidence from cytoarchitectural and neuroimaging measurements underlies the definition of 11 AFMs across core and belt regions of human auditory cortex, with likely homology to those of macaque. On a macrostructural level, AFMs are grouped into cloverleaf clusters, an organizational structure also seen in visual cortex. Future research can now use these AFMs to investigate specific stages of auditory processing, key for understanding behaviors such as speech perception and multimodal sensory integration.

  15. Demodulation Processes in Auditory Perception

    National Research Council Canada - National Science Library

    Feth, Lawrence

    1997-01-01

    The long range goal of this project was the understanding of human auditory processing of information conveyed by complex, time varying signals such as speech, music or important environmental sounds...

  16. Streaming simplification of tetrahedral meshes.

    Science.gov (United States)

    Vo, Huy T; Callahan, Steven P; Lindstrom, Peter; Pascucci, Valerio; Silva, Cláudio T

    2007-01-01

    Unstructured tetrahedral meshes are commonly used in scientific computing to represent scalar, vector, and tensor fields in three dimensions. Visualization of these meshes can be difficult to perform interactively due to their size and complexity. By reducing the size of the data, we can accomplish real-time visualization necessary for scientific analysis. We propose a two-step approach for streaming simplification of large tetrahedral meshes. Our algorithm arranges the data on disk in a streaming, I/O-efficient format that allows coherent access to the tetrahedral cells. A quadric-based simplification is sequentially performed on small portions of the mesh in-core. Our output is a coherent streaming mesh which facilitates future processing. Our technique is fast, produces high quality approximations, and operates out-of-core to process meshes too large for main memory.
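
    The two-pass streaming structure described above can be sketched in a few lines. The sketch below is not the authors' implementation: it streams tetrahedra in fixed-size batches and applies a toy in-core reduction (uniform-grid vertex clustering) to each batch in place of the paper's quadric-based simplification, and all names and parameters are illustrative.

    # Minimal sketch of a streaming simplification pass, assuming the mesh
    # arrives as batches of tetrahedra (4 vertex coordinates each). The
    # in-core step here is simple vertex clustering on a uniform grid, a
    # stand-in for the quadric-based simplification used in the paper.
    from typing import Iterable, List, Tuple

    Vertex = Tuple[float, float, float]
    Tet = Tuple[Vertex, Vertex, Vertex, Vertex]

    def cluster_key(v: Vertex, cell: float) -> Tuple[int, int, int]:
        # Snap a vertex to the uniform grid cell it falls in.
        return (int(v[0] // cell), int(v[1] // cell), int(v[2] // cell))

    def simplify_batch(tets: List[Tet], cell: float) -> List[Tet]:
        # Replace every vertex by its grid-cell representative and drop
        # tetrahedra that degenerate (two corners in the same cell).
        out = []
        for tet in tets:
            keys = [cluster_key(v, cell) for v in tet]
            if len(set(keys)) == 4:
                reps = tuple((k[0] * cell, k[1] * cell, k[2] * cell) for k in keys)
                out.append(reps)
        return out

    def stream_simplify(batches: Iterable[List[Tet]], cell: float) -> Iterable[List[Tet]]:
        # Out-of-core loop: only one batch is resident in memory at a time,
        # and simplified batches are emitted in their original (coherent) order.
        for batch in batches:
            yield simplify_batch(batch, cell)

    A real pipeline would additionally merge cluster representatives shared across batch boundaries and write the output in a streaming mesh format so that downstream tools can continue to process it coherently.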

  17. Sadness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R

    2014-02-01

    Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  18. Switching auditory attention using spatial and non-spatial features recruits different cortical networks.

    Science.gov (United States)

    Larson, Eric; Lee, Adrian K C

    2014-01-01

    Switching attention between different stimuli of interest based on particular task demands is important in many everyday settings. In audition in particular, switching attention between different speakers of interest that are talking concurrently is often necessary for effective communication. Recently, it has been shown by multiple studies that auditory selective attention suppresses the representation of unwanted streams in auditory cortical areas in favor of the target stream of interest. However, the neural processing that guides this selective attention process is not well understood. Here we investigated the cortical mechanisms involved in switching attention based on two different types of auditory features. By combining magneto- and electro-encephalography (M-EEG) with an anatomical MRI constraint, we examined the cortical dynamics involved in switching auditory attention based on either spatial or pitch features. We designed a paradigm where listeners were cued in the beginning of each trial to switch or maintain attention halfway through the presentation of concurrent target and masker streams. By allowing listeners time to switch during a gap in the continuous target and masker stimuli, we were able to isolate the mechanisms involved in endogenous, top-down attention switching. Our results show a double dissociation between the involvement of right temporoparietal junction (RTPJ) and the left inferior parietal supramarginal part (LIPSP) in tasks requiring listeners to switch attention based on space and pitch features, respectively, suggesting that switching attention based on these features involves at least partially separate processes or behavioral strategies. © 2013 Elsevier Inc. All rights reserved.

  19. Tuning In to Sound: Frequency-Selective Attentional Filter in Human Primary Auditory Cortex

    Science.gov (United States)

    Da Costa, Sandra; van der Zwaag, Wietske; Miller, Lee M.; Clarke, Stephanie

    2013-01-01

    Cocktail parties, busy streets, and other noisy environments pose a difficult challenge to the auditory system: how to focus attention on selected sounds while ignoring others? Neurons of primary auditory cortex, many of which are sharply tuned to sound frequency, could help solve this problem by filtering selected sound information based on frequency-content. To investigate whether this occurs, we used high-resolution fMRI at 7 tesla to map the fine-scale frequency-tuning (1.5 mm isotropic resolution) of primary auditory areas A1 and R in six human participants. Then, in a selective attention experiment, participants heard low (250 Hz)- and high (4000 Hz)-frequency streams of tones presented at the same time (dual-stream) and were instructed to focus attention onto one stream versus the other, switching back and forth every 30 s. Attention to low-frequency tones enhanced neural responses within low-frequency-tuned voxels relative to high, and when attention switched the pattern quickly reversed. Thus, like a radio, human primary auditory cortex is able to tune into attended frequency channels and can switch channels on demand. PMID:23365225

  20. A chaotic stream cipher and the usage in video protection

    International Nuclear Information System (INIS)

    Lian Shiguo; Sun Jinsheng; Wang Jinwei; Wang Zhiquan

    2007-01-01

    In this paper, a chaotic stream cipher is constructed and used to encrypt video data selectively. The stream cipher, based on a discrete piecewise linear chaotic map, satisfies the security requirements of cipher design. The video encryption scheme based on the stream cipher is perceptually secure, efficient and format compliant, which makes it suitable for practical video protection. The performance of the video encryption scheme demonstrates the stream cipher's practicality.
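
    As a rough illustration of the construction described above (not the authors' cipher), the sketch below iterates a piecewise linear chaotic map, here the skew tent map, from a secret initial state to produce a keystream that is XORed with selected bytes of the video payload; the seed, breakpoint and byte-extraction rule are illustrative and the sketch is not cryptographically vetted.

    # Minimal sketch of a stream cipher driven by a piecewise linear chaotic
    # map (skew tent map). This illustrates the general construction only;
    # the parameters are illustrative, not the scheme analysed in the paper.

    def skew_tent(x: float, p: float) -> float:
        # Piecewise linear chaotic map on (0, 1) with breakpoint p.
        return x / p if x < p else (1.0 - x) / (1.0 - p)

    def keystream(seed: float, p: float, n: int):
        x = seed
        for _ in range(n):
            x = skew_tent(x, p)
            yield int(x * 256) & 0xFF  # quantise the state to one byte

    def encrypt(data: bytes, seed: float = 0.3615, p: float = 0.499) -> bytes:
        # XOR is its own inverse, so the same call decrypts.
        return bytes(b ^ k for b, k in zip(data, keystream(seed, p, len(data))))

    if __name__ == "__main__":
        payload = b"selected I-frame bytes"
        assert encrypt(encrypt(payload)) == payload

    Because XOR is involutive, applying the same keystream twice restores the plaintext, and encrypting only chosen byte ranges is what keeps the scheme efficient and format compliant.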

  1. Learning-dependent plasticity in human auditory cortex during appetitive operant conditioning.

    Science.gov (United States)

    Puschmann, Sebastian; Brechmann, André; Thiel, Christiane M

    2013-11-01

    Animal experiments provide evidence that learning to associate an auditory stimulus with a reward causes representational changes in auditory cortex. However, most studies did not investigate the temporal formation of learning-dependent plasticity during the task but rather compared auditory cortex receptive fields before and after conditioning. We here present a functional magnetic resonance imaging study on learning-related plasticity in the human auditory cortex during operant appetitive conditioning. Participants had to learn to associate a specific category of frequency-modulated tones with a reward. Only participants who learned this association developed learning-dependent plasticity in left auditory cortex over the course of the experiment. No differential responses to reward predicting and nonreward predicting tones were found in auditory cortex in nonlearners. In addition, learners showed similar learning-induced differential responses to reward-predicting and nonreward-predicting tones in the ventral tegmental area and the nucleus accumbens, two core regions of the dopaminergic neurotransmitter system. This may indicate a dopaminergic influence on the formation of learning-dependent plasticity in auditory cortex, as it has been suggested by previous animal studies. Copyright © 2012 Wiley Periodicals, Inc.

  2. First Branchial Cleft Fistula Associated with External Auditory Canal Stenosis and Middle Ear Cholesteatoma

    Science.gov (United States)

    Abdollahi Fakhim, Shahin; Naderpoor, Masoud; Mousaviagdas, Mehrnoosh

    2014-01-01

    Introduction: First branchial cleft anomalies manifest with duplication of the external auditory canal. Case Report: This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. Conclusion: It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear. PMID:25320705

  3. First branchial cleft fistula associated with external auditory canal stenosis and middle ear cholesteatoma.

    Science.gov (United States)

    Abdollahi Fakhim, Shahin; Naderpoor, Masoud; Mousaviagdas, Mehrnoosh

    2014-10-01

    First branchial cleft anomalies manifest with duplication of the external auditory canal. This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear.

  4. The assessment of auditory function in CSWS: lessons from long-term outcome.

    Science.gov (United States)

    Metz-Lutz, Marie-Noëlle

    2009-08-01

    In Landau-Kleffner syndrome (LKS), the prominent and often first symptom is auditory verbal agnosia, which may affect nonverbal sounds. It was suggested early on that the subsequent decline of speech expression might result from defective auditory analysis of the patient's own speech. Indeed, despite normal hearing levels, the children behave as if they were deaf, and very rapidly speech expression deteriorates and leads to the receptive aphasia typical of LKS. The association of auditory agnosia, more or less restricted to speech, with severe language decay prompted numerous studies aimed at specifying the defect in auditory processing and its pathophysiology. Long-term follow-up studies have addressed the issue of the outcome of verbal auditory processing and the development of verbal working memory capacities following the deprivation of phonologic input during the critical period of language development. Based on a review of neurophysiologic and neuropsychological studies of auditory and phonologic disorders published over the last 20 years, we discuss the association of verbal agnosia and speech production decay, and try to explain the phonologic working memory deficit in the late outcome of LKS within the Hickok and Poeppel dual-stream model of speech processing.

  5. A Review of Auditory Prediction and Its Potential Role in Tinnitus Perception.

    Science.gov (United States)

    Durai, Mithila; O'Keeffe, Mary G; Searchfield, Grant D

    2018-06-01

    The precise mechanisms underlying tinnitus perception and distress are still not fully understood. A recent proposition is that auditory prediction errors and related memory representations may play a role in driving tinnitus perception. It is of interest to further explore this. To obtain a comprehensive narrative synthesis of current research in relation to auditory prediction and its potential role in tinnitus perception and severity. A narrative review methodological framework was followed. The key words Prediction Auditory, Memory Prediction Auditory, Tinnitus AND Memory, Tinnitus AND Prediction in Article Title, Abstract, and Keywords were extensively searched on four databases: PubMed, Scopus, SpringerLink, and PsychINFO. All study types were selected from 2000-2016 (end of 2016) and had the following exclusion criteria applied: minimum age of participants article not available in English. Reference lists of articles were reviewed to identify any further relevant studies. Articles were short listed based on title relevance. After reading the abstracts and with consensus made between coauthors, a total of 114 studies were selected for charting data. The hierarchical predictive coding model based on the Bayesian brain hypothesis, attentional modulation and top-down feedback serves as the fundamental framework in current literature for how auditory prediction may occur. Predictions are integral to speech and music processing, as well as in sequential processing and identification of auditory objects during auditory streaming. Although deviant responses are observable from middle latency time ranges, the mismatch negativity (MMN) waveform is the most commonly studied electrophysiological index of auditory irregularity detection. However, limitations may apply when interpreting findings because of the debatable origin of the MMN and its restricted ability to model real-life, more complex auditory phenomenon. Cortical oscillatory band activity may act as

  6. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke; they are mostly described after lesions of the brain stem and very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross-sectional study, 641 stroke patients were followed in the period 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have auditory hallucinations after cortical stroke. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact that auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  7. Attentional Modulation of Auditory Steady-State Responses

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex. PMID:25334021

  8. Attentional modulation of auditory steady-state responses.

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.

  9. Attentional modulation of auditory steady-state responses.

    Directory of Open Access Journals (Sweden)

    Yatin Mahajan

    Full Text Available Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.
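
    The frequency-tagging stimulus itself is simple to write down: each noise stream is multiplied by a sinusoidal envelope at its own tagging rate. The sketch below generates one such dichotic pair; the sample rate, duration and modulation depth are illustrative rather than taken from the study.

    # Sketch of the frequency-tagging stimulus construction: two white-noise
    # carriers, each amplitude-modulated at its own tagging rate, one per ear.
    import numpy as np

    def am_noise(f_mod: float, dur: float, fs: int = 44100, depth: float = 1.0,
                 rng=np.random.default_rng(0)) -> np.ndarray:
        t = np.arange(int(dur * fs)) / fs
        carrier = rng.standard_normal(t.size)               # white-noise carrier
        envelope = (1.0 + depth * np.sin(2 * np.pi * f_mod * t)) / 2.0
        return carrier * envelope

    left = am_noise(16.0, dur=2.0)    # 16 Hz tag, e.g. left ear
    right = am_noise(23.5, dur=2.0)   # 23.5 Hz tag, e.g. right ear
    stereo = np.stack([left, right], axis=1)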

  10. Pre-Attentive Auditory Processing of Lexicality

    Science.gov (United States)

    Jacobsen, Thomas; Horvath, Janos; Schroger, Erich; Lattner, Sonja; Widmann, Andreas; Winkler, Istvan

    2004-01-01

    The effects of lexicality on auditory change detection based on auditory sensory memory representations were investigated by presenting oddball sequences of repeatedly presented stimuli, while participants ignored the auditory stimuli. In a cross-linguistic study of Hungarian and German participants, stimulus sequences were composed of words that…

  11. Feature Assignment in Perception of Auditory Figure

    Science.gov (United States)

    Gregg, Melissa K.; Samuel, Arthur G.

    2012-01-01

    Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory…

  12. Auditory-visual integration in fields of the auditory cortex.

    Science.gov (United States)

    Kubota, Michinori; Sugimoto, Shunji; Hosokawa, Yutaka; Ojima, Hisayuki; Horikawa, Junsei

    2017-03-01

    While multimodal interactions have been known to exist in the early sensory cortices, the response properties and spatiotemporal organization of these interactions are poorly understood. To elucidate the characteristics of multimodal sensory interactions in the cerebral cortex, neuronal responses to visual stimuli with or without auditory stimuli were investigated in core and belt fields of guinea pig auditory cortex using real-time optical imaging with a voltage-sensitive dye. On average, visual responses consisted of short excitation followed by long inhibition. Although visual responses were observed in core and belt fields, there were regional and temporal differences in responses. The most salient visual responses were observed in the caudal belt fields, especially posterior (P) and dorsocaudal belt (DCB) fields. Visual responses emerged first in fields P and DCB and then spread rostroventrally to core and ventrocaudal belt (VCB) fields. Absolute values of positive and negative peak amplitudes of visual responses were both larger in fields P and DCB than in core and VCB fields. When combined visual and auditory stimuli were applied, fields P and DCB were more inhibited than core and VCB fields beginning approximately 110 ms after stimuli. Correspondingly, differences between responses to auditory stimuli alone and combined audiovisual stimuli became larger in fields P and DCB than in core and VCB fields after approximately 110 ms after stimuli. These data indicate that visual influences are most salient in fields P and DCB, which manifest mainly as inhibition, and that they enhance differences in auditory responses among fields. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. StreamCat

    Data.gov (United States)

    U.S. Environmental Protection Agency — The StreamCat Dataset provides summaries of natural and anthropogenic landscape features for ~2.65 million streams, and their associated catchments, within the...

  14. Prioritized Contact Transport Stream

    Science.gov (United States)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.
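
    A hedged sketch of the prioritise-and-trim step described above: identified contact records receive a priority, records falling below a cut-off are stripped of their bulky fields, and the surviving data form a contact stream. The field names, priority rule and cut-off are illustrative, not those of the patent.

    # Illustrative prioritisation and trimming of contact records before streaming.
    from dataclasses import dataclass, field
    from typing import Iterable, List

    @dataclass
    class Contact:
        contact_id: int
        cls: str                 # output of the classification process
        confidence: float        # output of the identification process
        raw_features: List[float] = field(default_factory=list)
        priority: int = 0

    def prioritize(contacts: Iterable[Contact]) -> List[Contact]:
        ranked = sorted(contacts, key=lambda c: c.confidence, reverse=True)
        for rank, c in enumerate(ranked):
            c.priority = rank          # 0 = highest priority
        return ranked

    def trim(contacts: List[Contact], keep_features_for_top: int) -> List[Contact]:
        # Remove bulky data from records whose priority falls below the cut-off,
        # e.g. when available bandwidth is scarce.
        for c in contacts:
            if c.priority >= keep_features_for_top:
                c.raw_features = []
        return contacts

    def contact_stream(contacts: List[Contact]) -> Iterable[Contact]:
        # First (and here only) contact stream of the transport stream.
        yield from contacts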

  15. Auralization of CFD Vorticity Using an Auditory Illusion

    Science.gov (United States)

    Volpe, C. R.

    2005-12-01

    One way in which scientists and engineers interpret large quantities of data is through a process called visualization, i.e. generating graphical images that capture essential characteristics and highlight interesting relationships. Another approach, which has received far less attention, is to present complex information with sound. This approach, called "auralization" or "sonification", is the auditory analog of visualization. Early work in data auralization frequently involved directly mapping some variable in the data to a sound parameter, such as pitch or volume. Multi-variate data could be auralized by mapping several variables to several sound parameters simultaneously. A clear drawback of this approach is the limited practical range of sound parameters that can be presented to human listeners without exceeding their range of perception or comfort. A software auralization system built upon an existing visualization system is briefly described. This system incorporates an aural presentation synchronously and interactively with an animated scientific visualization, so that alternate auralization techniques can be investigated. One such alternate technique involves auditory illusions: sounds which trick the listener into perceiving something other than what is actually being presented. This software system will be used to present an auditory illusion, known for decades among cognitive psychologists, which produces a sound that seems to ascend or descend endlessly in pitch. The applicability of this illusion for presenting Computational Fluid Dynamics data will be demonstrated. CFD data is frequently visualized with thin stream-lines, but thicker stream-ribbons and stream-tubes can also be used, which rotate to convey fluid vorticity. But a purely graphical presentation can yield drawbacks of its own. Thicker stream-tubes can be self-obscuring, and can obscure other scene elements as well, thus motivating a different approach, such as using sound. Naturally
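
    The endlessly ascending or descending pitch referred to here is commonly synthesised as a Shepard-Risset glissando: several octave-spaced partials glide in log-frequency while a fixed bell-shaped amplitude envelope hides partials entering and leaving the audible range. The sketch below is one standard construction with illustrative parameters, not the system described in the abstract; mapping vorticity to the glide rate or direction would be one way to drive it from CFD data.

    # Sketch of a Shepard-Risset glissando: octave-spaced partials glide upward
    # in log-frequency; a fixed bell-shaped amplitude envelope fades partials
    # in and out at the edges, so the pitch seems to rise without end.
    import numpy as np

    def risset_glissando(dur=10.0, fs=44100, f_min=40.0, n_octaves=6, cycle=5.0):
        t = np.arange(int(dur * fs)) / fs
        out = np.zeros_like(t)
        for k in range(n_octaves):
            # Fractional octave position of partial k, wrapping every `cycle` seconds.
            pos = (k + t / cycle) % n_octaves
            freq = f_min * 2.0 ** pos                       # instantaneous frequency
            phase = 2 * np.pi * np.cumsum(freq) / fs        # integrate frequency -> phase
            amp = np.sin(np.pi * pos / n_octaves) ** 2      # fades in/out at the edges
            out += amp * np.sin(phase)
        return out / np.max(np.abs(out))

    signal = risset_glissando()   # write to a WAV file to listen, e.g. with soundfile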

  16. Auditory and audio-visual processing in patients with cochlear, auditory brainstem, and auditory midbrain implants: An EEG study.

    Science.gov (United States)

    Schierholz, Irina; Finke, Mareike; Kral, Andrej; Büchner, Andreas; Rach, Stefan; Lenarz, Thomas; Dengler, Reinhard; Sandmann, Pascale

    2017-04-01

    There is substantial variability in speech recognition ability across patients with cochlear implants (CIs), auditory brainstem implants (ABIs), and auditory midbrain implants (AMIs). To better understand how this variability is related to central processing differences, the current electroencephalography (EEG) study compared hearing abilities and auditory-cortex activation in patients with electrical stimulation at different sites of the auditory pathway. Three different groups of patients with auditory implants (Hannover Medical School; ABI: n = 6, CI: n = 6; AMI: n = 2) performed a speeded response task and a speech recognition test with auditory, visual, and audio-visual stimuli. Behavioral performance and cortical processing of auditory and audio-visual stimuli were compared between groups. ABI and AMI patients showed prolonged response times on auditory and audio-visual stimuli compared with NH listeners and CI patients. This was confirmed by prolonged N1 latencies and reduced N1 amplitudes in ABI and AMI patients. However, patients with central auditory implants showed a remarkable gain in performance when visual and auditory input was combined, in both speech and non-speech conditions, which was reflected by a strong visual modulation of auditory-cortex activation in these individuals. In sum, the results suggest that the behavioral improvement for audio-visual conditions in central auditory implant patients is based on enhanced audio-visual interactions in the auditory cortex. These findings may have important implications for the optimization of electrical stimulation and rehabilitation strategies in patients with central auditory prostheses. Hum Brain Mapp 38:2206-2225, 2017. © 2017 Wiley Periodicals, Inc.

  17. Delayed Auditory Feedback and Movement

    Science.gov (United States)

    Pfordresher, Peter Q.; Dalla Bella, Simone

    2011-01-01

    It is well known that timing of rhythm production is disrupted by delayed auditory feedback (DAF), and that disruption varies with delay length. We tested the hypothesis that disruption depends on the state of the movement trajectory at the onset of DAF. Participants tapped isochronous rhythms at a rate specified by a metronome while hearing DAF…

  18. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of mutations in the otoferlin gene in patients with and without auditory neuropathy. This original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the mutation sites in the otoferlin gene were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals had mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  19. Neural correlates of auditory short-term memory in rostral superior temporal cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer; Yin, Pingbo

    2014-12-01

    Auditory short-term memory (STM) in the monkey is less robust than visual STM and may depend on a retained sensory trace, which is likely to reside in the higher-order cortical areas of the auditory ventral stream. We recorded from the rostral superior temporal cortex as monkeys performed serial auditory delayed match-to-sample (DMS). A subset of neurons exhibited modulations of their firing rate during the delay between sounds, during the sensory response, or during both. This distributed subpopulation carried a predominantly sensory signal modulated by the mnemonic context of the stimulus. Excitatory and suppressive effects on match responses were dissociable in their timing and in their resistance to sounds intervening between the sample and match. Like the monkeys' behavioral performance, these neuronal effects differ from those reported in the same species during visual DMS, suggesting different neural mechanisms for retaining dynamic sounds and static images in STM. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan

    2007-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable
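
    As an informal analogue of the notion used above (not the authors' rewriting-based decision algorithm), lazy generators make the distinction concrete: a definition is productive when every next element of the stream arrives after finitely many evaluation steps.

    # Informal illustration of productivity using Python generators as lazy streams.
    # `ones` and `nats` are productive: each next element arrives after finitely
    # many steps. `bad` is not: it demands its own tail before producing anything.
    from itertools import islice

    def ones():
        while True:
            yield 1

    def nats():
        # nats = 0 : map (+1) nats, unfolded iteratively so it stays productive.
        n = 0
        while True:
            yield n
            n += 1

    def bad():
        # "bad = tail(bad)" -- consumes an element before any is ever produced.
        stream = bad()
        next(stream)                # recurses without bound; nothing is yielded
        yield from stream

    print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4] -- productive
    # list(islice(bad(), 1))        # never produces an element (RecursionError in practice)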

  1. Productivity of stream definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.

    2008-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas
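
    A minimal illustration of what 'productive' means here, sketched with Python generators rather than the term rewriting formalism used in the paper: a definition is productive when any finite prefix of the stream can actually be computed.

    ```python
    from itertools import islice

    def ones():
        # Productive: every element is emitted after a finite amount of work.
        while True:
            yield 1

    def nats():
        # Productive corecursive definition, in the spirit of "nats = 0 : map (+1) nats":
        # each element depends only on elements that have already been produced.
        yield 0
        for n in nats():
            yield n + 1

    # A non-productive definition would be the analogue of "bad = tail bad":
    # its first element depends on itself, so evaluation never emits anything.

    print(list(islice(ones(), 5)))  # [1, 1, 1, 1, 1]
    print(list(islice(nats(), 5)))  # [0, 1, 2, 3, 4]
    ```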

  2. Short-term plasticity in auditory cognition.

    Science.gov (United States)

    Jääskeläinen, Iiro P; Ahveninen, Jyrki; Belliveau, John W; Raij, Tommi; Sams, Mikko

    2007-12-01

    Converging lines of evidence suggest that auditory system short-term plasticity can enable several perceptual and cognitive functions that have been previously considered as relatively distinct phenomena. Here we review recent findings suggesting that auditory stimulation, auditory selective attention and cross-modal effects of visual stimulation each cause transient excitatory and (surround) inhibitory modulations in the auditory cortex. These modulations might adaptively tune hierarchically organized sound feature maps of the auditory cortex (e.g. tonotopy), thus filtering relevant sounds during rapidly changing environmental and task demands. This could support auditory sensory memory, pre-attentive detection of sound novelty, enhanced perception during selective attention, influence of visual processing on auditory perception and longer-term plastic changes associated with perceptual learning.

  3. BIT RATE SEGMENTATION MECHANISM IN DYNAMIC ADAPTIVE STREAMING OVER HTTP (DASH) FOR VIDEO STREAMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    Muhammad Audy Bazly

    2015-12-01

    This paper analyzes an Internet-based video streaming service that delivers content at variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH), which runs over the standard Hyper Text Transfer Protocol (HTTP). DASH splits a video into several segments that are then streamed individually. In the first stage, the source video is compressed to lower bit rates with an H.264 codec. The compressed video is then segmented with MP4Box, which generates streaming packets of a specified duration. These packets are described in a Media Presentation Description (MPD) manifest, the format known as MPEG-DASH. The MPEG-DASH stream is played back with a bitdash player integrated into the platform. With this scheme the video is available in several bit-rate variants, which gives client-side scalability of the streaming service. The main goal of the mechanism is smooth playback of the MPEG-DASH video on the client. Simulation results show that the MPEG-DASH-based scalable streaming scheme improves picture quality on the client side, and that buffering can be kept constant and smooth for the duration of playback.
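
    A minimal sketch of the preparation pipeline described above, assuming ffmpeg as the H.264 encoder alongside MP4Box; the file names, bit-rate ladder and 4-second segment duration are illustrative choices, and exact flags depend on the installed tool versions:

    ```python
    import subprocess

    SOURCE = "input.mp4"            # hypothetical source file
    BITRATES = ["800k", "1500k"]    # illustrative bit-rate ladder, not values from the paper

    representations = []
    for rate in BITRATES:
        out = f"rep_{rate}.mp4"
        # Stage 1: re-encode the source at a lower bit rate with H.264
        # (-an drops audio just to keep the example short).
        subprocess.run(
            ["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", rate, "-an", out],
            check=True,
        )
        representations.append(out)

    # Stage 2: segment the representations with MP4Box and emit an MPD manifest
    # (4000 ms segments, aligned to random access points).
    subprocess.run(
        ["MP4Box", "-dash", "4000", "-rap", "-out", "manifest.mpd", *representations],
        check=True,
    )
    ```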

  4. Short-term memory for auditory and visual durations: evidence for selective interference effects.

    Science.gov (United States)

    Rattat, Anne-Claire; Picard, Delphine

    2012-01-01

    The present study sought to determine the format in which visual, auditory and auditory-visual durations ranging from 400 to 600 ms are encoded and maintained in short-term memory, using suppression conditions. Participants compared two stimulus durations separated by an interval of 8 s. During this time, they performed either an articulatory suppression task, a visuospatial tracking task or no specific task at all (control condition). The results showed that the articulatory suppression task decreased recognition performance for auditory durations but not for visual or bimodal ones, whereas the visuospatial task decreased recognition performance for visual durations but not for auditory or bimodal ones. These findings support the modality-specific account of short-term memory for durations.

  5. Benthic invertebrate fauna, small streams

    Science.gov (United States)

    J. Bruce Wallace; S.L. Eggert

    2009-01-01

    Small streams (first- through third-order streams) make up >98% of the total number of stream segments and >86% of stream length in many drainage networks. Small streams occur over a wide array of climates, geology, and biomes, which influence temperature, hydrologic regimes, water chemistry, light, substrate, stream permanence, a basin's terrestrial plant...

  6. Rapid Auditory System Adaptation Using a Virtual Auditory Environment

    Directory of Open Access Journals (Sweden)

    Gaëtan Parseihian

    2011-10-01

    Various studies have highlighted plasticity of the auditory system induced by visual stimuli, which limits training to the visual field of perception. The aim of the present study is to investigate auditory system adaptation using an audio-kinesthetic platform. Participants were placed in a Virtual Auditory Environment allowing the association of the physical position of a virtual sound source with an alternate set of acoustic spectral cues, or Head-Related Transfer Function (HRTF), through the use of a tracked ball manipulated by the subject. This set-up has the advantage of not being limited to the visual field, while also offering a natural perception-action coupling through the constant awareness of one's hand position. Adaptation to non-individualized HRTFs was carried out through a spatial search game application. A total of 25 subjects participated, consisting of subjects presented with modified cues using non-individualized HRTFs and a control group using individually measured HRTFs to account for any learning effect due to the game itself. The training game lasted 12 minutes and was repeated over 3 consecutive days. Adaptation effects were measured with repeated localization tests. Results showed a significant performance improvement for vertical localization and a significant reduction in the front/back confusion rate after 3 sessions.

  7. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Auditory dysfunction is a common clinical symptom that can induce profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The numbers of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  8. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics; this represents a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms, and contributes to the processing of complex sequences, auditory scene analysis and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that the neurons employ to process the auditory scene, and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.

  9. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  10. The selective processing of emotional visual stimuli while detecting auditory targets: an ERP analysis.

    Science.gov (United States)

    Schupp, Harald T; Stockburger, Jessica; Bublatzky, Florian; Junghöfer, Markus; Weike, Almut I; Hamm, Alfons O

    2008-09-16

    Event-related potential studies revealed an early posterior negativity (EPN) for emotional compared to neutral pictures. Exploring the emotion-attention relationship, a previous study observed that a primary visual discrimination task interfered with the emotional modulation of the EPN component. To specify the locus of interference, the present study assessed the fate of selective visual emotion processing while attention is directed towards the auditory modality. While simply viewing a rapid and continuous stream of pleasant, neutral, and unpleasant pictures in one experimental condition, processing demands of a concurrent auditory target discrimination task were systematically varied in three further experimental conditions. Participants successfully performed the auditory task as revealed by behavioral performance and selected event-related potential components. Replicating previous results, emotional pictures were associated with a larger posterior negativity compared to neutral pictures. Of main interest, increasing demands of the auditory task did not modulate the selective processing of emotional visual stimuli. With regard to the locus of interference, selective emotion processing as indexed by the EPN does not seem to reflect shared processing resources of visual and auditory modality.

  11. Music training relates to the development of neural mechanisms of selective auditory attention.

    Science.gov (United States)

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.

  12. Effect of delayed auditory feedback on stuttering with and without central auditory processing disorders.

    Science.gov (United States)

    Picoloto, Luana Altran; Cardoso, Ana Cláudia Vieira; Cerqueira, Amanda Venuti; Oliveira, Cristiane Moço Canhetti de

    2017-12-07

    To verify the effect of delayed auditory feedback on the speech fluency of individuals who stutter with and without central auditory processing disorders. The participants were twenty individuals who stutter, aged 7 to 17 years, divided into two groups: the Stuttering Group with Auditory Processing Disorders (SGAPD), 10 individuals with central auditory processing disorders, and the Stuttering Group (SG), 10 individuals without central auditory processing disorders. Procedures were: fluency assessment with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF), assessment of stuttering severity, and assessment of central auditory processing (CAP). Phono Tools software was used to introduce a delay of 100 milliseconds in the auditory feedback. The Wilcoxon signed-rank test was used in the intragroup analysis and the Mann-Whitney test in the intergroup analysis. DAF caused a statistically significant reduction in the SG: in the frequency score of stuttering-like disfluencies in the Stuttering Severity Instrument analysis, in the number of blocks and repetitions of monosyllabic words, and in the frequency of stuttering-like disfluencies of duration. Delayed auditory feedback did not cause statistically significant effects on the fluency of the SGAPD, the individuals who stutter with auditory processing disorders. The effect of delayed auditory feedback on the speech fluency of individuals who stutter differed between the two groups: fluency improved only in the individuals without auditory processing disorder.
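
    For illustration only (a minimal numpy sketch, not the Phono Tools implementation used in the study), a 100 ms delayed-auditory-feedback condition amounts to shifting the recorded speech signal by the corresponding number of samples before it is played back:

    ```python
    import numpy as np

    def delay_signal(x: np.ndarray, fs: int, delay_ms: float = 100.0) -> np.ndarray:
        """Return x delayed by delay_ms milliseconds (zero-padded at the start)."""
        delay_samples = int(round(fs * delay_ms / 1000.0))
        return np.concatenate([np.zeros(delay_samples, dtype=x.dtype), x])[: len(x)]

    # Example: a 1-second, 440 Hz tone sampled at 16 kHz, delayed by 100 ms.
    fs = 16000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 440 * t).astype(np.float32)
    delayed = delay_signal(tone, fs, 100.0)
    ```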

  13. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including the Broca's language region. Furthermore, how real the hallucination that subjects experienced was depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  14. Solar wind stream interfaces

    International Nuclear Information System (INIS)

    Gosling, J.T.; Asbridge, J.R.; Bame, S.J.; Feldman, W.C.

    1978-01-01

    Measurements aboard Imp 6, 7, and 8 reveal that approximately one third of all high-speed solar wind streams observed at 1 AU contain a sharp boundary (of thickness less than approximately 4 × 10⁴ km) near their leading edge, called a stream interface, which separates plasma of distinctly different properties and origins. Identified as discontinuities across which the density drops abruptly, the proton temperature increases abruptly, and the speed rises, stream interfaces are remarkably similar in character from one stream to the next. A superposed epoch analysis of plasma data has been performed for 23 discontinuous stream interfaces observed during the interval March 1971 through August 1974. Among the results of this analysis are the following: (1) a stream interface separates what was originally thick (i.e., dense) slow gas from what was originally thin (i.e., rare) fast gas; (2) the interface is the site of a discontinuous shear in the solar wind flow in a frame of reference corotating with the sun; (3) stream interfaces occur at speeds less than 450 km s⁻¹ and close to or at the maximum of the pressure ridge at the leading edges of high-speed streams; (4) a discontinuous rise by approximately 40% in electron temperature occurs at the interface; and (5) discontinuous changes (usually rises) in alpha particle abundance and flow speed relative to the protons occur at the interface. Stream interfaces do not generally recur on successive solar rotations, even though the streams in which they are embedded often do. At distances beyond several astronomical units, stream interfaces should be bounded by forward-reverse shock pairs; three of four reverse shocks observed at 1 AU during 1971-1974 were preceded within approximately 1 day by stream interfaces. Our observations suggest that many streams close to the sun are bounded on all sides by large radial velocity shears separating rapidly expanding plasma from more slowly expanding plasma

  15. Laterality of basic auditory perception.

    Science.gov (United States)

    Sininger, Yvonne S; Bhatara, Anjali

    2012-01-01

    Laterality (left-right ear differences) of auditory processing was assessed using basic auditory skills: (1) gap detection, (2) frequency discrimination, and (3) intensity discrimination. Stimuli included tones (500, 1000, and 4000 Hz) and wide-band noise presented monaurally to each ear of typical adult listeners. The hypothesis tested was that processing of tonal stimuli would be enhanced by left ear (LE) stimulation and noise by right ear (RE) presentations. To investigate the limits of laterality by (1) spectral width, a narrow-band noise (NBN) of 450-Hz bandwidth was evaluated using intensity discrimination, and (2) stimulus duration, 200, 500, and 1000 ms duration tones were evaluated using frequency discrimination. A left ear advantage (LEA) was demonstrated with tonal stimuli in all experiments, but an expected REA for noise stimuli was not found. The NBN stimulus demonstrated no LEA and was characterised as a noise. No change in laterality was found with changes in stimulus durations. The LEA for tonal stimuli is felt to be due to more direct connections between the left ear and the right auditory cortex, which has been shown to be primary for spectral analysis and tonal processing. The lack of a REA for noise stimuli is unexplained. Sex differences in laterality for noise stimuli were noted but were not statistically significant. This study did establish a subtle but clear pattern of LEA for processing of tonal stimuli.

  16. Auditory Motion Elicits a Visual Motion Aftereffect

    Directory of Open Access Journals (Sweden)

    Christopher C. Berger

    2016-12-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect—an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  17. Auditory Motion Elicits a Visual Motion Aftereffect.

    Science.gov (United States)

    Berger, Christopher C; Ehrsson, H Henrik

    2016-01-01

    The visual motion aftereffect is a visual illusion in which exposure to continuous motion in one direction leads to a subsequent illusion of visual motion in the opposite direction. Previous findings have been mixed with regard to whether this visual illusion can be induced cross-modally by auditory stimuli. Based on research on multisensory perception demonstrating the profound influence auditory perception can have on the interpretation and perceived motion of visual stimuli, we hypothesized that exposure to auditory stimuli with strong directional motion cues should induce a visual motion aftereffect. Here, we demonstrate that horizontally moving auditory stimuli induced a significant visual motion aftereffect-an effect that was driven primarily by a change in visual motion perception following exposure to leftward moving auditory stimuli. This finding is consistent with the notion that visual and auditory motion perception rely on at least partially overlapping neural substrates.

  18. Acoustic streaming of a sharp edge.

    Science.gov (United States)

    Ovchinnikov, Mikhail; Zhou, Jianbo; Yalamanchili, Satish

    2014-07-01

    Anomalous acoustic streaming is observed emanating from sharp edges of solid bodies that are vibrating in fluids. The streaming velocities can be orders of magnitude higher than expected from the Rayleigh streaming at similar amplitudes of vibration. Acoustic velocity of fluid relative to a solid body diverges at a sharp edge, giving rise to a localized time-independent body force acting on the fluid. This force results in a formation of a localized jet. Two-dimensional numerical simulations are performed to predict acoustic streaming for low amplitude vibration using two methods: (1) Steady-state solution utilizing perturbation theory and (2) direct transient solution of the Navier-Stokes equations. Both analyses agree with each other and correctly predict the streaming of a sharp-edged vibrating blade measured experimentally. The origin of the streaming can be attributed to the centrifugal force of the acoustic fluid flow around a sharp edge. The dependence of this acoustic streaming on frequency and velocity is examined using dimensional analysis. The dependence law is devised and confirmed by numerical simulations.

  19. Inventory of miscellaneous streams

    International Nuclear Information System (INIS)

    Lueck, K.J.

    1995-09-01

    On December 23, 1991, the US Department of Energy, Richland Operations Office (RL) and the Washington State Department of Ecology (Ecology) agreed to adhere to the provisions of the Department of Ecology Consent Order. The Consent Order lists the regulatory milestones for liquid effluent streams at the Hanford Site to comply with the permitting requirements of Washington Administrative Code. The RL provided the US Congress a Plan and Schedule to discontinue disposal of contaminated liquid effluent into the soil column on the Hanford Site. The plan and schedule document contained a strategy for the implementation of alternative treatment and disposal systems. This strategy included prioritizing the streams into two phases. The Phase 1 streams were considered to be higher priority than the Phase 2 streams. The actions recommended for the Phase 1 and 2 streams in the two reports were incorporated in the Hanford Federal Facility Agreement and Consent Order. Miscellaneous Streams are those liquid effluents streams identified within the Consent Order that are discharged to the ground but are not categorized as Phase 1 or Phase 2 Streams. This document consists of an inventory of the liquid effluent streams being discharged into the Hanford soil column

  20. Hydrography - Streams and Shorelines

    Data.gov (United States)

    California Natural Resource Agency — The hydrography layer consists of flowing waters (rivers and streams), standing waters (lakes and ponds), and wetlands -- both natural and manmade. Two separate...

  1. Research Progresses of Halo Streams in the Solar Neighborhood

    Science.gov (United States)

    Xi-long, Liang; Jing-kun, Zhao; Yu-qin, Chen; Gang, Zhao

    2018-01-01

    Stellar streams originating from the Galactic halo can be detected when they pass through the solar neighborhood, and they still retain some information from the time of their birth. Thus, the investigation of halo streams in the solar neighborhood is very important for understanding the formation and evolution of our Galaxy. In this paper, research on halo streams in the solar neighborhood is briefly reviewed. We introduce the methods used to detect halo streams and identify their member stars, summarize the progress made in observing the member stars of halo streams and in studying their origins, describe in detail how the origins of halo streams in the solar neighborhood can be analyzed by means of numerical simulations and chemical abundances, and finally discuss the prospects of LAMOST and GAIA for research on halo streams in the solar neighborhood.

  2. Functional mapping of the primate auditory system.

    Science.gov (United States)

    Poremba, Amy; Saunders, Richard C; Crane, Alison M; Cook, Michelle; Sokoloff, Louis; Mishkin, Mortimer

    2003-01-24

    Cerebral auditory areas were delineated in the awake, passively listening, rhesus monkey by comparing the rates of glucose utilization in an intact hemisphere and in an acoustically isolated contralateral hemisphere of the same animal. The auditory system defined in this way occupied large portions of cerebral tissue, an extent probably second only to that of the visual system. Cortically, the activated areas included the entire superior temporal gyrus and large portions of the parietal, prefrontal, and limbic lobes. Several auditory areas overlapped with previously identified visual areas, suggesting that the auditory system, like the visual system, contains separate pathways for processing stimulus quality, location, and motion.

  3. Auditory Modeling for Noisy Speech Recognition

    National Research Council Canada - National Science Library

    2000-01-01

    ... digital filtering for noise cancellation which interfaces to speech recognition software. It uses auditory features in speech recognition training, and provides applications to multilingual spoken language translation...

  4. Auditory prediction during speaking and listening.

    Science.gov (United States)

    Sato, Marc; Shiller, Douglas M

    2018-02-02

    In the present EEG study, the role of auditory prediction in speech was explored through the comparison of auditory cortical responses during active speaking and passive listening to the same acoustic speech signals. Two manipulations of sensory prediction accuracy were used during the speaking task: (1) a real-time change in vowel F1 feedback (reducing prediction accuracy relative to unaltered feedback) and (2) presenting a stable auditory target rather than a visual cue to speak (enhancing auditory prediction accuracy during baseline productions, and potentially enhancing the perturbing effect of altered feedback). While subjects compensated for the F1 manipulation, no difference between the auditory-cue and visual-cue conditions was found. Under visually-cued conditions, reduced N1/P2 amplitude was observed during speaking vs. listening, reflecting a motor-to-sensory prediction. In addition, a significant correlation was observed between the magnitude of the behavioral compensatory F1 response and the magnitude of this speaking-induced suppression (SIS) for P2 during the altered auditory feedback phase, where a stronger compensatory decrease in F1 was associated with a stronger SIS effect. Finally, under the auditory-cued condition, an auditory repetition-suppression effect was observed in N1/P2 amplitude during the listening task but not active speaking, suggesting that auditory predictive processes during speaking and passive listening are functionally distinct. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  6. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether there exists a difference/relation in the way four different memory skills (memory score, sequencing score, memory span, & sequencing span) are processed through the auditory modality, visual modality and combined modalities. Four memory skills were evaluated on 30 typically developing children aged 7 years and 8 years across three modality conditions (auditory, visual, & auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality compared to the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores as well as for the memory and the sequencing span. A good agreement was seen between the different modality conditions that were studied (auditory, visual, & auditory-visual) for the different memory skills measures (memory scores, sequencing scores, memory span, & sequencing span). A relatively lower agreement was noted only between the auditory and visual modalities as well as between the visual and auditory-visual modality conditions for the memory scores, measured using Bland-Altman plots. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual as well as combined modalities. The study supports the view that the performance of children on different memory skills was better through the auditory modality compared to the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre

  8. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  9. A right-ear bias of auditory selective attention is evident in alpha oscillations.

    Science.gov (United States)

    Payne, Lisa; Rogers, Chad S; Wingfield, Arthur; Sekuler, Robert

    2017-04-01

    Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening. © 2016 Society for Psychophysiological Research.
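
    For illustration, the alpha-power lateralization measure referred to above can be sketched as a generic computation on two EEG channels; the channel choice, sampling rate and Welch parameters below are assumptions, not the authors' analysis pipeline:

    ```python
    import numpy as np
    from scipy.signal import welch

    def alpha_power(x: np.ndarray, fs: float, band=(8.0, 14.0)) -> float:
        """Integrate the Welch power spectrum of x over the alpha band."""
        f, pxx = welch(x, fs=fs, nperseg=int(2 * fs))
        mask = (f >= band[0]) & (f <= band[1])
        return np.trapz(pxx[mask], f[mask])

    def alpha_lateralization(left: np.ndarray, right: np.ndarray, fs: float) -> float:
        """Positive values indicate more alpha power over the right hemisphere."""
        pl, pr = alpha_power(left, fs), alpha_power(right, fs)
        return (pr - pl) / (pr + pl)

    # Example with synthetic data standing in for single-trial EEG epochs.
    fs = 250.0
    rng = np.random.default_rng(0)
    left_chan = rng.standard_normal(int(4 * fs))
    right_chan = rng.standard_normal(int(4 * fs))
    print(alpha_lateralization(left_chan, right_chan, fs))
    ```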

  10. Auditory distraction and serial memory: The avoidable and the ineluctable

    Directory of Open Access Journals (Sweden)

    Dylan M Jones

    2010-01-01

    One mental activity that is very vulnerable to auditory distraction is serial recall. This review of the contemporary findings relating to serial recall charts the key determinants of distraction. It is evident that there is one form of distraction that is a joint product of the cognitive characteristics of the task and of the obligatory cognitive processing of the sound. For sequences of sound, distraction appears to be an ineluctable product of similarity-of-process, specifically, the serial order processing of the visually presented items and the serial order coding that is the by-product of the streaming of the sound. However, recently emerging work shows that the distraction from a single sound (one deviating from a prevailing sequence) results in attentional capture and is qualitatively distinct from that of a sequence in being restricted in its action to encoding, not to rehearsal of list members. Capture is also sensitive to the sensory task load, suggesting that it is subject to top-down control and therefore avoidable. These two forms of distraction, conflict of process and attentional capture, may be two consequences of auditory perceptual organization processes that serve to strike the optimal balance between attentional selectivity and distractibility.

  11. Narrow, duplicated internal auditory canal

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, T. [Servico de Neurorradiologia, Hospital Garcia de Orta, Avenida Torrado da Silva, 2801-951, Almada (Portugal); Shayestehfar, B. [Department of Radiology, UCLA Oliveview School of Medicine, Los Angeles, California (United States); Lufkin, R. [Department of Radiology, UCLA School of Medicine, Los Angeles, California (United States)

    2003-05-01

    A narrow internal auditory canal (IAC) constitutes a relative contraindication to cochlear implantation because it is associated with aplasia or hypoplasia of the vestibulocochlear nerve or its cochlear branch. We report an unusual case of a narrow, duplicated IAC, divided by a bony septum into a superior relatively large portion and an inferior stenotic portion, in which we could identify only the facial nerve. This case adds support to the association between a narrow IAC and aplasia or hypoplasia of the vestibulocochlear nerve. The normal facial nerve argues against the hypothesis that the narrow IAC is the result of a primary bony defect which inhibits the growth of the vestibulocochlear nerve. (orig.)

  12. Effectiveness of auditory and tactile crossmodal cues in a dual-task visual and auditory scenario.

    Science.gov (United States)

    Hopkins, Kevin; Kass, Steven J; Blalock, Lisa Durrance; Brill, J Christopher

    2017-05-01

    In this study, we examined how spatially informative auditory and tactile cues affected participants' performance on a visual search task while they simultaneously performed a secondary auditory task. Visual search task performance was assessed via reaction time and accuracy. Tactile and auditory cues provided the approximate location of the visual target within the search display. The inclusion of tactile and auditory cues improved performance in comparison to the no-cue baseline conditions. In comparison to the no-cue conditions, both tactile and auditory cues resulted in faster response times in the visual search only (single task) and visual-auditory (dual-task) conditions. However, the effectiveness of auditory and tactile cueing for visual task accuracy was shown to be dependent on task-type condition. Crossmodal cueing remains a viable strategy for improving task performance without increasing attentional load within a singular sensory modality. Practitioner Summary: Crossmodal cueing with dual-task performance has not been widely explored, yet has practical applications. We examined the effects of auditory and tactile crossmodal cues on visual search performance, with and without a secondary auditory task. Tactile cues aided visual search accuracy when also engaged in a secondary auditory task, whereas auditory cues did not.

  13. Asteroid/meteorite streams

    Science.gov (United States)

    Drummond, J.

    The independent discovery of the same three streams (named alpha, beta, and gamma) among 139 Earth approaching asteroids and among 89 meteorite producing fireballs presents the possibility of matching specific meteorites to specific asteroids, or at least to asteroids in the same stream and, therefore, presumably of the same composition. Although perhaps of limited practical value, the three meteorites with known orbits are all ordinary chondrites. To identify, in general, the taxonomic type of the parent asteroid, however, would be of great scientific interest since these most abundant meteorite types cannot be unambiguously spectrally matched to an asteroid type. The H5 Pribram meteorite and asteroid 4486 (unclassified) are not part of a stream, but travel in fairly similar orbits. The LL5 Innisfree meteorite is orbitally similar to asteroid 1989DA (unclassified), and both are members of a fourth stream (delta) defined by five meteorite-dropping fireballs and this one asteroid. The H5 Lost City meteorite is orbitally similar to 1980AA (S type), which is a member of stream gamma defined by four asteroids and four fireballs. Another asteroid in this stream is classified as an S type, another is QU, and the fourth is unclassified. This stream suggests that ordinary chondrites should be associated with S (and/or Q) asteroids. Two of the known four V type asteroids belong to another stream, beta, defined by five asteroids and four meteorite-dropping (but unrecovered) fireballs, making it the most probable source of the eucrites. The final stream, alpha, defined by five asteroids and three fireballs is of unknown composition since no meteorites have been recovered and only one asteroid has an ambiguous classification of QRS. If this stream, or any other as yet undiscovered ones, were found to be composed of a more practical material (e.g., water or metalrich), then recovery of the associated meteorites would provide an opportunity for in-hand analysis of a potential

  14. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research (Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (mean age = 53.19 years)…

  15. Auditory and visual evoked potentials during hyperoxia

    Science.gov (United States)

    Smith, D. B. D.; Strawbridge, P. J.

    1974-01-01

    Experimental study of the auditory and visual averaged evoked potentials (AEPs) recorded during hyperoxia, and investigation of the effect of hyperoxia on the so-called contingent negative variation (CNV). No effect of hyperoxia was found on the auditory AEP, the visual AEP, or the CNV. Comparisons with previous studies are discussed.

  16. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  17. Bilateral duplication of the internal auditory canal

    International Nuclear Information System (INIS)

    Weon, Young Cheol; Kim, Jae Hyoung; Choi, Sung Kyu; Koo, Ja-Won

    2007-01-01

    Duplication of the internal auditory canal is an extremely rare temporal bone anomaly that is believed to result from aplasia or hypoplasia of the vestibulocochlear nerve. We report bilateral duplication of the internal auditory canal in a 28-month-old boy with developmental delay and sensorineural hearing loss. (orig.)

  18. Neural circuits in auditory and audiovisual memory.

    Science.gov (United States)

    Plakke, B; Romanski, L M

    2016-06-01

    Working memory is the ability to employ recently seen or heard stimuli and apply them to changing cognitive contexts. Although much is known about language processing and visual working memory, the neurobiological basis of auditory working memory is less clear. Historically, part of the problem has been the difficulty in obtaining a robust animal model to study auditory short-term memory. In recent years there have been neurophysiological and lesion studies indicating a cortical network involving both temporal and frontal cortices. Studies specifically targeting the role of the prefrontal cortex (PFC) in auditory working memory have suggested that dorsal and ventral prefrontal regions perform different roles during the processing of auditory mnemonic information, with the dorsolateral PFC performing similar functions for both auditory and visual working memory. In contrast, the ventrolateral PFC (VLPFC), which contains cells that respond robustly to auditory stimuli and that process both face and vocal stimuli, may be an essential locus for both auditory and audiovisual working memory. These findings suggest a critical role for the VLPFC in the processing, integrating, and retaining of communication information. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  20. Streaming Potential and Electroosmosis Measurements to Characterize Porous Materials

    NARCIS (Netherlands)

    Luong, D.T.; Sprik, R.

    2013-01-01

    Characterizing the streaming potential and electroosmosis properties of porous media is essential in applying seismoelectric and electroseismic phenomena for oil exploration. Some parameters such as porosity, permeability, formation factor, pore size, the number of pores, and the zeta potential of

  1. Percent Forest Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  2. Percent Agriculture Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  3. Temporal auditory processing in elders

    Directory of Open Access Journals (Sweden)

    Azzolini, Vanuza Conceição

    2010-03-01

    Introduction: During the aging process all the structures of the organism change, affecting the quality of hearing and of comprehension. The hearing loss that occurs as a consequence of this process reduces communicative function and also leads to withdrawal from social relationships. Objective: To compare temporal auditory processing performance between elderly individuals with and without hearing loss. Method: The present study is a prospective, cross-sectional, diagnostic field study. Twenty-one elders (16 women and 5 men, aged 60 to 81 years) were analyzed, divided into two groups: a group "without hearing loss" (n = 13), with normal auditory thresholds or hearing loss restricted to isolated frequencies, and a group "with hearing loss" (n = 8), with sensorineural hearing loss of variable degree, from mild to moderately severe. Both groups performed the pitch (PPS) and duration (DPS) pattern sequence tests, to evaluate temporal sequencing ability, and the Random Gap Detection Test (RGDT), to evaluate temporal resolution ability. Results: There was no statistically significant difference between the groups on the DPS and RGDT tests. Temporal sequencing ability was significantly better in the group without hearing loss when evaluated with the PPS test in the "humming" condition, and this difference increased significantly with increasing age group. Conclusion: There was no difference in temporal auditory processing in the comparison between the groups.

  4. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.
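
    The multivoxel pattern analysis mentioned above can be illustrated with a generic decoding sketch using scikit-learn on synthetic data; the trial counts, voxel counts and classifier are assumptions for illustration, not the authors' analysis:

    ```python
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Toy data standing in for fMRI voxel patterns: 60 trials x 200 voxels,
    # with labels indicating which of two tones was maintained on each trial.
    rng = np.random.default_rng(42)
    n_trials, n_voxels = 60, 200
    labels = np.repeat([0, 1], n_trials // 2)
    patterns = rng.standard_normal((n_trials, n_voxels))
    patterns[labels == 1, :20] += 0.5  # weak tone-specific signal in a voxel subset

    # Linear classifier with cross-validation; above-chance accuracy would
    # indicate that the maintained tone is decodable from the activity pattern.
    clf = SVC(kernel="linear", C=1.0)
    scores = cross_val_score(clf, patterns, labels, cv=5)
    print(f"mean decoding accuracy: {scores.mean():.2f}")
    ```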

  5. Tactile feedback improves auditory spatial localization

    Directory of Open Access Journals (Sweden)

    Monica eGori

    2014-10-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial-bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three-sound sequence was spatially closer to the first or the third sound. The tactile-feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal-feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no-feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially coherent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.

  6. Wadeable Streams Assessment Data

    Science.gov (United States)

    The Wadeable Streams Assessment (WSA) is a first-ever statistically-valid survey of the biological condition of small streams throughout the U.S. The U.S. Environmental Protection Agency (EPA) worked with the states to conduct the assessment in 2004-2005. Data for each parameter sampled in the Wadeable Streams Assessment (WSA) are available for downloading in a series of files as comma separated values (*.csv). Each *.csv data file has a companion text file (*.txt) that lists a dataset label and individual descriptions for each variable. Users should view the *.txt files first to help guide their understanding and use of the data.

  7. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  8. Auditory Multi-Stability: Idiosyncratic Perceptual Switching Patterns, Executive Functions and Personality Traits.

    Directory of Open Access Journals (Sweden)

    Dávid Farkas

    Full Text Available Multi-stability refers to the phenomenon of perception stochastically switching between possible interpretations of an unchanging stimulus. Despite considerable variability, individuals show stable idiosyncratic patterns of switching between alternative perceptions in the auditory streaming paradigm. We explored correlates of the individual switching patterns with executive functions, personality traits, and creativity. The main dimensions on which individual switching patterns differed from each other were identified using multidimensional scaling. Individuals with high scores on the dimension explaining the largest portion of the inter-individual variance switched more often between the alternative perceptions than those with low scores. They also perceived the most unusual interpretation more often, and experienced all perceptual alternatives with a shorter delay from stimulus onset. The ego-resiliency personality trait, which reflects a tendency for adaptive flexibility and experience seeking, was significantly positively related to this dimension. Taking these results together we suggest that this dimension may reflect the individual's tendency for exploring the auditory environment. Executive functions were significantly related to some of the variables describing global properties of the switching patterns, such as the average number of switches. Thus individual patterns of perceptual switching in the auditory streaming paradigm are related to some personality traits and executive functions.
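    For readers unfamiliar with the dimensional analysis mentioned above, the sketch below shows how multidimensional scaling can extract the main dimensions from a participant-by-participant dissimilarity matrix. The matrix here is synthetic, and how the original study quantified dissimilarity between individual switching patterns is not described in this record.

        # Minimal sketch of the multidimensional-scaling step, assuming a
        # precomputed participant-by-participant dissimilarity matrix derived
        # from individual switching patterns (hypothetical example data).
        import numpy as np
        from sklearn.manifold import MDS

        rng = np.random.default_rng(0)
        n_participants = 20
        raw = rng.random((n_participants, n_participants))
        dissimilarity = (raw + raw.T) / 2          # symmetrize
        np.fill_diagonal(dissimilarity, 0.0)       # zero self-dissimilarity

        mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
        coords = mds.fit_transform(dissimilarity)

        # coords[:, 0] would correspond to the dimension explaining the largest
        # share of inter-individual variance in switching behaviour.
        print(coords[:3])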

  9. Auditory midbrain processing is differentially modulated by auditory and visual cortices: An auditory fMRI study.

    Science.gov (United States)

    Gao, Patrick P; Zhang, Jevin W; Fan, Shu-Juan; Sanes, Dan H; Wu, Ed X

    2015-12-01

    The cortex contains extensive descending projections, yet the impact of cortical input on brainstem processing remains poorly understood. In the central auditory system, the auditory cortex contains direct and indirect pathways (via brainstem cholinergic cells) to nuclei of the auditory midbrain, called the inferior colliculus (IC). While these projections modulate auditory processing throughout the IC, single-neuron recordings have sampled only a small fraction of cells during stimulation of the corticofugal pathway. Furthermore, assessments of cortical feedback have not been extended to sensory modalities other than audition. To address these issues, we devised blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) paradigms to measure the sound-evoked responses throughout the rat IC and investigated the effects of bilateral ablation of either auditory or visual cortices. Auditory cortex ablation increased the gain of IC responses to noise stimuli (primarily in the central nucleus of the IC) and decreased response selectivity to forward species-specific vocalizations (versus temporally reversed ones, most prominently in the external cortex of the IC). In contrast, visual cortex ablation decreased the gain and induced a much smaller effect on response selectivity. The results suggest that auditory cortical projections normally exert a large-scale and net suppressive influence on specific IC subnuclei, while visual cortical projections provide a facilitatory influence. Meanwhile, auditory cortical projections enhance the midbrain response selectivity to species-specific vocalizations. We also probed the role of the indirect cholinergic projections in the auditory system in the descending modulation process by pharmacologically blocking muscarinic cholinergic receptors. This manipulation did not affect the gain of IC responses but significantly reduced the response selectivity to vocalizations. The results imply that auditory cortical

  10. The absence of an auditory-visual attentional blink is not due to echoic memory.

    Science.gov (United States)

    Van der Burg, Erik; Olivers, Christian N; Bronkhorst, Adelbert W; Koelewijn, Thomas; Theeuwes, Jan

    2007-10-01

    The second of two targets is often missed when presented shortly after the first target--a phenomenon referred to as the attentional blink (AB). Whereas the AB is a robust phenomenon within sensory modalities, the evidence for cross-modal ABs is rather mixed. Here, we test the possibility that the absence of an auditory-visual AB for visual letter recognition when streams of tones are used is due to the efficient use of echoic memory, allowing for the postponement of auditory processing. However, forcing participants to immediately process the auditory target, either by presenting interfering sounds during retrieval or by making the first target directly relevant for a speeded response to the second target, did not result in a return of a cross-modal AB. The findings argue against echoic memory as an explanation for efficient cross-modal processing. Instead, we hypothesized that a cross-modal AB may be observed when the different modalities use common representations, such as semantic representations. In support of this, a deficit for visual letter recognition returned when the auditory task required a distinction between spoken digits and letters.

  11. Populations of auditory cortical neurons can accurately encode acoustic space across stimulus intensity.

    Science.gov (United States)

    Miller, Lee M; Recanzone, Gregg H

    2009-04-07

    The auditory cortex is critical for perceiving a sound's location. However, there is no topographic representation of acoustic space, and individual auditory cortical neurons are often broadly tuned to stimulus location. It thus remains unclear how acoustic space is represented in the mammalian cerebral cortex and how it could contribute to sound localization. This report tests whether the firing rates of populations of neurons in different auditory cortical fields in the macaque monkey carry sufficient information to account for horizontal sound localization ability. We applied an optimal neural decoding technique, based on maximum likelihood estimation, to populations of neurons from 6 different cortical fields encompassing core and belt areas. We found that the firing rates of neurons in the caudolateral area contain enough information to account for sound localization ability, but neurons in other tested core and belt cortical areas do not. These results provide a detailed and plausible population model of how acoustic space could be represented in the primate cerebral cortex and support a dual stream processing model of auditory cortical processing.
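    The decoding approach described above can be illustrated with a toy maximum-likelihood decoder: given a set of tuning curves and an observed vector of spike counts, the decoded azimuth is the candidate location under which the observed counts are most likely. Everything below (Poisson spiking, Gaussian tuning curves, the parameter values) is an assumption for illustration, not the authors' exact procedure.

        # Sketch of maximum-likelihood decoding of sound azimuth from a population
        # of cortical firing rates, assuming independent Poisson spike counts and
        # known tuning curves. Generic illustration only.
        import numpy as np
        from scipy.stats import poisson

        azimuths = np.arange(-90, 91, 15)            # candidate locations (deg)
        n_neurons = 40
        rng = np.random.default_rng(1)

        # Hypothetical broad Gaussian tuning curves (mean spike counts per trial).
        preferred = rng.uniform(-90, 90, n_neurons)
        tuning = 2 + 18 * np.exp(-(azimuths[None, :] - preferred[:, None]) ** 2
                                 / (2 * 60.0 ** 2))  # shape: (neurons, azimuths)

        true_idx = 8                                 # simulate one trial at azimuths[8]
        counts = rng.poisson(tuning[:, true_idx])

        # Log-likelihood of the observed counts under each candidate azimuth.
        log_like = poisson.logpmf(counts[:, None], tuning).sum(axis=0)
        estimate = azimuths[np.argmax(log_like)]
        print(f"true = {azimuths[true_idx]} deg, ML estimate = {estimate} deg")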

  12. Attention, memory, and auditory processing in 10- to 15-year-old children with listening difficulties.

    Science.gov (United States)

    Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon

    2014-12-01

    The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.

  13. The auditory scene: an fMRI study on melody and accompaniment in professional pianists.

    Science.gov (United States)

    Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela

    2014-11-15

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Future Roads Near Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — Roads are a source of auto related pollutants (e.g. gasoline, oil and other engine fluids). When roads are near streams, rain can wash these pollutants directly into...

  15. Channelized Streams in Iowa

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This draft dataset consists of all ditches or channelized pieces of stream that could be identified using three input datasets, namely the 1:24,000 National...

  16. Stochastic ice stream dynamics.

    Science.gov (United States)

    Mantelli, Elisa; Bertagni, Matteo Bernard; Ridolfi, Luca

    2016-08-09

    Ice streams are narrow corridors of fast-flowing ice that constitute the arterial drainage network of ice sheets. Therefore, changes in ice stream flow are key to understanding paleoclimate, sea level changes, and rapid disintegration of ice sheets during deglaciation. The dynamics of ice flow are tightly coupled to the climate system through atmospheric temperature and snow recharge, which are known to exhibit stochastic variability. Here we focus on the interplay between stochastic climate forcing and ice stream temporal dynamics. Our work demonstrates that realistic climate fluctuations are able to (i) induce the coexistence of dynamic behaviors that would be incompatible in a purely deterministic system and (ii) drive ice stream flow away from the regime expected in a steady climate. We conclude that environmental noise appears to be crucial to interpreting the past behavior of ice sheets, as well as to predicting their future evolution.

  17. Roads Near Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — Roads are a source of auto related pollutants (e.g. gasoline, oil and other engine fluids). When roads are near streams, rain can wash these pollutants directly into...

  18. Streaming tearing mode

    Science.gov (United States)

    Shigeta, M.; Sato, T.; Dasgupta, B.

    1985-01-01

    The magnetohydrodynamic stability of streaming tearing mode is investigated numerically. A bulk plasma flow parallel to the antiparallel magnetic field lines and localized in the neutral sheet excites a streaming tearing mode more strongly than the usual tearing mode, particularly for the wavelength of the order of the neutral sheet width (or smaller), which is stable for the usual tearing mode. Interestingly, examination of the eigenfunctions of the velocity perturbation and the magnetic field perturbation indicates that the streaming tearing mode carries more energy in terms of the kinetic energy rather than the magnetic energy. This suggests that the streaming tearing mode instability can be a more feasible mechanism of plasma acceleration than the usual tearing mode instability.

  19. DNR 24K Streams

    Data.gov (United States)

    Minnesota Department of Natural Resources — 1:24,000 scale streams captured from USGS seven and one-half minute quadrangle maps, with perennial vs. intermittent classification, and connectivity through lakes,...

  20. Trout Stream Special Regulations

    Data.gov (United States)

    Minnesota Department of Natural Resources — This layer shows Minnesota trout streams that have a special regulation as described in the 2006 Minnesota Fishing Regulations. Road crossings were determined using...

  1. Scientific stream pollution analysis

    National Research Council Canada - National Science Library

    Nemerow, Nelson Leonard

    1974-01-01

    A comprehensive description of the analysis of water pollution that presents a careful balance of the biological, hydrological, chemical and mathematical concepts involved in the evaluation of stream...

  2. Collaborative Media Streaming

    OpenAIRE

    Kahmann, Verena

    2008-01-01

    Multimedia services delivered via IP technology, such as IPTV or video-on-demand, are currently a much-discussed topic. Technically, such services are classified under the term "streaming": a server sends media data continuously to receivers, which immediately process and display the data. Via a return channel, the customer has the ability to influence playback. A further development of these streaming services is the possibility of, together with others, ...

  3. Auditory memory function in expert chess players.

    Science.gov (United States)

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high-level cognition such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, as a kind of memory, can be influenced by strengthening processes following long-term chess playing, like other behavioral skills, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 non-chess players who were matched by different conditions; the participants in both groups were randomly selected. The performance of the two groups was compared by independent samples t-test using SPSS version 21. The mean score of the dichotic auditory-verbal memory test between the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between ear scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to strengthening cognitive performance due to playing chess for a long time.
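    The group comparison reported above is an independent-samples t-test; a minimal stand-in for the SPSS analysis is sketched below with SciPy. The score arrays are hypothetical examples, not the study's data.

        # Sketch of an independent-samples t-test on dichotic auditory-verbal
        # memory scores (SciPy stand-in for the SPSS analysis; scores are made up).
        import numpy as np
        from scipy import stats

        chess_players = np.array([18, 17, 19, 16, 18, 20, 17, 19, 18, 16])
        non_players = np.array([14, 15, 13, 16, 14, 15, 13, 14, 15, 12])

        t_stat, p_value = stats.ttest_ind(chess_players, non_players)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")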

  4. Auditory attention activates peripheral visual cortex.

    Directory of Open Access Journals (Sweden)

    Anthony D Cate

    Full Text Available BACKGROUND: Recent neuroimaging studies have revealed that putatively unimodal regions of visual cortex can be activated during auditory tasks in sighted as well as in blind subjects. However, the task determinants and functional significance of auditory occipital activations (AOAs) remain unclear. METHODOLOGY/PRINCIPAL FINDINGS: We examined AOAs in an intermodal selective attention task to distinguish whether they were stimulus-bound or recruited by higher-level cognitive operations associated with auditory attention. Cortical surface mapping showed that auditory occipital activations were localized to retinotopic visual cortex subserving the far peripheral visual field. AOAs depended strictly on the sustained engagement of auditory attention and were enhanced in more difficult listening conditions. In contrast, unattended sounds produced no AOAs regardless of their intensity, spatial location, or frequency. CONCLUSIONS/SIGNIFICANCE: Auditory attention, but not passive exposure to sounds, routinely activated peripheral regions of visual cortex when subjects attended to sound sources outside the visual field. Functional connections between auditory cortex and visual cortex subserving the peripheral visual field appear to underlie the generation of AOAs, which may reflect the priming of visual regions to process soon-to-appear objects associated with unseen sound sources.

  5. Auditory conflict and congruence in frontotemporal dementia.

    Science.gov (United States)

    Clark, Camilla N; Nicholas, Jennifer M; Agustus, Jennifer L; Hardy, Christopher J D; Russell, Lucy L; Brotherhood, Emilie V; Dick, Katrina M; Marshall, Charles R; Mummery, Catherine J; Rohrer, Jonathan D; Warren, Jason D

    2017-09-01

    Impaired analysis of signal conflict and congruence may contribute to diverse socio-emotional symptoms in frontotemporal dementias; however, the underlying mechanisms have not been defined. Here we addressed this issue in patients with behavioural variant frontotemporal dementia (bvFTD; n = 19) and semantic dementia (SD; n = 10) relative to healthy older individuals (n = 20). We created auditory scenes in which semantic and emotional congruity of constituent sounds were independently probed; associated tasks controlled for auditory perceptual similarity, scene parsing and semantic competence. Neuroanatomical correlates of auditory congruity processing were assessed using voxel-based morphometry. Relative to healthy controls, both the bvFTD and SD groups had impaired semantic and emotional congruity processing (after taking auditory control task performance into account) and reduced affective integration of sounds into scenes. Grey matter correlates of auditory semantic congruity processing were identified in distributed regions encompassing prefrontal, parieto-temporal and insular areas and correlates of auditory emotional congruity in partly overlapping temporal, insular and striatal regions. Our findings suggest that decoding of auditory signal relatedness may probe a generic cognitive mechanism and neural architecture underpinning frontotemporal dementia syndromes. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  6. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    Science.gov (United States)

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  7. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    Directory of Open Access Journals (Sweden)

    Stephan Getzmann

    2014-12-01

    Full Text Available Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called cocktail-party problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  8. Alignment data streams for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B; Amorim, A; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; Garcia, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: physics stream, express stream, calibration stream, and diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities that will provide the prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment constants. This option was selected because it makes use of the Byte Stream Converter infrastructure and possibly gives better bandwidth usage and storage optimization. Processing is done using specialized algorithms running in the Athena framework in dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work addresses in particular the alignment requirements, the needs for track and hit selection, and the performance issues.

  9. Alignment data stream for the ATLAS inner detector

    International Nuclear Information System (INIS)

    Pinto, B

    2010-01-01

    The ATLAS experiment uses a complex trigger strategy to be able to achieve the necessary Event Filter output rate, making it possible to optimize the storage and processing needs of these data. These needs are described in the ATLAS Computing Model, which embraces Grid concepts. The output coming from the Event Filter will consist of three main streams: a primary stream, the express stream and the calibration stream. The calibration stream will be transferred to the Tier-0 facilities, which will allow the prompt reconstruction of this stream with an admissible latency of 8 hours, producing calibration constants of sufficient quality to permit a first-pass processing. An independent calibration stream is developed and tested, which selects tracks at the level-2 trigger (LVL2) after the reconstruction. The stream is composed of raw data, in byte-stream format, and contains only information of the relevant parts of the detector, in particular the hit information of the selected tracks. This leads to significantly improved bandwidth usage and storage capability. The stream will be used to derive and update the calibration and alignment constants, if necessary every 24 h. Processing is done using specialized algorithms running in the Athena framework in dedicated Tier-0 resources, and the alignment constants will be stored and distributed using the COOL conditions database infrastructure. This work addresses in particular the alignment requirements, the needs for track and hit selection, and timing and bandwidth issues.

  10. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany Plakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  11. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  12. Exploration of auditory P50 gating in schizophrenia by way of difference waves

    DEFF Research Database (Denmark)

    Arnfred, Sidse M

    2006-01-01

    Electroencephalographic measures of information processing encompass both mid-latency evoked potentials, like the pre-attentive auditory P50 potential, and a host of later, more cognitive components like P300 and N400. Difference waves have mostly been employed in studies of later event-related potentials, but here this method, along with low-frequency filtering, is applied in an exploratory manner to auditory P50 gating data previously analyzed in the standard format (reported in Am J Psychiatry 2003, 160:2236-8). The exploration was motivated by the observation during visual peak detection that the AEP waveform...

  13. Streaming Pool: reuse, combine and create reactive streams with pleasure

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    When connecting together heterogeneous and complex systems, it is not easy to exchange data between components. Streams of data are successfully used in industry in order to overcome this problem, especially in the case of "live" data. Streams are a specialization of the Observer design pattern and they provide asynchronous and non-blocking data flow. The ongoing effort of the ReactiveX initiative is one example that demonstrates how demanding this technology is even for big companies. Bridging the discrepancies of different technologies with common interfaces is already done by the Reactive Streams initiative and, in the JVM world, via reactive-streams-jvm interfaces. Streaming Pool is a framework for providing and discovering reactive streams. Through the mechanism of dependency injection provided by the Spring Framework, Streaming Pool provides a so called Discovery Service. This object can discover and chain streams of data that are technologically agnostic, through the use of Stream IDs. The stream to ...

  14. Did You Listen to the Beat? Auditory Steady-State Responses in the Human Electroencephalogram at 4 and 7 Hz Modulation Rates Reflect Selective Attention.

    Science.gov (United States)

    Jaeger, Manuela; Bleichner, Martin G; Bauer, Anna-Katharina R; Mirkovic, Bojana; Debener, Stefan

    2018-02-27

    The acoustic envelope of human speech correlates with the syllabic rate (4-8 Hz) and carries important information for intelligibility, which is typically compromised in multi-talker, noisy environments. In order to better understand the dynamics of selective auditory attention to low-frequency modulated sound sources, we conducted a two-stream auditory steady-state response (ASSR) selective attention electroencephalogram (EEG) study. The two streams consisted of 4 and 7 Hz amplitude- and frequency-modulated sounds presented from the left and right side. One of the two streams had to be attended while the other had to be ignored. The attended stream always contained a target, allowing for the behavioral confirmation of the attention manipulation. EEG ASSR power analysis revealed a significant increase in 7 Hz power for the attend compared to the ignore conditions. There was no significant difference in 4 Hz power when the 4 Hz stream had to be attended compared to when it had to be ignored. This lack of 4 Hz attention modulation could be explained by a distracting effect of a third frequency at 3 Hz (beat frequency) perceivable when the 4 and 7 Hz streams are presented simultaneously. Taken together, our results show that low-frequency modulations at syllabic rate are modulated by selective spatial attention. Whether attention effects act as enhancement of the attended stream or suppression of the to-be-ignored stream may depend on how well auditory streams can be segregated.
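    The ASSR power analysis mentioned above amounts to reading out spectral power at the 4 and 7 Hz modulation rates. The sketch below does this for a synthetic single-channel epoch via the discrete Fourier transform; it is a generic illustration under assumed parameters, not the authors' pipeline.

        # Minimal sketch of quantifying ASSR power at the 4 and 7 Hz modulation
        # rates from one EEG epoch. The signal is synthetic.
        import numpy as np

        fs = 500.0                                   # sampling rate (Hz), assumed
        t = np.arange(0, 4.0, 1 / fs)                # 4-s epoch
        rng = np.random.default_rng(2)
        eeg = (0.8 * np.sin(2 * np.pi * 4 * t)       # 4 Hz ASSR component
               + 0.5 * np.sin(2 * np.pi * 7 * t)     # 7 Hz ASSR component
               + rng.normal(0, 1.0, t.size))         # background noise

        spectrum = np.fft.rfft(eeg * np.hanning(eeg.size))
        freqs = np.fft.rfftfreq(eeg.size, 1 / fs)
        power = np.abs(spectrum) ** 2

        for f_target in (4.0, 7.0):
            idx = np.argmin(np.abs(freqs - f_target))
            print(f"power at {f_target:.0f} Hz: {power[idx]:.1f}")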

  15. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  16. Streams and their future inhabitants

    DEFF Research Database (Denmark)

    Sand-Jensen, K.; Friberg, Nikolai

    2006-01-01

    In this final chapter we look ahead and address four questions: How do we improve stream management? What are the likely developments in the biological quality of streams? In which areas is knowledge on stream ecology insufficient? What can streams offer children of today and adults of tomorrow?...

  17. Competition and convergence between auditory and cross-modal visual inputs to primary auditory cortical areas

    Science.gov (United States)

    Mao, Yu-Ting; Hua, Tian-Miao

    2011-01-01

    Sensory neocortex is capable of considerable plasticity after sensory deprivation or damage to input pathways, especially early in development. Although plasticity can often be restorative, sometimes novel, ectopic inputs invade the affected cortical area. Invading inputs from other sensory modalities may compromise the original function or even take over, imposing a new function and preventing recovery. Using ferrets whose retinal axons were rerouted into auditory thalamus at birth, we were able to examine the effect of varying the degree of ectopic, cross-modal input on reorganization of developing auditory cortex. In particular, we assayed whether the invading visual inputs and the existing auditory inputs competed for or shared postsynaptic targets and whether the convergence of input modalities would induce multisensory processing. We demonstrate that although the cross-modal inputs create new visual neurons in auditory cortex, some auditory processing remains. The degree of damage to auditory input to the medial geniculate nucleus was directly related to the proportion of visual neurons in auditory cortex, suggesting that the visual and residual auditory inputs compete for cortical territory. Visual neurons were not segregated from auditory neurons but shared target space even on individual target cells, substantially increasing the proportion of multisensory neurons. Thus spatial convergence of visual and auditory input modalities may be sufficient to expand multisensory representations. Together these findings argue that early, patterned visual activity does not drive segregation of visual and auditory afferents and suggest that auditory function might be compromised by converging visual inputs. These results indicate possible ways in which multisensory cortical areas may form during development and evolution. They also suggest that rehabilitative strategies designed to promote recovery of function after sensory deprivation or damage need to take into

  18. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    Science.gov (United States)

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. Material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of N2, P2, N2, P2, P300 waves and a psychoacoustic test of central auditory functions: FPT - frequency pattern test. Next, children took part in the regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; psychoacoustic tests were again performed and P300 cortical potentials were recorded. After that, statistical analyses were performed. Analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  19. First Branchial Cleft Fistula Associated with External Auditory Canal Stenosis and Middle Ear Cholesteatoma

    Directory of Open Access Journals (Sweden)

    Shahin Abdollahi Fakhim

    2014-10-01

    Full Text Available Introduction: First branchial cleft anomalies manifest with duplication of the external auditory canal.   Case Report: This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but branchial fistula, opening in the zygomatic root and a sinus in the helical root, may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection.   Conclusion:  It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear.

  20. Auditory Pattern Memory and Group Signal Detection

    National Research Council Canada - National Science Library

    Sorkin, Robert

    1997-01-01

    .... The experiments with temporally-coded auditory patterns showed how listeners' attention is influenced by the position and the amount of information carried by different segments of the pattern...

  1. Auditory Peripheral Processing of Degraded Speech

    National Research Council Canada - National Science Library

    Ghitza, Oded

    2003-01-01

    ...". The underlying thesis is that the auditory periphery contributes to the robust performance of humans in speech reception in noise through a concerted contribution of the efferent feedback system...

  2. Robotic Discovery of the Auditory Scene

    National Research Council Canada - National Science Library

    Martinson, E; Schultz, A

    2007-01-01

    .... Motivated by the large negative effect of ambient noise sources on robot audition, the long-term goal is to provide awareness of the auditory scene to a robot, so that it may more effectively act...

  3. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

    Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems. Numerous studies have investigated them, and the auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the "auditory" and "omega-3" keywords and read textbooks on this subject published between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can cause harmful effects on fetal and infant growth and on the development of the brain and central nervous system, especially the auditory system. It is important to determine the adequate dosage of omega-3.

  4. Presbycusis and auditory brainstem responses: a review

    Directory of Open Access Journals (Sweden)

    Shilpa Khullar

    2011-06-01

    Full Text Available Age-related hearing loss or presbycusis is a complex phenomenon consisting of elevation of hearing levels as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potentials recorded within the first 10 ms of stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of the ABRs in comparison with the ABRs of young adults. The waves of ABRs are described in terms of amplitude, latencies and interpeak latency of the different waves. There is a tendency of the amplitude to decrease and the absolute latencies to increase with advancing age, but these trends are not always clear due to the increase in threshold with advancing age, which acts as a major confounding factor in the interpretation of ABRs.
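    Interpeak latencies are simply differences between the absolute latencies of the ABR waves; a tiny worked example follows. The millisecond values are illustrative only, not normative data.

        # Tiny sketch of deriving ABR interpeak latencies from absolute wave
        # latencies (values in milliseconds are illustrative, not normative).
        latencies_ms = {"I": 1.6, "III": 3.8, "V": 5.7}   # hypothetical absolute latencies

        ipl_I_III = latencies_ms["III"] - latencies_ms["I"]
        ipl_III_V = latencies_ms["V"] - latencies_ms["III"]
        ipl_I_V = latencies_ms["V"] - latencies_ms["I"]

        print(f"I-III = {ipl_I_III:.1f} ms, III-V = {ipl_III_V:.1f} ms, I-V = {ipl_I_V:.1f} ms")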

  5. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen Alink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate without affecting eye movements.

  6. Auditory-vocal mirroring in songbirds.

    Science.gov (United States)

    Mooney, Richard

    2014-01-01

    Mirror neurons are theorized to serve as a neural substrate for spoken language in humans, but the existence and functions of auditory-vocal mirror neurons in the human brain remain largely matters of speculation. Songbirds resemble humans in their capacity for vocal learning and depend on their learned songs to facilitate courtship and individual recognition. Recent neurophysiological studies have detected putative auditory-vocal mirror neurons in a sensorimotor region of the songbird's brain that plays an important role in expressive and receptive aspects of vocal communication. This review discusses the auditory and motor-related properties of these cells, considers their potential role in song learning and communication in relation to classical studies of birdsong, and points to the circuit and developmental mechanisms that may give rise to auditory-vocal mirroring in the songbird's brain.

  7. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  8. Role of the right inferior parietal cortex in auditory selective attention: An rTMS study.

    Science.gov (United States)

    Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B

    2018-02-01

    Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights

  9. Fluoxetine pretreatment promotes neuronal survival and maturation after auditory fear conditioning in the rat amygdala.

    Directory of Open Access Journals (Sweden)

    Lizhu Jiang

    Full Text Available The amygdala is a critical brain region for auditory fear conditioning, which is a stressful condition for experimental rats. Adult neurogenesis in the dentate gyrus (DG) of the hippocampus, known to be sensitive to behavioral stress and treatment with the antidepressant fluoxetine (FLX), is involved in the formation of hippocampus-dependent memories. Here, we investigated whether neurogenesis also occurs in the amygdala and contributes to auditory fear memory. In rats showing persistent auditory fear memory following fear conditioning, we found that the survival of new-born cells and the number of new-born cells that differentiated into mature neurons labeled by BrdU and NeuN decreased in the amygdala, but the number of cells that developed into astrocytes labeled by BrdU and GFAP increased. Chronic pretreatment with FLX partially rescued the reduction in neurogenesis in the amygdala and slightly suppressed the maintenance of the long-lasting auditory fear memory 30 days after the fear conditioning. The present results suggest that adult neurogenesis in the amygdala is sensitive to antidepressant treatment and may weaken long-lasting auditory fear memory.

  10. From sensory to long-term memory: evidence from auditory memory reactivation studies.

    Science.gov (United States)

    Winkler, István; Cowan, Nelson

    2005-01-01

    Everyday experience tells us that some types of auditory sensory information are retained for long periods of time. For example, we are able to recognize friends by their voice alone or identify the source of familiar noises even years after we last heard the sounds. It is thus somewhat surprising that the results of most studies of auditory sensory memory show that acoustic details, such as the pitch of a tone, fade from memory in ca. 10-15 s. One should, therefore, ask (1) what types of acoustic information can be retained for a longer term, (2) what circumstances allow or help the formation of durable memory records for acoustic details, and (3) how such memory records can be accessed. The present review discusses the results of experiments that used a model of auditory recognition, the auditory memory reactivation paradigm. Results obtained with this paradigm suggest that the brain stores features of individual sounds embedded within representations of acoustic regularities that have been detected for the sound patterns and sequences in which the sounds appeared. Thus, sounds closely linked with their auditory context are more likely to be remembered. The representations of acoustic regularities are automatically activated by matching sounds, enabling object recognition.

  11. Concept Formation Skills in Long-Term Cochlear Implant Users

    OpenAIRE

    Castellanos, Irina; Kronenberger, William G.; Beer, Jessica; Colson, Bethany G.; Henning, Shirley C.; Ditmars, Allison; Pisoni, David B.

    2014-01-01

    This study investigated if a period of auditory sensory deprivation followed by degraded auditory input and related language delays affects visual concept formation skills in long-term prelingually deaf cochlear implant (CI) users. We also examined if concept formation skills are mediated or moderated by other neurocognitive domains (i.e., language, working memory, and executive control). Relative to normally hearing (NH) peers, CI users displayed significantly poorer performance in several s...

  12. In search of an auditory engram

    OpenAIRE

    Fritz, Jonathan; Mishkin, Mortimer; Saunders, Richard C.

    2005-01-01

    Monkeys trained preoperatively on a task designed to assess auditory recognition memory were impaired after removal of either the rostral superior temporal gyrus or the medial temporal lobe but were unaffected by lesions of the rhinal cortex. Behavioral analysis indicated that this result occurred because the monkeys did not or could not use long-term auditory recognition, and so depended instead on short-term working memory, which is unaffected by rhinal lesions. The findings suggest that mo...

  13. Neural oscillations in auditory working memory

    OpenAIRE

    Wilsch, A.

    2015-01-01

    The present thesis investigated memory load and memory decay in auditory working memory. Alpha power as a marker for memory load served as the primary indicator for load and decay fluctuations hypothetically reflecting functional inhibition of irrelevant information. Memory load was induced by presenting auditory signals (syllables and pure-tone sequences) in noise because speech-in-noise has been shown before to increase memory load. The aim of the thesis was to assess with magnetoencephalog...

  14. Perceptual consequences of disrupted auditory nerve activity.

    Science.gov (United States)

    Zeng, Fan-Gang; Kong, Ying-Yee; Michalewski, Henry J; Starr, Arnold

    2005-06-01

    Perceptual consequences of disrupted auditory nerve activity were systematically studied in 21 subjects who had been clinically diagnosed with auditory neuropathy (AN), a recently defined disorder characterized by normal outer hair cell function but disrupted auditory nerve function. Neurological and electrophysical evidence suggests that disrupted auditory nerve activity is due to desynchronized or reduced neural activity or both. Psychophysical measures showed that the disrupted neural activity has minimal effects on intensity-related perception, such as loudness discrimination, pitch discrimination at high frequencies, and sound localization using interaural level differences. In contrast, the disrupted neural activity significantly impairs timing related perception, such as pitch discrimination at low frequencies, temporal integration, gap detection, temporal modulation detection, backward and forward masking, signal detection in noise, binaural beats, and sound localization using interaural time differences. These perceptual consequences are the opposite of what is typically observed in cochlear-impaired subjects who have impaired intensity perception but relatively normal temporal processing after taking their impaired intensity perception into account. These differences in perceptual consequences between auditory neuropathy and cochlear damage suggest the use of different neural codes in auditory perception: a suboptimal spike count code for intensity processing, a synchronized spike code for temporal processing, and a duplex code for frequency processing. We also proposed two underlying physiological models based on desynchronized and reduced discharge in the auditory nerve to successfully account for the observed neurological and behavioral data. These methods and measures cannot differentiate between these two AN models, but future studies using electric stimulation of the auditory nerve via a cochlear implant might. These results not only show the unique

  15. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    Science.gov (United States)

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.
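    The N2ac difference wave described above is simply the ERP for left-side targets minus the ERP for right-side targets, read out in a post-stimulus window. The sketch below illustrates this computation on synthetic single-electrode averages; the window boundaries and amplitudes are assumptions for illustration only.

        # Sketch of computing an N2ac-style difference wave: the ERP to left-side
        # targets minus the ERP to right-side targets, then reading out the peak
        # in an assumed 400-600 ms window. ERP arrays here are synthetic.
        import numpy as np

        fs = 500.0
        times = np.arange(-0.2, 1.0, 1 / fs)         # seconds relative to target onset
        rng = np.random.default_rng(3)

        # Hypothetical single-electrode ERPs (microvolts) averaged over trials.
        erp_left_targets = rng.normal(0, 0.2, times.size)
        erp_right_targets = rng.normal(0, 0.2, times.size)
        erp_left_targets += -1.5 * np.exp(-((times - 0.5) ** 2) / (2 * 0.05 ** 2))

        n2ac = erp_left_targets - erp_right_targets  # left minus right targets

        window = (times >= 0.4) & (times <= 0.6)
        peak_idx = np.argmin(n2ac[window])           # N2ac is a negative deflection
        peak_time = times[window][peak_idx]
        print(f"N2ac peak: {n2ac[window][peak_idx]:.2f} uV at {peak_time * 1000:.0f} ms")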

  16. Procedures for central auditory processing screening in schoolchildren.

    Science.gov (United States)

    Carvalho, Nádia Giulian de; Ubiali, Thalita; Amaral, Maria Isabel Ramos do; Santos, Maria Francisca Colella

    2018-03-22

    Central auditory processing screening in schoolchildren has led to debates in the literature, both regarding the protocol to be used and the importance of actions aimed at prevention and promotion of auditory health. Defining effective screening procedures for central auditory processing is a challenge in Audiology. This study aimed to analyze the scientific research on central auditory processing screening and discuss the effectiveness of the procedures utilized. A search was performed in the SciELO and PUBMed databases by two researchers. The descriptors used were: auditory processing, screening, hearing, auditory perception, children, auditory tests, and their respective terms in Portuguese. Inclusion criteria were original articles involving schoolchildren, auditory screening of central auditory skills, and articles in Portuguese or English; exclusion criteria were studies with adult and/or neonatal populations, peripheral auditory screening only, and duplicate articles. After applying the described criteria, 11 articles were included. At the international level, the central auditory processing screening methods used were: the screening test for auditory processing disorder and its revised version, the screening test for auditory processing, the scale of auditory behaviors, the children's auditory performance scale, and Feather Squadron. In the Brazilian scenario, the procedures used were the simplified auditory processing assessment and Zaidan's battery of tests. At the international level, the screening test for auditory processing and Feather Squadron batteries stand out as the most comprehensive evaluations of hearing skills. At the national level, there is a paucity of studies that use methods evaluating more than four skills and normalized by age group. The use of the simplified auditory processing assessment and questionnaires can be complementary in the search for an easy-access and low-cost alternative for the auditory screening of Brazilian schoolchildren. Interactive tools should be proposed, that

  17. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

    Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.

  18. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, the different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in parietal and frontal brain regions. These overlaps may be due to attentional effects common to all three tasks within each modality, or to interactions between the processing of task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by the auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might reflect suppression of the processing of irrelevant speech that would presumably distract from the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  19. Stellar Streams Discovered in the Dark Energy Survey

    Energy Technology Data Exchange (ETDEWEB)

    Shipp, N.; et al.

    2018-01-09

    We perform a search for stellar streams around the Milky Way using the first three years of multi-band optical imaging data from the Dark Energy Survey (DES). We use DES data covering $\sim 5000$ sq. deg. to a depth of $g > 23.5$ with a relative photometric calibration uncertainty of $< 1\%$. This data set yields unprecedented sensitivity to the stellar density field in the southern celestial hemisphere, enabling the detection of faint stellar streams to a heliocentric distance of $\sim 50$ kpc. We search for stellar streams using a matched-filter in color-magnitude space derived from a synthetic isochrone of an old, metal-poor stellar population. Our detection technique recovers four previously known thin stellar streams: Phoenix, ATLAS, Tucana III, and a possible extension of Molonglo. In addition, we report the discovery of eleven new stellar streams. In general, the new streams detected by DES are fainter, more distant, and lower surface brightness than streams detected by similar techniques in previous photometric surveys. As a by-product of our stellar stream search, we find evidence for extra-tidal stellar structure associated with four globular clusters: NGC 288, NGC 1261, NGC 1851, and NGC 1904. The ever-growing sample of stellar streams will provide insight into the formation of the Galactic stellar halo, the Milky Way gravitational potential, as well as the large- and small-scale distribution of dark matter around the Milky Way.
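
    As a rough illustration of a matched-filter search of this kind, the sketch below selects stars whose color is consistent with an old, metal-poor isochrone shifted to a trial distance, then bins the selected stars on the sky to form a density map. The isochrone shape, tolerances, and mock photometry are placeholders, not the DES pipeline or its calibrated isochrones.

        import numpy as np

        def isochrone_color(abs_g):
            """Hypothetical old, metal-poor isochrone: g-r color as a function of M_g."""
            return 0.25 + 0.004 * (abs_g - 4.0) ** 2        # toy polynomial stand-in

        def stream_density_map(g, r, ra, dec, dist_mod, color_tol=0.05, nbins=200):
            """Matched-filter-style selection in color-magnitude space, then sky binning."""
            abs_g = g - dist_mod                            # shift isochrone to a trial distance
            dcolor = (g - r) - isochrone_color(abs_g)
            sel = np.abs(dcolor) < color_tol                # keep stars consistent with the isochrone
            density, _, _ = np.histogram2d(ra[sel], dec[sel], bins=nbins)
            return density                                  # smooth / subtract background before peak finding

        # Usage with random mock photometry (illustrative only).
        rng = np.random.default_rng(1)
        n = 100_000
        g = rng.uniform(17.0, 23.5, n)
        r = g - rng.normal(0.4, 0.2, n)
        ra, dec = rng.uniform(0.0, 70.0, n), rng.uniform(-60.0, -40.0, n)
        density_map = stream_density_map(g, r, ra, dec, dist_mod=16.5)   # trial distance ~20 kpc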

  20. Alignment data streams for the ATLAS inner detector

    CERN Document Server

    Pinto, B; Pereira, P; Elsing, M; Hawkings, R; Schieck, J; García, S; Schaffer, A; Ma, H; Anjos, A

    2008-01-01

    The ATLAS experiment uses a complex trigger strategy to reduce the Event Filter output rate down to a level that allows the storage and processing of these data. These concepts are described in the ATLAS Computing Model, which embraces the Grid paradigm. The output coming from the Event Filter consists of four main streams: the physics stream, the express stream, the calibration stream, and the diagnostic stream. The calibration stream will be transferred to the Tier-0 facilities, which will provide prompt reconstruction of this stream with a minimum latency of 8 hours, producing calibration constants of sufficient quality to allow a first-pass processing. The Inner Detector community is developing and testing an independent common calibration stream selected at the Event Filter after track reconstruction. It is composed of raw data, in byte-stream format, contained in Readout Buffers (ROBs) with hit information of the selected tracks, and it will be used to derive and update a set of calibration and alignment cons...

  1. Auditory Magnetoencephalographic Frequency-Tagged Responses Mirror the Ongoing Segmentation Processes Underlying Statistical Learning.

    Science.gov (United States)

    Farthouat, Juliane; Franco, Ana; Mary, Alison; Delpouve, Julie; Wens, Vincent; Op de Beeck, Marc; De Tiège, Xavier; Peigneux, Philippe

    2017-03-01

    Humans are highly sensitive to statistical regularities in their environment. This phenomenon, usually referred to as statistical learning, is most often assessed using post-learning behavioural measures that are limited by a lack of sensitivity and do not monitor the temporal dynamics of learning. In the present study, we used magnetoencephalographic frequency-tagged responses to investigate the neural sources and temporal development of the ongoing brain activity that supports the detection of regularities embedded in auditory streams. Participants passively listened to statistical streams in which tones were grouped as triplets, and to random streams in which tones were randomly presented. Results show that during exposure to statistical (vs. random) streams, tritone frequency-related responses reflecting the learning of regularities embedded in the stream increased in the left supplementary motor area and left posterior superior temporal sulcus (pSTS), whereas tone frequency-related responses decreased in the right angular gyrus and right pSTS. Tritone frequency-related responses developed rapidly, reaching significance after 3 min of exposure. These results suggest that the incidental extraction of novel regularities is subtended by a gradual shift from rhythmic activity reflecting individual tone succession toward rhythmic activity synchronised with triplet presentation, and that these rhythmic processes are subtended by distinct neural sources.
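
    Frequency tagging of this kind amounts to reading out spectral power at the tone presentation rate and at the triplet rate (one third of the tone rate) from the recorded signal. The sketch below performs that readout on a simulated response; the sampling rate, presentation rates, and signal-to-noise levels are illustrative assumptions, not parameters of the MEG study.

        import numpy as np

        fs = 250.0                        # sampling rate in Hz (illustrative)
        tone_rate = 3.3                   # hypothetical tones per second
        triplet_rate = tone_rate / 3.0    # triplets per second
        t = np.arange(0.0, 180.0, 1.0 / fs)   # three minutes of "recording"

        # Simulated response: activity at both tagged rates buried in noise.
        rng = np.random.default_rng(0)
        x = (0.5 * np.sin(2 * np.pi * tone_rate * t)
             + 0.8 * np.sin(2 * np.pi * triplet_rate * t)
             + rng.standard_normal(t.size))

        amp = np.abs(np.fft.rfft(x)) * 2.0 / t.size         # single-sided amplitude spectrum
        freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

        for label, f in [("tone rate", tone_rate), ("triplet rate", triplet_rate)]:
            idx = np.argmin(np.abs(freqs - f))
            noise = np.mean(np.r_[amp[idx - 12:idx - 2], amp[idx + 3:idx + 13]])
            print(f"{label:>12}: amplitude = {amp[idx]:.3f}, neighbouring-bin mean = {noise:.3f}")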

  2. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study.

    Science.gov (United States)

    Stekelenburg, Jeroen J; Vroomen, Jean

    2015-11-11

    The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one's own motor actions. Auditory potentials are also attenuated when a sound is accompanied by a video of anticipatory visual motion that reliably predicts the sound. Here, we investigated whether the neural underpinnings of prediction of upcoming auditory stimuli are similar for motor-auditory (MA) and visual-auditory (VA) events using a stimulus omission paradigm. In the MA condition, a finger tap triggered the sound of a handclap, whereas in the VA condition the same sound was accompanied by a video showing the handclap. In both conditions, the auditory stimulus was omitted in either 50% or 12% of the trials. These auditory omissions induced early and mid-latency ERP components (oN1 and oN2, presumably reflecting prediction and prediction error), and subsequent higher-order error evaluation processes. The oN1 and oN2 of MA and VA were alike in amplitude, topography, and neural sources even though the prediction originates from different brain areas (motor versus visual cortex). This suggests that MA and VA predictions activate a sensory template of the sound in auditory cortex. This article is part of a Special Issue entitled SI: Prediction and Attention. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Interaction of language, auditory and memory brain networks in auditory verbal hallucinations

    NARCIS (Netherlands)

    Curcic-Blake, Branislava; Ford, Judith M.; Hubl, Daniela; Orlov, Natasza D.; Sommer, Iris E.; Waters, Flavie; Allen, Paul; Jardri, Renaud; Woodruff, Peter W.; David, Olivier; Mulert, Christoph; Woodward, Todd S.; Aleman, Andre

    Auditory verbal hallucinations (AVH) occur in psychotic disorders, but also as a symptom of other conditions and even in healthy people. Several current theories on the origin of AVH converge, with neuroimaging studies suggesting that the language, auditory and memory/limbic networks are of

  5. External auditory canal carcinoma treatment

    International Nuclear Information System (INIS)

    Matsuda, Yoichi; Ueda, Yoshihisa; Kurita, Tomoyuki; Nakashima, Tadashi

    2010-01-01

    External auditory canal (EAC) carcinomas are relatively rare conditions that lack an established treatment strategy. We analyzed treatment modalities and outcomes in 32 cases of EAC squamous cell carcinoma treated between 1980 and 2008. The subjects, 17 men and 15 women ranging from 33 to 92 years old (average: 66), were classified by Arriaga's tumor staging as 12 T1, 5 T2, 6 T3, and 9 T4. Survival was calculated by the Kaplan-Meier method. Disease-specific 5-year survival was 100% for T1 and T2, 44% for T3, and 33% for T4. In contrast to the 100% 5-year survival for T1+T2 cancer, 5-year survival for T3+T4 cancer was 37%, with high recurrence due to positive surgical margins. During the first 22 of the 29 years surveyed, we mainly performed surgery, with irradiation or chemotherapy selected for early disease or as postoperative therapy for cases with positive surgical margins. During those 22 years, 5-year survival for T3+T4 cancer was 20%. After we started superselective intra-arterial (IA) rapid infusion chemotherapy combined with radiotherapy in 2003, we achieved negative surgical margins for advanced disease, and 5-year survival for T3+T4 cancer rose to 80%. (author)
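
    Survival in studies like this one is computed with the Kaplan-Meier product-limit method; a minimal estimator sketch is given below. The follow-up times and event indicators are invented placeholders, not the patient data from this series.

        import numpy as np

        def kaplan_meier(times, events):
            """Kaplan-Meier product-limit estimate of the survival function S(t)."""
            times = np.asarray(times, dtype=float)
            events = np.asarray(events, dtype=int)
            surv, s = [], 1.0
            for t in np.unique(times[events == 1]):         # distinct event (death) times
                at_risk = np.sum(times >= t)                # patients still under observation at t
                deaths = np.sum((times == t) & (events == 1))
                s *= 1.0 - deaths / at_risk                 # product-limit step
                surv.append((t, s))
            return surv

        # Hypothetical follow-up in months (1 = disease-specific death, 0 = censored).
        follow_up = [12, 20, 26, 30, 41, 55, 60, 60, 60, 60]
        died      = [ 1,  1,  0,  1,  1,  0,  0,  0,  0,  0]
        for t, s in kaplan_meier(follow_up, died):
            print(f"t = {t:5.1f} months: S(t) = {s:.2f}")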

  6. Manipulation of Auditory Inputs as Rehabilitation Therapy for Maladaptive Auditory Cortical Reorganization

    Directory of Open Access Journals (Sweden)

    Hidehiko Okamoto

    2018-01-01

    Full Text Available Neurophysiological and neuroimaging data suggest that the brains of not only children but also adults are reorganized based on sensory inputs and behaviors. Plastic changes in the brain are generally beneficial; however, maladaptive cortical reorganization in the auditory cortex may lead to hearing disorders such as tinnitus and hyperacusis. Recent studies have attempted to noninvasively visualize pathological neural activity in the living human brain and to reverse maladaptive cortical reorganization by suitable manipulation of auditory inputs in order to alleviate detrimental auditory symptoms. The effects of the manipulation of auditory inputs on the maladaptively reorganized brain are reviewed herein. The findings obtained indicate that rehabilitation therapy based on the manipulation of auditory inputs is an effective and safe approach for hearing disorders. The appropriate manipulation of sensory inputs, guided by the visualization of pathological brain activities using recent neuroimaging techniques, may contribute to the establishment of new clinical applications for affected individuals.

  7. Improvement of auditory hallucinations and reduction of primary auditory area's activation following TMS

    International Nuclear Information System (INIS)

    Giesel, Frederik L.; Mehndiratta, Amit; Hempel, Albrecht; Hempel, Eckhard; Kress, Kai R.; Essig, Marco; Schröder, Johannes

    2012-01-01

    Background: In the present case study, improvement of auditory hallucinations following transcranial magnetic stimulation (TMS) therapy was investigated with respect to activation changes in the auditory cortices. Methods: Using functional magnetic resonance imaging (fMRI), activation of the auditory cortices was assessed prior to and after a 4-week series of TMS over the left superior temporal gyrus in a schizophrenic patient with medication-resistant auditory hallucinations. Results: Hallucinations decreased slightly after the third and profoundly after the fourth week of TMS. Activation in the primary auditory area decreased, whereas activation in the operculum and insula remained stable. Conclusions: Combining TMS with repeated fMRI is a promising approach for elucidating the physiological changes induced by TMS.

  8. The Rabbit Stream Cipher

    DEFF Research Database (Denmark)

    Boesgaard, Martin; Vesterager, Mette; Zenner, Erik

    2008-01-01

    The stream cipher Rabbit was first presented at FSE 2003, and no attacks against it have been published until now. With a measured encryption/decryption speed of 3.7 clock cycles per byte on a Pentium III processor, Rabbit also provides very high performance. This paper gives a concise description of the Rabbit design and some of the cryptanalytic results available....

  9. Music Streaming in Denmark

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Rex

    This report analyses how a ’per user’ settlement model differs from the ‘pro rata’ model currently used. The analysis is based on data for all streams by WiMP users in Denmark during August 2013. The analysis has been conducted in collaboration with Christian Schlelein from Koda on the basis of d...

  10. Academic streaming in Europe

    DEFF Research Database (Denmark)

    Falaschi, Alessandro; Mønster, Dan; Doležal, Ivan

    2004-01-01

    The TF-NETCAST task force was active from March 2003 to March 2004, and during this time the members worked on various aspects of streaming media related to the ultimate goal of setting up common services and infrastructures to enable netcasting of high quality content to the academic community...

  11. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization: to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  12. Global aspects of stream evolution in the solar wind

    International Nuclear Information System (INIS)

    Gosling, J.T.

    1984-01-01

    A spatially variable coronal expansion, when coupled with solar rotation, leads to the formation of high-speed solar wind streams which evolve considerably with increasing heliocentric distance. Initially the streams steepen for simple kinematic reasons, but this steepening is resisted by pressure forces, leading eventually to the formation of forward-reverse shock pairs in the distant heliosphere. The basic physical processes responsible for stream steepening and evolution are explored, and model calculations are compared with actual spacecraft observations of the process. The solar wind stream evolution problem is relatively well understood both observationally and theoretically. Tools developed in achieving this understanding should be applicable to other astrophysical systems where a spatially or temporally variable outflow is associated with a rotating object. 27 references, 13 figures
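
    The purely kinematic part of this steepening can be illustrated by propagating solar wind parcels ballistically, each at its constant initial speed: fast parcels released behind slow ones close the gap and compress the leading edge of the stream until, in reality, pressure forces and shocks intervene. The speed profile, release cadence, and distances below are placeholders for illustration only, and pressure forces are ignored.

        import numpy as np

        AU = 1.496e8          # km
        r0 = 0.1 * AU         # inner boundary near the Sun (km)

        # Hypothetical coronal expansion: slow wind with a fast-stream interval,
        # sampled as parcels released at successive times from the rotating source.
        t_release = np.linspace(0.0, 10 * 86400.0, 400)                                # s, over 10 days
        speed = 400.0 + 350.0 * np.exp(-((t_release - 4 * 86400.0) / 86400.0) ** 2)    # km/s

        def parcel_radii(t_obs):
            """Ballistic (constant-speed) positions of all parcels released before t_obs."""
            released = t_release <= t_obs
            return r0 + speed[released] * (t_obs - t_release[released])

        for day in (1, 3, 5, 7):
            r = parcel_radii(day * 86400.0)
            dr = np.diff(r)                       # spacing between successively released parcels
            overtaking = (dr > 0).any()           # a later (faster) parcel has passed an earlier one
            print(f"day {day}: min |spacing| = {np.abs(dr).min() / AU:.4f} AU, "
                  f"overtaking (shock-forming pile-up) = {overtaking}")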

  13. Auditory and motor imagery modulate learning in music performance.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  15. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  16. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  17. Absence of auditory 'global interference' in autism.

    Science.gov (United States)

    Foxton, Jessica M; Stewart, Mary E; Barnard, Louise; Rodgers, Jacqui; Young, Allan H; O'Brien, Gregory; Griffiths, Timothy D

    2003-12-01

    There has been considerable recent interest in the cognitive style of individuals with Autism Spectrum Disorder (ASD). One theory, that of weak central coherence, concerns an inability to combine stimulus details into a coherent whole. Here we test this theory in the case of sound patterns, using a new definition of the details (local structure) and the coherent whole (global structure). Thirteen individuals with a diagnosis of autism or Asperger's syndrome and 15 control participants were administered auditory tests, where they were required to match local pitch direction changes between two auditory sequences. When the other local features of the sequence pairs were altered (the actual pitches and relative time points of pitch direction change), the control participants obtained lower scores compared with when these details were left unchanged. This can be attributed to interference from the global structure, defined as the combination of the local auditory details. In contrast, the participants with ASD did not obtain lower scores in the presence of such mismatches. This was attributed to the absence of interference from an auditory coherent whole. The results are consistent with the presence of abnormal interactions between local and global auditory perception in ASD.

  18. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine eSignoret

    2011-07-01

    Full Text Available Although it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge could also improve auditory detection when listeners had to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences in the stimuli.

  19. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Full Text Available Background and Aim: Blockade of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. Because adenosine receptors are present in almost all brain areas, including the central auditory pathway, caffeine may alter conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2 and 3 mg/kg BW caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with the Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared with the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg BW caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg BW caffeine (p<0.01). Conclusion: The increase in glutamate levels resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.
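
    A minimal sketch of this kind of within-subject analysis (a Friedman test across the three caffeine conditions, followed by pairwise Wilcoxon signed-rank tests) is given below; the latency values are invented placeholders, not the recorded ABR data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        n = 43                                              # subjects

        # Hypothetical wave V latencies (ms) after 0, 2 and 3 mg/kg BW caffeine.
        lat_0mg = rng.normal(5.70, 0.20, n)
        lat_2mg = lat_0mg - rng.normal(0.08, 0.05, n)       # slightly shorter after caffeine
        lat_3mg = lat_0mg - rng.normal(0.12, 0.05, n)

        chi2, p = stats.friedmanchisquare(lat_0mg, lat_2mg, lat_3mg)
        print(f"Friedman test across the three doses: chi2 = {chi2:.2f}, p = {p:.4f}")

        for label, dosed in [("2 mg/kg", lat_2mg), ("3 mg/kg", lat_3mg)]:
            w, p_pair = stats.wilcoxon(lat_0mg, dosed)      # paired signed-rank test
            print(f"Wilcoxon 0 mg vs {label}: W = {w:.1f}, p = {p_pair:.4f}")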

  20. Auditory memory for temporal characteristics of sound.

    Science.gov (United States)

    Zokoll, Melanie A; Klump, Georg M; Langemann, Ulrike

    2008-05-01

    This study evaluates auditory memory for variations in the rate of sinusoidal amplitude modulation (SAM) of noise bursts in the European starling (Sturnus vulgaris). To estimate the extent of the starling's auditory short-term memory store, a delayed non-matching-to-sample paradigm was applied. The birds were trained to discriminate between a series of identical "sample stimuli" and a single "test stimulus". The birds classified SAM rates of sample and test stimuli as being either the same or different. Memory performance of the birds was measured as the percentage of correct classifications. Auditory memory persistence time was estimated as a function of the delay between sample and test stimuli. Memory performance was significantly affected by the delay between sample and test and by the number of sample stimuli presented before the test stimulus, but was not affected by the difference in SAM rate between sample and test stimuli. The individuals' auditory memory persistence times varied between 2 and 13 s. The starlings' auditory memory persistence in the present study for signals varying in the temporal domain was significantly shorter compared to that of a previous study (Zokoll et al. in J Acoust Soc Am 121:2842, 2007) applying tonal stimuli varying in the spectral domain.
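
    One common way to derive a persistence time from percent-correct performance at increasing delays is to fit a decay curve and read off where performance drops to a criterion above chance. The sketch below does this with an exponential decay toward the 50% chance level; the data points, decay form, and criterion are illustrative assumptions, not the fitting procedure used in the starling study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical percent-correct performance at increasing sample-test delays (s).
        delays = np.array([1.0, 2.0, 4.0, 8.0, 13.0, 20.0])
        pct_correct = np.array([88.0, 84.0, 77.0, 66.0, 58.0, 52.0])

        def decay(d, span, tau, floor=50.0):
            """Exponential decay from (floor + span) toward the 50% chance floor."""
            return floor + span * np.exp(-d / tau)

        (span, tau), _ = curve_fit(decay, delays, pct_correct, p0=(40.0, 5.0))

        criterion = 62.0                                    # arbitrary above-chance criterion
        persistence = -tau * np.log((criterion - 50.0) / span)
        print(f"fitted span = {span:.1f}%, tau = {tau:.1f} s; "
              f"performance stays above {criterion:.0f}% for ~{persistence:.1f} s")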

  1. Streaming Italian horror cinema in the United Kingdom: Lovefilm Instant

    OpenAIRE

    Baschiera, Stefano

    2017-01-01

    This article investigates the distribution of Italian horror cinema in the age of video streaming, analyzing its presence and categorization on the platform Lovefilm Instant UK in order to examine the importance of ‘niche’ in what is known as the long tail of online distribution and the online availability of exploitation films. I argue that looking at the streaming presence of Italian horror and comparing it to its prior distribution on home video formats (in particular VHS and DVD) we ...

  2. Temporal integration: intentional sound discrimination does not modulate stimulus-driven processes in auditory event synthesis.

    Science.gov (United States)

    Sussman, Elyse; Winkler, István; Kreuzer, Judith; Saher, Marieke; Näätänen, Risto; Ritter, Walter

    2002-12-01

    Our previous study showed that the auditory context could influence whether two successive acoustic changes occurring within the temporal integration window (approximately 200 ms) were pre-attentively encoded as a single auditory event or as two discrete events (Cogn Brain Res 12 (2001) 431). The aim of the current study was to assess whether top-down processes could influence the stimulus-driven processes in determining what constitutes an auditory event. The electroencephalogram (EEG) was recorded from 11 scalp electrodes to frequently occurring standard and infrequently occurring deviant sounds. Within the stimulus blocks, deviants either occurred only in pairs (successive feature changes) or both singly and in pairs. Event-related potential indices of change and target detection, the mismatch negativity (MMN) and the N2b component, respectively, were compared with the simultaneously measured performance in discriminating the deviants. Even though subjects could voluntarily distinguish the two successive auditory feature changes from each other, which was also indicated by the elicitation of the N2b target-detection response, top-down processes did not modify the event organization reflected by the MMN response. Top-down processes can extract elemental auditory information from a single integrated acoustic event, but the extraction occurs at a later processing stage than the one whose outcome is indexed by MMN. Initial processes of auditory event-formation are fully governed by the context within which the sounds occur. Perception of the deviants as two separate sound events (the top-down effects) did not change the initial neural representation of the same deviants as one event (indexed by the MMN), without a corresponding change in the stimulus-driven sound organization.

  3. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity is strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by the Western music practice and their perceptual consonance is well-predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  4. Comet P/Machholtz and the Quadrantid meteor stream

    International Nuclear Information System (INIS)

    Mcintosh, B.A.

    1990-01-01

    Attention is drawn to the suggestive similarities between the calculated perturbation behavior of Comet P/Machholtz 1986 VIII, on the one hand, and those of the Quadrantid, Delta Aquarid, and Arietid meteor streams on the other. There appears to be adequate evidence that Comets P/Machholtz and 1491-I, together with the three meteor streams, form a related complex controlled by Jupiter's gravitational perturbations; there is, however, no comparably compelling information bearing on the questions of parent-offspring or sibling relationships among these comets and meteor streams. 13 refs

  5. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    Science.gov (United States)

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  6. Analysis of hydraulic characteristics for stream diversion in small stream

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang-Jin; Jun, Kye-Won [Chungbuk National University, Cheongju(Korea)

    2001-10-31

    This study analyzes the hydraulic characteristics of a stream diversion reach in a small stream using numerical model tests, providing baseline data for flood analysis and for understanding stream flow characteristics. The hydraulic characteristics of the Seoknam stream were analyzed using the computer models HEC-RAS (a one-dimensional model) and RMA2 (a two-dimensional finite element model). The results show that RMA2, which simulates the left overbank, main channel, and right overbank separately, is more effective than HEC-RAS for analyzing flow where channel bends, steep slopes, and complex bed forms affect stream flow characteristics. (author). 13 refs., 3 tabs., 5 figs.

  7. Effects of agricultural and urban impacts on macroinvertebrates assemblages in streams (Rio Grande do Sul, Brazil

    Directory of Open Access Journals (Sweden)

    Luiz Ubiratan Hepp

    2010-02-01

    Full Text Available This study evaluates the effects of agricultural and urban activities on the structure and composition of benthic communities of streams in the state of Rio Grande do Sul, Brazil. Benthic macroinvertebrates were collected in streams influenced by urbanization and agriculture and in streams with no anthropogenic disturbances (reference streams). Organism density was higher in urban streams than in streams in the other two areas. Taxonomic richness and the Shannon diversity index were higher in reference streams. The benthic fauna composition differed significantly among land uses. The classification and ordination analyses corroborated the results of the variance analyses, demonstrating the formation of clusters corresponding to streams with similar land use. Seasonality was also found to influence the benthic community, though to a lesser degree than land use.

  8. Visual and Auditory Memory in Spelling: An Exploratory Study

    Science.gov (United States)

    Day, J. B.; Wedell, K.

    1972-01-01

    Using visual and auditory memory sequencing tests with 140 children aged 8-10, this study aimed to investigate the assumption that visual and auditory memory are important component functions in children's spelling. (Author)

  9. Naftidrofuryl affects neurite regeneration by injured adult auditory neurons.

    Science.gov (United States)

    Lefebvre, P P; Staecker, H; Moonen, G; van de Water, T R

    1993-07-01

    Afferent auditory neurons are essential for the transmission of auditory information from Corti's organ to the central auditory pathway. Auditory neurons are very sensitive to acute insult and have a limited ability to regenerate injured neuronal processes. Therefore, these neurons appear to be a limiting factor in the restoration of hearing function following an injury to the peripheral auditory receptor. In a previous study, nerve growth factor (NGF) was shown to stimulate neurite repair but not survival of injured auditory neurons. In this study, we have demonstrated a neuritogenesis-promoting effect of naftidrofuryl in an in vitro model for injury to adult auditory neurons, i.e. dissociated cell cultures of adult rat spiral ganglia. Conversely, naftidrofuryl did not have any demonstrable survival-promoting effect on these in vitro preparations of injured auditory neurons. The potential uses of this drug as a therapeutic agent in acute diseases of the inner ear are discussed in the light of these observations.

  10. Central auditory processing outcome after stroke in children

    Directory of Open Access Journals (Sweden)

    Karla M. I. Freiria Elias

    2014-09-01

    Full Text Available Objective To investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. Method 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and the staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to the performance in auditory analysis ability. Results Performance was similar between groups in auditory closure ability, with pronounced deficits in the selective attention and temporal processing abilities. Most children with stroke showed moderately impaired auditory ability. Conclusion Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.

  11. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood

  12. Source reliability in auditory health persuasion : Its antecedents and consequences

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie

    2015-01-01

    Persuasive health messages can be presented through an auditory channel, thereby enhancing the salience of the source, making it fundamentally different from written or pictorial information. We focused on the determinants of perceived source reliability in auditory health persuasion by

  13. Musical experience shapes top-down auditory mechanisms: evidence from masking and auditory attention performance.

    Science.gov (United States)

    Strait, Dana L; Kraus, Nina; Parbery-Clark, Alexandra; Ashley, Richard

    2010-03-01

    A growing body of research suggests that cognitive functions, such as attention and memory, drive perception by tuning sensory mechanisms to relevant acoustic features. Long-term musical experience also modulates lower-level auditory function, although the mechanisms by which this occurs remain uncertain. In order to tease apart the mechanisms that drive perceptual enhancements in musicians, we posed the question: do well-developed cognitive abilities fine-tune auditory perception in a top-down fashion? We administered a standardized battery of perceptual and cognitive tests to adult musicians and non-musicians, including tasks either more or less susceptible to cognitive control (e.g., backward versus simultaneous masking) and more or less dependent on auditory or visual processing (e.g., auditory versus visual attention). Outcomes indicate lower perceptual thresholds in musicians specifically for auditory tasks that relate to cognitive abilities, such as backward masking and auditory attention. These enhancements were observed in the absence of group differences for the simultaneous masking and visual attention tasks. Our results suggest that long-term musical practice strengthens cognitive functions and that these functions benefit auditory skills. Musical training bolsters higher-level mechanisms that, when impaired, relate to language and literacy deficits. Thus, musical training may serve to lessen the impact of these deficits by strengthening the corticofugal system for hearing. 2009 Elsevier B.V. All rights reserved.

  14. High resolution computed tomography of auditory ossicles

    International Nuclear Information System (INIS)

    Isono, M.; Murata, K.; Ohta, F.; Yoshida, A.; Ishida, O.; Kinki Univ., Osaka

    1990-01-01

    Auditory ossicular sections were scanned at section thicknesses (mm)/section interspaces (mm) of 1.5/1.5 (61 patients), 1.0/1.0 (13 patients) or 1.5/1.0 (33 patients). At any type of section thickness/interspace, the malleal and incudal structures were observed with almost equal frequency. The region of the incudostapedial joint and each component part of the stapes were shown more frequently at a section interspace of 1.0 mm than at 1.5 mm. The visualization frequency of each auditory ossicular component on two or more serial sections was investigated. At a section thickness/section interspace of 1.5/1.5, the visualization rates were low except for large components such as the head of the malleus and the body of the incus, but at a slice interspace of 1.0 mm, they were high for most components of the auditory ossicles. (orig.)

  15. The Relationship between Types of Attention and Auditory Processing Skills: Reconsidering Auditory Processing Disorder Diagnosis

    Science.gov (United States)

    Stavrinos, Georgios; Iliadou, Vassiliki-Maria; Edwards, Lindsey; Sirimanna, Tony; Bamiou, Doris-Eva

    2018-01-01

    Measures of attention have been found to correlate with specific auditory processing tests in samples of children suspected of Auditory Processing Disorder (APD), but these relationships have not been adequately investigated. Despite evidence linking auditory attention and deficits/symptoms of APD, measures of attention are not routinely used in APD diagnostic protocols. The aim of the study was to examine the relationship between auditory and visual attention tests and auditory processing tests in children with APD and to assess whether a proposed diagnostic protocol for APD, including measures of attention, could provide useful information for APD management. A pilot study including 27 children, aged 7–11 years, referred for APD assessment was conducted. The validated Test of Everyday Attention for Children, with visual and auditory attention tasks, the Listening in Spatialized Noise-Sentences test, the Children's Communication Checklist questionnaire and tests from a standard APD diagnostic test battery were administered. Pearson's partial correlation analysis examining the relationship between these tests and Cochran's Q test analysis comparing proportions of diagnosis under each proposed battery were conducted. Divided auditory and divided auditory-visual attention strongly correlated with the dichotic digits test, r = 0.68, p attention battery identified as having Attention Deficits (ADs). The proposed APD battery excluding AD cases did not have a significantly different diagnosis proportion than the standard APD battery. Finally, the newly proposed diagnostic battery, identifying an inattentive subtype of APD, identified five children who would have otherwise been considered not to have ADs. The findings show that a subgroup of children with APD demonstrates underlying sustained and divided attention deficits. Attention deficits in children with APD appear to be centred around the auditory modality, but further examination of types of attention in both
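
    Pearson's partial correlation of the kind used here can be computed by regressing the covariates out of both variables and correlating the residuals. The sketch below shows this with numpy on invented scores; the variable names, covariate, and values are placeholders, not the study's data.

        import numpy as np

        def partial_corr(x, y, covars):
            """Pearson correlation between x and y after regressing out the covariates."""
            Z = np.column_stack([np.ones(len(x))] + list(covars))    # design matrix with intercept
            rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]        # residuals of x
            ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]        # residuals of y
            return np.corrcoef(rx, ry)[0, 1]

        rng = np.random.default_rng(3)
        n = 27
        age = rng.uniform(7.0, 11.0, n)                              # covariate (years)
        divided_attention = 10.0 + 0.8 * age + rng.normal(0.0, 1.5, n)               # hypothetical scores
        dichotic_digits = 40.0 + 4.0 * age + 2.0 * divided_attention + rng.normal(0.0, 5.0, n)

        r = partial_corr(dichotic_digits, divided_attention, [age])
        print(f"partial r (dichotic digits ~ divided attention | age) = {r:.2f}")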

  16. What determines auditory distraction? On the roles of local auditory changes and expectation violations.

    Directory of Open Access Journals (Sweden)

    Jan P Röer

    Full Text Available Both the acoustic variability of a distractor sequence and the degree to which it violates expectations are important determinants of auditory distraction. In four experiments we examined the relative contribution of local auditory changes on the one hand and expectation violations on the other hand in the disruption of serial recall by irrelevant sound. We present evidence for a greater disruption by auditory sequences ending in unexpected steady state distractor repetitions compared to auditory sequences with expected changing state endings even though the former contained fewer local changes. This effect was demonstrated with piano melodies (Experiment 1) and speech distractors (Experiment 2). Furthermore, it was replicated when the expectation violation occurred after the encoding of the target items (Experiment 3), indicating that the items' maintenance in short-term memory was disrupted by attentional capture and not their encoding. This seems to be primarily due to the violation of a model of the specific auditory distractor sequences because the effect vanishes and even reverses when the experiment provides no opportunity to build up a specific neural model about the distractor sequence (Experiment 4). Nevertheless, the violation of abstract long-term knowledge about auditory regularities seems to cause a small and transient capture effect: Disruption decreased markedly over the course of the experiments indicating that participants habituated to the unexpected distractor repetitions across trials. The overall pattern of results adds to the growing literature that the degree to which auditory distractors violate situation-specific expectations is a more important determinant of auditory distraction than the degree to which a distractor sequence contains local auditory changes.

  17. Streaming gravity mode instability

    International Nuclear Information System (INIS)

    Wang Shui.

    1989-05-01

    In this paper, we study the stability of a current sheet with a sheared flow in a gravitational field which is perpendicular to the magnetic field and plasma flow. This mixing mode caused by a combined role of the sheared flow and gravity is named the streaming gravity mode instability. The conditions of this mode instability are discussed for an ideal four-layer model in the incompressible limit. (author). 5 refs

  18. Autonomous Byte Stream Randomizer

    Science.gov (United States)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, the data transmission has to be efficient without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces can be reconstructed back into one. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. This software is a cornerstone capability possessing the ability to generate the same cryptographically secure sequence on different machines and time intervals, thus allowing this software to be used more heavily in net-centric environments where data transfer bandwidth is limited.
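    The mechanism described above (an in-place Fisher-Yates shuffle driven by a seeded random sequence, reversible only with the same seed) can be sketched in a few lines. This is a minimal illustration rather than the actual flight software: Python's random.Random stands in for the cryptographically secure generator, and the payload is a made-up example.

```python
import random

def randomize(data: bytes, seed: int) -> bytes:
    """Shuffle a byte string in place with the Fisher-Yates algorithm, driven by a seeded PRNG."""
    buf = bytearray(data)
    rng = random.Random(seed)               # stand-in for a cryptographically secure generator
    for i in range(len(buf) - 1, 0, -1):
        j = rng.randint(0, i)               # unbiased index in [0, i]
        buf[i], buf[j] = buf[j], buf[i]
    return bytes(buf)

def derandomize(data: bytes, seed: int) -> bytes:
    """Reconstruct the original order by replaying the same swap sequence in reverse."""
    buf = bytearray(data)
    rng = random.Random(seed)
    swaps = [(i, rng.randint(0, i)) for i in range(len(buf) - 1, 0, -1)]
    for i, j in reversed(swaps):
        buf[i], buf[j] = buf[j], buf[i]     # each swap is its own inverse
    return bytes(buf)

payload = b"example data frame"
assert derandomize(randomize(payload, seed=20130101), seed=20130101) == payload
```

    Because each swap is its own inverse, replaying the identical swap sequence in reverse order restores the original byte order, which is why possession of the seed suffices for reconstruction.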

  19. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning), as they practiced novel melodies, and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the

  20. The LHCb Turbo stream

    Energy Technology Data Exchange (ETDEWEB)

    Puig, A., E-mail: albert.puig@cern.ch

    2016-07-11

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 with a selection of physics analyses. It is anticipated that the turbo stream will be adopted by an increasing number of analyses during the remainder of LHC Run II (2015–2018) and ultimately in Run III (starting in 2020) with the upgraded LHCb detector.

  1. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques.

    Science.gov (United States)

    Kryklywy, James H; Macpherson, Ewan A; Mitchell, Derek G V

    2018-04-01

    Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances, and sometimes having little effect. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory 'what' but not 'where' processing pathway. The current study aims to further investigate this dissociation using a more recently emerging multi-voxel pattern analysis searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight multi-voxel pattern analysis was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
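    The searchlight logic used above can be illustrated with a short sketch: a classifier is trained and cross-validated on the voxels inside a small sphere centred on each voxel in turn, and the resulting accuracy map shows where local activity patterns predict sound location or emotion. This is a generic sketch, not the authors' pipeline; the linear SVM, the 5-fold cross-validation, the radius, and the data layout are assumptions made for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def searchlight_accuracy(patterns, labels, coords, radius=2.0):
    """Cross-validated decoding accuracy in a spherical neighbourhood centred on each voxel.

    patterns : (n_trials, n_voxels) array of response estimates
    labels   : (n_trials,) array of condition labels (e.g. sound location or emotion category)
    coords   : (n_voxels, 3) array of voxel coordinates
    """
    accuracy = np.zeros(coords.shape[0])
    for v in range(coords.shape[0]):
        in_sphere = np.linalg.norm(coords - coords[v], axis=1) <= radius   # voxels in the searchlight
        scores = cross_val_score(SVC(kernel="linear"), patterns[:, in_sphere], labels, cv=5)
        accuracy[v] = scores.mean()                                        # assigned to the centre voxel
    return accuracy
```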

  2. Exploring the extent and function of higher-order auditory cortex in rhesus monkeys.

    Science.gov (United States)

    Poremba, Amy; Mishkin, Mortimer

    2007-07-01

    Just as cortical visual processing continues far beyond the boundaries of early visual areas, so too does cortical auditory processing continue far beyond the limits of early auditory areas. In passively listening rhesus monkeys examined with metabolic mapping techniques, cortical areas reactive to auditory stimulation were found to include the entire length of the superior temporal gyrus (STG) as well as several other regions within the temporal, parietal, and frontal lobes. Comparison of these widespread activations with those from an analogous study in vision supports the notion that audition, like vision, is served by several cortical processing streams, each specialized for analyzing a different aspect of sensory input, such as stimulus quality, location, or motion. Exploration with different classes of acoustic stimuli demonstrated that most portions of STG show greater activation on the right than on the left regardless of stimulus class. However, there is a striking shift to left-hemisphere "dominance" during passive listening to species-specific vocalizations, though this reverse asymmetry is observed only in the region of temporal pole. The mechanism for this left temporal pole "dominance" appears to be suppression of the right temporal pole by the left hemisphere, as demonstrated by a comparison of the results in normal monkeys with those in split-brain monkeys.

  3. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinver...

  4. Stream processing health card application.

    Science.gov (United States)

    Polat, Seda; Gündem, Taflan Imre

    2012-10-01

    In this paper, we propose a data stream management system embedded to a smart card for handling and storing user specific summaries of streaming data coming from medical sensor measurements and/or other medical measurements. The data stream management system that we propose for a health card can handle the stream data rates of commonly known medical devices and sensors. It incorporates a type of context awareness feature that acts according to user specific information. The proposed system is cheap and provides security for private data by enhancing the capabilities of smart health cards. The stream data management system is tested on a real smart card using both synthetic and real data.

  5. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    -ear transfer function), the asymmetry of the auditory filter changed from steeper high-frequency slopes at 1000 Hz to steeper low-frequency slopes below 100 Hz. Increasing steepness at low-frequencies of the middle-ear high-pass filter is thought to cause this effect. The dynamic range of the auditory filter...... was found to steadily decrease with decreasing center frequency. Although the observed decrease in filter bandwidth with decreasing center frequency was only approximately monotonic, the preliminary data indicates the filter bandwidth does not stabilize around 100 Hz, e.g. it still decreases below...

  6. The storage and recall of auditory memory.

    Science.gov (United States)

    Nebenzahl, I; Albeck, Y

    1990-01-01

    The architecture of auditory memory is investigated. The auditory information is assumed to be represented by frequency-time (f-t) patterns. With the help of a psychophysical experiment it is demonstrated that the storage of these patterns is highly folded, in the sense that a long signal is broken into many short stretches before being stored in memory. Recognition takes place by correlating newly heard input in short-term memory with information previously stored in long-term memory. We show that this correlation is performed after the input is accumulated and held statically in short-term memory.
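    As a rough illustration of the recognition-by-correlation idea described above, the sketch below matches a short frequency-time stretch held in a notional short-term store against templates in a long-term store. The equal-sized arrays, the z-scored correlation measure, and the dictionary of templates are simplifying assumptions, not details taken from the study.

```python
import numpy as np

def recognize(segment, templates):
    """Return the stored template that correlates best with a short f-t pattern.

    segment   : (n_freq, n_time) array, the stretch held in short-term memory
    templates : dict mapping label -> (n_freq, n_time) array stored in long-term memory
    """
    def zscore(x):
        return (x - x.mean()) / (x.std() + 1e-12)

    s = zscore(segment).ravel()
    scores = {label: float(np.dot(s, zscore(t).ravel()) / s.size)
              for label, t in templates.items()}
    best = max(scores, key=scores.get)
    return best, scores
```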

  7. Auditory processing in autism spectrum disorder

    DEFF Research Database (Denmark)

    Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher

    2017-01-01

    Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism...... a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017, 10: 1857–1865....

  8. Feature conjunctions and auditory sensory memory.

    Science.gov (United States)

    Sussman, E; Gomes, H; Nousak, J M; Ritter, W; Vaughan, H G

    1998-05-18

    This study sought to obtain additional evidence that transient auditory memory stores information about conjunctions of features on an automatic basis. The mismatch negativity of event-related potentials was employed because its operations are based on information that is stored in transient auditory memory. The mismatch negativity was found to be elicited by a tone that differed from standard tones in a combination of its perceived location and frequency. The result lends further support to the hypothesis that the system upon which the mismatch negativity relies processes stimuli in an holistic manner. Copyright 1998 Elsevier Science B.V.

  9. Auditory Hypersensitivity in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Lucker, Jay R.

    2013-01-01

    A review of records was completed to determine whether children with auditory hypersensitivities have difficulty tolerating loud sounds due to auditory-system factors or some other factors not directly involving the auditory system. Records of 150 children identified as not meeting autism spectrum disorders (ASD) criteria and another 50 meeting…

  10. Comorbidity of Auditory Processing, Language, and Reading Disorders

    Science.gov (United States)

    Sharma, Mridula; Purdy, Suzanne C.; Kelly, Andrea S.

    2009-01-01

    Purpose: The authors assessed comorbidity of auditory processing disorder (APD), language impairment (LI), and reading disorder (RD) in school-age children. Method: Children (N = 68) with suspected APD and nonverbal IQ standard scores of 80 or more were assessed using auditory, language, reading, attention, and memory measures. Auditory processing…

  11. A test battery measuring auditory capabilities of listening panels

    DEFF Research Database (Denmark)

    Ghani, Jody; Ellermeier, Wolfgang; Zimmer, Karin

    2005-01-01

    a battery of tests covering a larger range of auditory capabilities in order to assess individual listeners. The format of all tests is kept as 'objective' as possible by using a three-alternative forced-choice paradigm in which the subject must choose which of the sound samples is different, thus keeping...... the instruction to the subjects simple and common for all tests. Both basic (e.g. frequency discrimination) and complex (e.g. profile analysis) psychoacoustic tests are covered in the battery and a threshold of discrimination or detection is obtained for each test. Data were collected on 24 listeners who had been...... recruited for participation in an expert listening panel for evaluating the sound quality of hi-fi audio systems. The test battery data were related to the actual performance of the listeners when judging the degradation in quality produced by audio codecs....

  12. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potentials (ERP) speller and to enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility to enhance stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, as well as a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller. These results are important and motivate future studies to investigate the reasons for these differences. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level < 3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.

  13. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans.

    Science.gov (United States)

    Cui, Zhuang; Wang, Qian; Gao, Yayue; Wang, Jing; Wang, Mengyang; Teng, Pengfei; Guan, Yuguang; Zhou, Jian; Li, Tianfu; Luan, Guoming; Li, Liang

    2017-01-01

    The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamic of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. While in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.

  14. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time up to hundreds of milliseconds and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamic of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule. While in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.

  15. Translational control of auditory imprinting and structural plasticity by eIF2α

    Science.gov (United States)

    Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L

    2016-01-01

    The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET) and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders. DOI: http://dx.doi.org/10.7554/eLife.17197.001 PMID:28009255

  16. Altered auditory BOLD response to conspecific birdsong in zebra finches with stuttered syllables.

    Directory of Open Access Journals (Sweden)

    Henning U Voss

    2010-12-01

    Full Text Available How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to tutor's song, and more pronounced responses to conspecific song primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or saliency of song stimuli and the production of variant songs with stuttered syllables.

  17. Auditory temporal preparation induced by rhythmic cues during concurrent auditory working memory tasks.

    Science.gov (United States)

    Cutanda, Diana; Correa, Ángel; Sanabria, Daniel

    2015-06-01

    The present study investigated whether participants can develop temporal preparation driven by auditory isochronous rhythms when concurrently performing an auditory working memory (WM) task. In Experiment 1, participants had to respond to an auditory target presented after a regular or an irregular sequence of auditory stimuli while concurrently performing a Sternberg-type WM task. Results showed that participants responded faster after regular compared with irregular rhythms and that this effect was not affected by WM load; however, the lack of a significant main effect of WM load made it difficult to draw any conclusion regarding the influence of the dual-task manipulation in Experiment 1. In order to enhance dual-task interference, Experiment 2 combined the auditory rhythm procedure with an auditory N-Back task, which required WM updating (monitoring and coding of the information) and was presumably more demanding than the mere rehearsal of the WM task used in Experiment 1. Results now clearly showed dual-task interference effects (slower reaction times [RTs] in the high- vs. the low-load condition). However, such interference did not affect temporal preparation induced by rhythms, with faster RTs after regular than after irregular sequences in the high-load and low-load conditions. These results revealed that secondary tasks demanding memory updating, relative to tasks just demanding rehearsal, produced larger interference effects on overall RTs in the auditory rhythm task. Nevertheless, rhythm regularity exerted a strong temporal preparation effect that survived the interference of the WM task even when both tasks competed for processing resources within the auditory modality. (c) 2015 APA, all rights reserved.

  18. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  19. Rhythmic walking interaction with auditory feedback

    DEFF Research Database (Denmark)

    Maculewicz, Justyna; Jylhä, Antti; Serafin, Stefania

    2015-01-01

    We present an interactive auditory display for walking with sinusoidal tones or ecological, physically-based synthetic walking sounds. The feedback is either step-based or rhythmic, with constant or adaptive tempo. In a tempo-following experiment, we investigate different interaction modes...

  20. Perceptual processing of a complex auditory context

    DEFF Research Database (Denmark)

    Quiroga Martinez, David Ricardo; Hansen, Niels Christian; Højlund, Andreas

    The mismatch negativity (MMN) is a brain response elicited by deviants in a series of repetitive sounds. It reflects the perception of change in low-level sound features and reliably measures perceptual auditory memory. However, most MMN studies use simple tone patterns as stimuli, failing...

  1. Diagnosing Dyslexia: The Screening of Auditory Laterality.

    Science.gov (United States)

    Johansen, Kjeld

    A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…

  2. Using excitation patterns to predict auditory masking

    NARCIS (Netherlands)

    Heijden, van der M.L.; Kohlrausch, A.G.

    1992-01-01

    We investigated how well auditory masking can be predicted from excitation patterns. For this purpose, a quantitative model proposed by Moore and Glasberg (1987) and Glasberg and Moore (1990) was used to calculate excitation patterns evoked by stationary sounds. We performed simulations of a number

  3. The impact of auditory feedback on neuronavigation

    NARCIS (Netherlands)

    Willems, PWA; Noordmans, HJ; van Overbeeke, JJ; Viergever, MA; Tulleken, CAF; van der Sprenkel, JWB

    Object. We aimed to develop an auditory feedback system to be used in addition to regular neuronavigation, in an attempt to improve the usefulness of the information offered by neuronavigation systems. Instrumentation. Using a serial connection, instrument co-ordinates determined by a commercially

  4. Auditory Evoked Responses in Neonates by MEG

    International Nuclear Information System (INIS)

    Hernandez-Pavon, J. C.; Sosa, M.; Lutter, W. J.; Maier, M.; Wakai, R. T.

    2008-01-01

    Magnetoencephalography is a biomagnetic technique with outstanding potential for neurodevelopmental studies. In this work, we have used MEG to determine whether newborns can discriminate between different stimuli during the first few months of life. Five neonates were stimulated for several minutes with auditory stimuli. The results suggest that the newborns are able to discriminate between different stimuli despite their early age.

  5. Multivariate sensitivity to voice during auditory categorization.

    Science.gov (United States)

    Lee, Yune Sang; Peelle, Jonathan E; Kraemer, David; Lloyd, Samuel; Granger, Richard

    2015-09-01

    Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex. Copyright © 2015 the American Physiological Society.

  6. Neurogenetics and auditory processing in developmental dyslexia.

    Science.gov (United States)

    Giraud, Anne-Lise; Ramus, Franck

    2013-02-01

    Dyslexia is a polygenic developmental reading disorder characterized by an auditory/phonological deficit. Based on the latest genetic and neurophysiological studies, we propose a tentative model in which phonological deficits could arise from genetic anomalies of the cortical micro-architecture in the temporal lobe. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Auditory dysfunction in patients with Huntington's disease.

    Czech Academy of Sciences Publication Activity Database

    Profant, Oliver; Roth, J.; Bureš, Zbyněk; Balogová, Zuzana; Lišková, Irena; Betka, J.; Syka, Josef

    2017-01-01

    Vol. 128, No. 10 (2017), pp. 1946-1953, ISSN 1388-2457. R&D Projects: GA ČR(CZ) GBP304/12/G069. Institutional support: RVO:68378041; RVO:67985904. Keywords: auditory pathology * central hearing loss * cognition. Subject RIV: ED - Physiology. OECD field: Otorhinolaryngology; Clinical neurology (UZFG-Y). Impact factor: 3.866, year: 2016

  8. Rate of decay of auditory sensation

    NARCIS (Netherlands)

    Plomp, R.

    1964-01-01

    The rate of decay of auditory sensation was investigated by measuring the minimum silent interval that must be introduced between two noise pulses to be perceived. The value of this critical time Δt was determined for different intensity levels of both the first and the second pulse. It is shown

  9. Auditory memory can be object based.

    Science.gov (United States)

    Dyson, Benjamin J; Ishfaq, Feraz

    2008-04-01

    Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

  10. Resource allocation models of auditory working memory.

    Science.gov (United States)

    Joseph, Sabine; Teki, Sundeep; Kumar, Sukhbinder; Husain, Masud; Griffiths, Timothy D

    2016-06-01

    Auditory working memory (WM) is the cognitive faculty that allows us to actively hold and manipulate sounds in mind over short periods of time. We develop here a particular perspective on WM for non-verbal, auditory objects as well as for time based on the consideration of possible parallels to visual WM. In vision, there has been a vigorous debate on whether WM capacity is limited to a fixed number of items or whether it represents a limited resource that can be allocated flexibly across items. Resource allocation models predict that the precision with which an item is represented decreases as a function of total number of items maintained in WM because a limited resource is shared among stored objects. We consider here auditory work on sequentially presented objects of different pitch as well as time intervals from the perspective of dynamic resource allocation. We consider whether the working memory resource might be determined by perceptual features such as pitch or timbre, or bound objects comprising multiple features, and we speculate on brain substrates for these behavioural models. This article is part of a Special Issue entitled SI: Auditory working memory. Copyright © 2016 Elsevier B.V. All rights reserved.
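    The resource-allocation prediction stated above, that representational precision falls as the same limited resource is divided among more stored items, can be written as a one-line model. The power-law form and the parameter values below are illustrative assumptions, not the specific model advocated in the article.

```python
def precision(n_items, total_resource=10.0, alpha=1.0):
    """Precision per item when a fixed resource is shared equally among n_items (power-law form)."""
    return total_resource / (n_items ** alpha)

# Hypothetical illustration: precision per item falls as more items are held in memory.
for n in (1, 2, 4, 8):
    print(n, precision(n))
```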

  11. Auditory Training for Children with Processing Disorders.

    Science.gov (United States)

    Katz, Jack; Cohen, Carolyn F.

    1985-01-01

    The article provides an overview of central auditory processing (CAP) dysfunction and reviews research on approaches to improve perceptual skills; to provide discrimination training for communicative and reading disorders; to increase memory and analysis skills and dichotic listening; to provide speech-in-noise training; and to amplify speech as…

  12. Mosaic evolution of the mammalian auditory periphery.

    Science.gov (United States)

    Manley, Geoffrey A

    2013-01-01

    The classical mammalian auditory periphery, i.e., the type of middle ear and coiled cochlea seen in modern therian mammals, did not arise as one unit and did not arise in all mammals. It is also not the only kind of auditory periphery seen in modern mammals. This short review discusses the fact that the constituents of modern mammalian auditory peripheries arose at different times over an extremely long period of evolution (230 million years; Ma). It also attempts to answer questions as to the selective pressures that led to three-ossicle middle ears and the coiled cochlea. Mammalian middle ears arose de novo, without an intermediate, single-ossicle stage. This event was the result of changes in eating habits of ancestral animals, habits that were unrelated to hearing. The coiled cochlea arose only after 60 Ma of mammalian evolution, driven at least partly by a change in cochlear bone structure that improved impedance matching with the middle ear of that time. This change only occurred in the ancestors of therian mammals and not in other mammalian lineages. There is no single constellation of structural features of the auditory periphery that characterizes all mammals and not even all modern mammals.

  13. Predictive uncertainty in auditory sequence processing

    DEFF Research Database (Denmark)

    Hansen, Niels Chr.; Pearce, Marcus T

    2014-01-01

    in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models...

  14. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli.

    Science.gov (United States)

    Kamke, Marc R; Harris, Jill

    2014-01-01

    The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  15. Contingent capture of involuntary visual attention interferes with detection of auditory stimuli

    Directory of Open Access Journals (Sweden)

    Marc R. Kamke

    2014-06-01

    Full Text Available The involuntary capture of attention by salient visual stimuli can be influenced by the behavioral goals of an observer. For example, when searching for a target item, irrelevant items that possess the target-defining characteristic capture attention more strongly than items not possessing that feature. Such contingent capture involves a shift of spatial attention toward the item with the target-defining characteristic. It is not clear, however, if the associated decrements in performance for detecting the target item are entirely due to involuntary orienting of spatial attention. To investigate whether contingent capture also involves a non-spatial interference, adult observers were presented with streams of visual and auditory stimuli and were tasked with simultaneously monitoring for targets in each modality. Visual and auditory targets could be preceded by a lateralized visual distractor that either did, or did not, possess the target-defining feature (a specific color). In agreement with the contingent capture hypothesis, target-colored distractors interfered with visual detection performance (response time and accuracy) more than distractors that did not possess the target color. Importantly, the same pattern of results was obtained for the auditory task: visual target-colored distractors interfered with sound detection. The decrement in auditory performance following a target-colored distractor suggests that contingent capture involves a source of processing interference in addition to that caused by a spatial shift of attention. Specifically, we argue that distractors possessing the target-defining characteristic enter a capacity-limited, serial stage of neural processing, which delays detection of subsequently presented stimuli regardless of the sensory modality.

  16. Neuroimaging investigations of dorsal stream processing and effects of stimulus synchrony in schizophrenia.

    Science.gov (United States)

    Sanfratello, Lori; Aine, Cheryl; Stephen, Julia

    2018-05-25

    Impairments in auditory and visual processing are common in schizophrenia (SP). In the unisensory realm, visual deficits are primarily noted for the dorsal visual stream. In addition, insensitivity to timing offsets between stimuli is widely reported for SP. The aim of the present study was to test at the physiological level differences in dorsal/ventral stream visual processing and timing sensitivity between SP and healthy controls (HC) using MEG and a simple auditory/visual task utilizing a variety of multisensory conditions. The paradigm included all combinations of synchronous/asynchronous and central/peripheral stimuli, yielding 4 task conditions. Both HC and SP groups showed activation in parietal areas (dorsal visual stream) during all multisensory conditions, with parietal areas showing decreased activation for SP relative to HC, and a significantly delayed peak of activation for SP in intraparietal sulcus (IPS). We also observed a differential effect of stimulus synchrony on HC and SP parietal response. Furthermore, a (negative) correlation was found between SP positive symptoms and activity in IPS. Taken together, our results provide evidence of impairment of the dorsal visual stream in SP during a multisensory task, along with an altered response to timing offsets between presented multisensory stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked-potential (AEP) provides a sensitive metric of duration processing in human auditory cortex and lateralization of MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e. rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.

  18. Nitrogen saturation in stream ecosystems.

    Science.gov (United States)

    Earl, Stevan R; Valett, H Maurice; Webster, Jackson R

    2006-12-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a tracer (15NO3-N) to measure uptake. Experiments were conducted in streams spanning a gradient of background N concentration. Uptake increased in four of six streams as NO3-N was incrementally elevated, indicating that these streams were not saturated. Uptake generally corresponded to Michaelis-Menten kinetics but deviated from the model in two streams where some other growth-critical factor may have been limiting. Proximity to saturation was correlated to background N concentration but was better predicted by the ratio of dissolved inorganic N (DIN) to soluble reactive phosphorus (SRP), suggesting phosphorus limitation in several high-N streams. Uptake velocity, a reflection of uptake efficiency, declined nonlinearly with increasing N amendment in all streams. At the same time, uptake velocity was highest in the low-N streams. Our conceptual model of N transport, uptake, and uptake efficiency suggests that, while streams may be active sites of N uptake on the landscape, N saturation contributes to nonlinear changes in stream N dynamics that correspond to decreased uptake efficiency.
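    The coupling of Michaelis-Menten kinetics to uptake efficiency described above can be illustrated numerically. The parameter values below are hypothetical and are not taken from the study; the point is only that uptake velocity (uptake divided by concentration) declines nonlinearly as concentration rises toward saturation.

```python
def areal_uptake(conc, u_max, km):
    """Michaelis-Menten uptake: U = U_max * C / (K_m + C)."""
    return u_max * conc / (km + conc)

def uptake_velocity(conc, u_max, km):
    """Uptake velocity v_f = U / C, an index of uptake efficiency."""
    return areal_uptake(conc, u_max, km) / conc

# Hypothetical parameters: uptake efficiency falls as NO3-N is experimentally elevated.
for c in (10.0, 50.0, 200.0, 800.0):        # concentration, arbitrary units
    print(c, round(uptake_velocity(c, u_max=500.0, km=100.0), 3))
```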

  19. Noise perception in the workplace and auditory and extra-auditory symptoms referred by university professors.

    Science.gov (United States)

    Servilha, Emilse Aparecida Merlin; Delatti, Marina de Almeida

    2012-01-01

    To investigate the correlation between noise in the work environment and auditory and extra-auditory symptoms referred by university professors. Eighty-five professors answered a questionnaire about identification, functional status, and health. The relationship between occupational noise and auditory and extra-auditory symptoms was investigated. Statistical analysis considered a significance level of 5%. None of the professors indicated absence of noise. Responses were grouped into Always (A) (n=21) and Not Always (NA) (n=63). Significant sources of noise were both the yard and another class, which were classified as high intensity, with poor acoustics and echo. There was no association between referred noise and health complaints, such as digestive, hormonal, osteoarticular, dental, circulatory, respiratory and emotional complaints. There was also no association between referred noise and hearing complaints, and group A showed a higher occurrence of responses regarding noise nuisance, hearing difficulty, dizziness/vertigo, tinnitus, and earache. There was an association between referred noise and voice alterations, and group NA presented a higher percentage of cases with voice alterations than group A. The university environment was considered noisy; however, there was no association with auditory and extra-auditory symptoms. The hearing complaints were more evident among professors in group A. Professors' health is a multi-dimensional product and, therefore, noise cannot be considered the only aggravation factor.

  20. Rapid measurement of auditory filter shape in mice using the auditory brainstem response and notched noise.

    Science.gov (United States)

    Lina, Ioan A; Lauer, Amanda M

    2013-04-01

    The notched noise method is an effective procedure for measuring frequency resolution and auditory filter shapes in both human and animal models of hearing. Briefly, auditory filter shape and bandwidth estimates are derived from masked thresholds for tones presented in noise containing widening spectral notches. As the spectral notch widens, increasingly less of the noise falls within the auditory filter and the tone becomes more detectable, until the notch width exceeds the filter bandwidth. Behavioral procedures have been used for the derivation of notched noise auditory filter shapes in mice; however, the time and effort needed to train and test animals on these tasks places a constraint on the widespread application of this testing method. As an alternative procedure, we combined relatively non-invasive auditory brainstem response (ABR) measurements and the notched noise method to estimate auditory filters in normal-hearing mice at center frequencies of 8, 11.2, and 16 kHz. A complete set of simultaneous masked thresholds for a particular tone frequency was obtained in about an hour. ABR-derived filter bandwidths broadened with increasing frequency, consistent with previous studies. The ABR notched noise procedure provides a fast alternative for estimating frequency selectivity in mice that is well-suited to high-throughput or time-sensitive screening. Copyright © 2013 Elsevier B.V. All rights reserved.
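    The logic of deriving a filter from notched-noise thresholds can be sketched with the standard power-spectrum model and a rounded-exponential (roex) filter shape. The sketch below is generic and is not the authors' fitting procedure; the filter slope p, the noise band edge, and the symmetric one-parameter roex form are assumptions made for illustration.

```python
import numpy as np

def roex_weight(g, p):
    """Rounded-exponential filter weight W(g) = (1 + p*g) * exp(-p*g), with g the
    deviation from the filter centre frequency normalized by the centre frequency."""
    return (1.0 + p * g) * np.exp(-p * g)

def noise_through_filter(notch_half_width, p, noise_edge=0.8, n=2000):
    """Relative noise power passing a symmetric roex filter for a given normalized notch half-width."""
    g = np.linspace(notch_half_width, noise_edge, n)
    w = roex_weight(g, p)
    step = g[1] - g[0]
    band = float(np.sum((w[:-1] + w[1:]) * 0.5 * step))   # trapezoidal integration
    return 2.0 * band                                      # two symmetric noise bands

# Predicted masked threshold (in dB, up to an additive constant) falls as the notch widens;
# the rate of that fall constrains the filter slope p and hence the filter bandwidth.
for dg in (0.0, 0.1, 0.2, 0.3, 0.4):
    print(dg, round(10.0 * np.log10(noise_through_filter(dg, p=25.0)), 1))
```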

  1. Absence of both auditory evoked potentials and auditory percepts dependent on timing cues.

    Science.gov (United States)

    Starr, A; McPherson, D; Patterson, J; Don, M; Luxford, W; Shannon, R; Sininger, Y; Tonakawa, L; Waring, M

    1991-06-01

    An 11-yr-old girl had an absence of sensory components of auditory evoked potentials (brainstem, middle and long-latency) to click and tone burst stimuli that she could clearly hear. Psychoacoustic tests revealed a marked impairment of those auditory perceptions dependent on temporal cues, that is, lateralization of binaural clicks, change of binaural masked threshold with changes in signal phase, binaural beats, detection of paired monaural clicks, monaural detection of a silent gap in a sound, and monaural threshold elevation for short duration tones. In contrast, auditory functions reflecting intensity or frequency discriminations (difference limens) were only minimally impaired. Pure tone audiometry showed a moderate (50 dB) bilateral hearing loss with a disproportionate severe loss of word intelligibility. Those auditory evoked potentials that were preserved included (1) cochlear microphonics reflecting hair cell activity; (2) cortical sustained potentials reflecting processing of slowly changing signals; and (3) long-latency cognitive components (P300, processing negativity) reflecting endogenous auditory cognitive processes. Both the evoked potential and perceptual deficits are attributed to changes in temporal encoding of acoustic signals perhaps occurring at the synapse between hair cell and eighth nerve dendrites. The results from this patient are discussed in relation to previously published cases with absent auditory evoked potentials and preserved hearing.

  2. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Geoffrey [Indiana Univ., Bloomington, IN (United States); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-10-01

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that need to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of "streaming and steering" as a critical mode of connecting the experimental and computing facilities was pervasive throughout the workshop. Given the overlap in interests and challenges with industry, the workshop had a significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in the NRC Frontiers of Data report and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report

  3. Galaxies with jet streams

    International Nuclear Information System (INIS)

    Breuer, R.

    1981-01-01

    Describes recent research work on supersonic gas flow. Notable examples have been observed in cosmic radio sources, where jet streams of galactic dimensions sometimes occur, apparently as the result of interaction between neighbouring galaxies. The current theory of jet behaviour has been convincingly demonstrated using computer simulation. The surprisingly long-term stability is related to the supersonic velocity, and is analogous to the way in which an Apollo spacecraft re-entering the atmosphere supersonically is protected by the gas from the burning shield. (G.F.F.)

  4. Oscillating acoustic streaming jet

    International Nuclear Information System (INIS)

    Moudjed, Brahim; Botton, Valery; Henry, Daniel; Millet, Severine; Ben Hadid, Hamda; Garandet, Jean-Paul

    2014-01-01

    The present paper provides the first experimental investigation of an oscillating acoustic streaming jet. The observations are performed in the far field of a 2 MHz circular plane ultrasound transducer introduced in a rectangular cavity filled with water. Measurements are made by Particle Image Velocimetry (PIV) in horizontal and vertical planes near the end of the cavity. Oscillations of the jet appear in this zone, for a sufficiently high Reynolds number, as an intermittent phenomenon on an otherwise straight jet fluctuating in intensity. The observed perturbation pattern is similar to that of former theoretical studies. This intermittently oscillatory behavior is the first step to the transition to turbulence. (authors)

  5. The metaphors we stream by: Making sense of music streaming

    OpenAIRE

    Hagen, Anja Nylund

    2016-01-01

    In Norway music-streaming services have become mainstream in everyday music listening. This paper examines how 12 heavy streaming users make sense of their experiences with Spotify and WiMP Music (now Tidal). The analysis relies on a mixed-method qualitative study, combining music-diary self-reports, online observation of streaming accounts, Facebook and last.fm scrobble-logs, and in-depth interviews. By drawing on existing metaphors of Internet experiences we demonstrate that music-streaming...

  6. Novel graphical approach as fouling pinch for increasing fouling formation period in heat exchanger network (HEN) state of the art

    International Nuclear Information System (INIS)

    Azad, Abazar Vahdat; Ghaebi, Hadi; Amidpour, Majid

    2011-01-01

    In this paper, a new graphical tool is proposed for investigating the fouling formation period in heat exchanger networks (HEN). The objective is to increase the time over which a HEN can deliver its desired heat transfer duty without requiring cleaning. In a typical heat exchanger network, some streams foul faster than others. The method developed in this work gives more fouling allowance to the streams with high fouling rates: high-fouling streams are exchanged with low-fouling streams between different heat exchangers, so that the network as a whole gains additional fouling capacity and the cleaning required within a fixed time period decreases. As a result, a stream with a high fouling rate passes along the path previously used by a low-fouling stream, and vice versa, so the second stream mixes with residues left by the primary (low-fouling) stream. Attention must therefore be paid to the compatibility of the stream structures (to prevent stream degradation) and to thermal considerations (to maintain the desired heat transfer); outlet temperatures of the hot and cold streams must remain at their predefined values. Provided the thermal constraints are satisfied after the stream replacement, this approach can be used in plants where cleanliness and its operational costs are the most important concern.

  7. Influence of the Gulf Stream on the troposphere.

    Science.gov (United States)

    Minobe, Shoshiro; Kuwano-Yoshida, Akira; Komori, Nobumasa; Xie, Shang-Ping; Small, Richard Justin

    2008-03-13

    The Gulf Stream transports large amounts of heat from the tropics to middle and high latitudes, and thereby affects weather phenomena such as cyclogenesis and low cloud formation. But its climatic influence, on monthly and longer timescales, remains poorly understood. In particular, it is unclear how the warm current affects the free atmosphere above the marine atmospheric boundary layer. Here we consider the Gulf Stream's influence on the troposphere, using a combination of operational weather analyses, satellite observations and an atmospheric general circulation model. Our results reveal that the Gulf Stream affects the entire troposphere. In the marine boundary layer, atmospheric pressure adjustments to sharp sea surface temperature gradients lead to surface wind convergence, which anchors a narrow band of precipitation along the Gulf Stream. In this rain band, upward motion and cloud formation extend into the upper troposphere, as corroborated by the frequent occurrence of very low cloud-top temperatures. These mechanisms provide a pathway by which the Gulf Stream can affect the atmosphere locally, and possibly also in remote regions by forcing planetary waves. The identification of this pathway may have implications for our understanding of the processes involved in climate change, because the Gulf Stream is the upper limb of the Atlantic meridional overturning circulation, which has varied in strength in the past and is predicted to weaken in response to human-induced global warming in the future.

  8. Moving Forward: A Feminist Analysis of Mobile Music Streaming

    Directory of Open Access Journals (Sweden)

    Ann Werner

    2015-06-01

    Full Text Available The importance of understanding gender, space and mobility as co-constructed in public space has been emphasized by feminist researchers (Massey 2005, Hanson 2010). Within feminist theory, materiality, affect and emotions have been described as central to experienced subjectivity (Ahmed 2012). Music listening while moving through public space has previously been studied as a way of creating a private auditory bubble for the individual (Bull 2000, Cahir and Werner 2013), and in this article feminist theory on emotion (Ahmed 2010) and space (Massey 2005) is employed in order to understand mobile music streaming. More specifically, it discusses what can happen when mobile media technology is used to listen to music in public space and investigates the interconnectedness of bodies, music, technology and space. The article is based on autoethnographic material of mobile music streaming in public and concludes that a forward movement shaped by happiness is a desired result of mobile music streaming. The valuing of "forward" is critically examined from the point of view of feminist theory, and the failed music listening moments are also discussed in terms of emotion and space.

  9. Tracking Gendered Streams

    Directory of Open Access Journals (Sweden)

    Maria Eriksson

    2017-10-01

    Full Text Available One of the most prominent features of digital music services is the provision of personalized music recommendations that come about through the profiling of users and audiences. Based on a range of "bot experiments," this article investigates if, and how, gendered patterns in music recommendations are provided by the streaming service Spotify. While our experiments did not give any strong indications that Spotify assigns different taste profiles to male and female users, the study showed that male artists were highly overrepresented in Spotify's music recommendations; an issue which we argue prompts users to cite hegemonic masculine norms within the music industries. Although the results should be approached as historically and contextually contingent, we argue that they point to how gender and gendered tastes may be constituted through the interplay between users and algorithmic knowledge-making processes, and how digital content delivery may maintain and challenge gender relations and gendered power differentials within the music industries. Seen through the lens of critical research on software, music and gender performativity, the experiments thus provide insights into how gender is shaped and attributed meaning as it materializes in contemporary music streams.

  10. The LHCb Turbo stream

    CERN Document Server

    AUTHOR|(CDS)2070171

    2016-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 wi...

  11. Stream Lifetimes Against Planetary Encounters

    Science.gov (United States)

    Valsecchi, G. B.; Lega, E.; Froeschle, Cl.

    2011-01-01

    We study, both analytically and numerically, the perturbation induced by an encounter with a planet on a meteoroid stream. Our analytical tool is the extension of Öpik's theory of close encounters, that we apply to streams described by geocentric variables. The resulting formulae are used to compute the rate at which a stream is dispersed by planetary encounters into the sporadic background. We have verified the accuracy of the analytical model using a numerical test.

  12. Denitration of Savannah River Plant waste streams

    International Nuclear Information System (INIS)

    Orebaugh, E.G.

    1976-07-01

    Partial denitration of waste streams from Savannah River Plant separations processes was shown to significantly reduce the quantity of waste solids to be stored as an alkaline salt cake. The chemical processes involved in the denitration of nonradioactive simulated waste solutions were studied. Chemical and instrumental analytical techniques were used to define both the equilibrium concentrations and the variation of reactants and products in the denitration reaction. Mechanisms were proposed that account for the complicated chemical reactions observed in the simulated waste solutions. Metal nitrates can be denitrated by reaction with formic acid only by the release of nitric acid from hydrolysis or formate complexation of metal cations. However, eventual radiolysis of formate salts or complexes results in the formation of bicarbonate and makes complexation-denitration a nonproductive means of reducing waste solids. Nevertheless, destruction of nitrate associated with free acid and easily hydrolyzable cations such as iron, mercury, and zirconium can result in greater than 30 percent reduction in waste solids from five SRP waste streams

  13. Morphology of a Wetland Stream

    Science.gov (United States)

    Jurmu; Andrle

    1997-11-01

    Little attention has been paid to wetland stream morphology in the geomorphological and environmental literature, and in the recently expanding wetland reconstruction field, stream design has been based primarily on stream morphologies typical of nonwetland alluvial environments. Field investigation of a wetland reach of Roaring Brook, Stafford, Connecticut, USA, revealed several significant differences between the morphology of this stream and the typical morphology of nonwetland alluvial streams. Six morphological features of the study reach were examined: bankfull flow, meanders, pools and riffles, thalweg location, straight reaches, and cross-sectional shape. It was found that bankfull flow definitions originating from streams in nonwetland environments did not apply. Unusual features observed in the wetland reach include tight bends and a large axial wavelength to width ratio. A lengthy straight reach exists that exceeds what is typically found in nonwetland alluvial streams. The lack of convex bank point bars in the bends, a greater channel width at riffle locations, an unusual thalweg location, and small form ratios (a deep and narrow channel) were also differences identified. Further study is needed on wetland streams of various regions to determine if differences in morphology between alluvial and wetland environments can be applied in order to improve future designs of wetland channels. KEY WORDS: Stream morphology; Wetland restoration; Wetland creation; Bankfull; Pools and riffles; Meanders; Thalweg

  14. Dual-stream accounts bridge the gap between monkey audition and human language processing. Comment on "Towards a Computational Comparative Neuroprimatology: Framing the language-ready brain" by Michael Arbib

    Science.gov (United States)

    Garrod, Simon; Pickering, Martin J.

    2016-03-01

    Over the last few years there has been a resurgence of interest in dual-stream dorsal-ventral accounts of language processing [4]. This has led to recent attempts to bridge the gap between the neurobiology of primate audition and human language processing with the dorsal auditory stream assumed to underlie time-dependent (and syntactic) processing and the ventral to underlie some form of time-independent (and semantic) analysis of the auditory input [3,10]. Michael Arbib [1] considers these developments in relation to his earlier Mirror System Hypothesis about the origins of human language processing [11].

  15. Transcriptional maturation of the mouse auditory forebrain.

    Science.gov (United States)

    Hackett, Troy A; Guo, Yan; Clause, Amanda; Hackett, Nicholas J; Garbett, Krassimira; Zhang, Pan; Polley, Daniel B; Mirnics, Karoly

    2015-08-14

    The maturation of the brain involves the coordinated expression of thousands of genes, proteins and regulatory elements over time. In sensory pathways, gene expression profiles are modified by age and sensory experience in a manner that differs between brain regions and cell types. In the auditory system of altricial animals, neuronal activity increases markedly after the opening of the ear canals, initiating events that culminate in the maturation of auditory circuitry in the brain. This window provides a unique opportunity to study how gene expression patterns are modified by the onset of sensory experience through maturity. As a tool for capturing these features, next-generation sequencing of total RNA (RNAseq) has tremendous utility, because the entire transcriptome can be screened to index expression of any gene. To date, whole transcriptome profiles have not been generated for any central auditory structure in any species at any age. In the present study, RNAseq was used to profile two regions of the mouse auditory forebrain (A1, primary auditory cortex; MG, medial geniculate) at key stages of postnatal development (P7, P14, P21, adult) before and after the onset of hearing (~P12). Hierarchical clustering, differential expression, and functional geneset enrichment analyses (GSEA) were used to profile the expression patterns of all genes. Selected genesets related to neurotransmission, developmental plasticity, critical periods and brain structure were highlighted. An accessible repository of the entire dataset was also constructed that permits extraction and screening of all data from the global through single-gene levels. To our knowledge, this is the first whole transcriptome sequencing study of the forebrain of any mammalian sensory system. Although the data are most relevant for the auditory system, they are generally applicable to forebrain structures in the visual and somatosensory systems, as well. The main findings were: (1) Global gene expression

  16. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

    The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and
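    The correlation rule described above (channels whose responses are strongly positively correlated belong to the same stream; uncorrelated or anti-correlated channels belong to different streams) can be illustrated with a short, software-only sketch. The Python sketch below is not the FPGA pipeline from the paper; it assumes a precomputed bank of channel envelopes stored as rows of a NumPy array and a hypothetical index of the attended channel standing in for the attention signal, and it builds a binary mask from envelope correlations in the way the abstract describes.

```python
import numpy as np

def coherence_mask(envelopes: np.ndarray, attended: int, threshold: float = 0.5) -> np.ndarray:
    """Select channels whose envelopes are temporally coherent with an attended channel.

    envelopes : (n_channels, n_frames) array of band-pass envelope signals
    attended  : index of the channel carrying the attended (foreground) source
    threshold : minimum Pearson correlation for a channel to join the foreground stream
    """
    n_channels = envelopes.shape[0]
    target = envelopes[attended]
    mask = np.zeros(n_channels, dtype=bool)
    for ch in range(n_channels):
        # Channels strongly positively correlated with the attended channel are
        # assumed to belong to the same stream; uncorrelated or anti-correlated
        # channels are assigned to the background.
        r = np.corrcoef(target, envelopes[ch])[0, 1]
        mask[ch] = r >= threshold
    return mask

def segregate(channels: np.ndarray, envelopes: np.ndarray, attended: int) -> np.ndarray:
    """Reconstruct the foreground by summing only the coherent channels."""
    mask = coherence_mask(envelopes, attended)
    return (channels * mask[:, None]).sum(axis=0)
```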

  17. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams

    Directory of Open Access Journals (Sweden)

    Yi-Huang eSu

    2014-12-01

    Full Text Available Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beats, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating a greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  18. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams.

    Science.gov (United States)

    Su, Yi-Huang

    2014-01-01

    Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual (AV) streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beat, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat, or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating greater tendency for deviant report due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  19. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

    Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; sajedi, Hamed

    2014-11-01

    Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD and how the disorder should be assessed or managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITD) and inter-aural intensity differences (IID) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and the localization tests in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings of this research indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Analyzing indicators of stream health for Minnesota streams

    Science.gov (United States)

    Singh, U.; Kocian, M.; Wilson, B.; Bolton, A.; Nieber, J.; Vondracek, B.; Perry, J.; Magner, J.

    2005-01-01

    Recent research has emphasized the importance of using physical, chemical, and biological indicators of stream health for diagnosing impaired watersheds and their receiving water bodies. A multidisciplinary team at the University of Minnesota is carrying out research to develop a stream classification system for Total Maximum Daily Load (TMDL) assessment. Funding for this research is provided by the United States Environmental Protection Agency and the Minnesota Pollution Control Agency. One objective of the research study involves investigating the relationships between indicators of stream health and localized stream characteristics. Measured data from Minnesota streams collected by various government and non-government agencies and research institutions have been obtained for the research study. Innovative Geographic Information Systems tools developed by the Environmental Science Research Institute and the University of Texas are being utilized to combine and organize the data. Simple linear relationships between index of biological integrity (IBI) and channel slope, two-year stream flow, and drainage area are presented for the Redwood River and the Snake River Basins. Results suggest that more rigorous techniques are needed to successfully capture trends in IBI scores. Additional analyses will be done using multiple regression, principal component analysis, and clustering techniques. Uncovering key independent variables and understanding how they fit together to influence stream health are critical in the development of a stream classification for TMDL assessment.

  1. Mutual Information Based Dynamic Integration of Multiple Feature Streams for Robust Real-Time LVCSR

    Science.gov (United States)

    Sato, Shoei; Kobayashi, Akio; Onoe, Kazuo; Homma, Shinichi; Imai, Toru; Takagi, Tohru; Kobayashi, Tetsunori

    We present a novel method of integrating the likelihoods of multiple feature streams, representing different acoustic aspects, for robust speech recognition. The integration algorithm dynamically calculates a frame-wise stream weight so that a higher weight is given to a stream that is robust to a variety of noisy environments or speaking styles. Such a robust stream is expected to show discriminative ability. A conventional method proposed for the recognition of spoken digits calculates the weights from the entropy of the whole set of HMM states. This paper extends the dynamic weighting to a real-time large-vocabulary continuous speech recognition (LVCSR) system. The proposed weight is calculated in real-time from mutual information between an input stream and active HMM states in a search space without an additional likelihood calculation. Furthermore, the mutual information takes the width of the search space into account by calculating the marginal entropy from the number of active states. In this paper, we integrate three features that are extracted through auditory filters by taking into account the human auditory system's ability to extract amplitude and frequency modulations. Accordingly, features representing energy, amplitude drift, and resonant-frequency drift are integrated. These features are expected to provide complementary clues for speech recognition. Speech recognition experiments on field reports and spontaneous commentary from Japanese broadcast news showed that the proposed method reduced word errors by 9.2% in field reports and 4.7% in spontaneous commentaries relative to the best result obtained from a single stream.
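    As a rough illustration of the frame-wise weighting described above, the Python sketch below turns a per-stream entropy reduction over the currently active HMM states into normalized stream weights, and then combines per-stream log-likelihoods log-linearly. The state posteriors, the entropy-based score and the combination rule are simplifications assumed for illustration; they are not the authors' exact formulation.

```python
import numpy as np

def stream_weights(posteriors_per_stream):
    """Compute frame-wise stream weights from posteriors over the active HMM states.

    posteriors_per_stream : list of 1-D arrays, one per feature stream, each holding
    the posterior probabilities of the currently active states for this frame.
    A peaked (low-entropy) posterior is taken as a sign of a discriminative stream
    and receives a larger weight.
    """
    scores = []
    for p in posteriors_per_stream:
        p = np.asarray(p, dtype=float)
        p = p / p.sum()
        marginal_entropy = np.log(len(p))          # entropy of a uniform guess over the active states
        entropy = -(p * np.log(p + 1e-12)).sum()   # entropy of the observed posterior
        scores.append(marginal_entropy - entropy)  # information gained by this stream
    scores = np.maximum(np.asarray(scores), 0.0)
    if scores.sum() == 0.0:
        return np.full(len(scores), 1.0 / len(scores))
    return scores / scores.sum()

def combined_log_likelihood(log_likelihoods, weights):
    """Weighted log-linear combination of per-stream state log-likelihoods."""
    return sum(w * ll for w, ll in zip(weights, log_likelihoods))
```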

  2. Dopamine modulates memory consolidation of discrimination learning in the auditory cortex.

    Science.gov (United States)

    Schicknick, Horst; Reichenbach, Nicole; Smalla, Karl-Heinz; Scheich, Henning; Gundelfinger, Eckart D; Tischmeyer, Wolfgang

    2012-03-01

    In Mongolian gerbils, the auditory cortex is critical for discriminating rising vs. falling frequency-modulated tones. Based on our previous studies, we hypothesized that dopaminergic inputs to the auditory cortex during and shortly after acquisition of the discrimination strategy control long-term memory formation. To test this hypothesis, we studied frequency-modulated tone discrimination learning of gerbils in a shuttle box GO/NO-GO procedure following differential treatments. (i) Pre-exposure of gerbils to the frequency-modulated tones at 1 day before the first discrimination training session severely impaired the accuracy of the discrimination acquired in that session during the initial trials of a second training session, performed 1 day later. (ii) Local injection of the D1/D5 dopamine receptor antagonist SCH-23390 into the auditory cortex after task acquisition caused a discrimination deficit of similar extent and time course as with pre-exposure. This effect was dependent on the dose and time point of injection. (iii) Injection of the D1/D5 dopamine receptor agonist SKF-38393 into the auditory cortex after retraining caused a further discrimination improvement at the beginning of subsequent sessions. All three treatments, which supposedly interfered with dopamine signalling during conditioning and/or retraining, had a substantial impact on the dynamics of the discrimination performance particularly at the beginning of subsequent training sessions. These findings suggest that auditory-cortical dopamine activity after acquisition of a discrimination of complex sounds and after retrieval of weak frequency-modulated tone discrimination memory further improves memory consolidation, i.e. the correct association of two sounds with their respective GO/NO-GO meaning, in support of future memory recall. © 2012 The Authors. European Journal of Neuroscience © 2012 Federation of European Neuroscience Societies and Blackwell Publishing Ltd.

  3. Streaming movies, media, and instant access

    CERN Document Server

    Dixon, Wheeler Winston

    2013-01-01

    Film stocks are vanishing, but the iconic images of the silver screen remain -- albeit in new, sleeker formats. Today, viewers can instantly stream movies on televisions, computers, and smartphones. Gone are the days when films could only be seen in theaters or rented at video stores: movies are now accessible at the click of a button, and there are no reels, tapes, or discs to store. Any film or show worth keeping may be collected in the virtual cloud and accessed at will through services like Netflix, Hulu, and Amazon Instant. The movies have changed, and we are changing with them.

  4. Relation between Streaming Potential and Streaming Electrification Generated by Streaming of Water through a Sandwich-type Cell

    OpenAIRE

    Maruyama, Kazunori; Nikaido, Mitsuru; Hara, Yoshinori; Tanizaki, Yoshie

    2012-01-01

    Both the streaming potential and the accumulated charge of the outflowing water were measured simultaneously using a sandwich-type cell. The voltages generated in divided sections along the flow direction satisfied additivity. The sign of the streaming potential agreed with that of the streaming electrification. The relation between streaming potential and streaming electrification was explained from the viewpoint of the electrical double layer at the glass-water interface.

  5. The Effect of Tactile Cues on Auditory Stream Segregation Ability of Musicians and Nonmusicians

    DEFF Research Database (Denmark)

    Slater, Kyle D.; Marozeau, Jeremy

    2016-01-01

    Difficulty perceiving music is often cited as one of the main problems facing hearing-impaired listeners. It has been suggested that musical enjoyment could be enhanced if sound information absent due to impairment is transmitted via other sensory modalities such as vision or touch. In this study...... the random melody. Tactile cues were applied to the listener’s fingers on half of the blocks. Results showed that tactile cues can significantly improve the melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance...... was always better; however, the magnitude of improvement with the introduction of tactile cues was similar in both groups. This study suggests that hearing-impaired listeners could potentially benefit from a system transmitting such information via a tactile modality...

  6. Auditory Working Memory Load Impairs Visual Ventral Stream Processing: Toward a Unified Model of Attentional Load

    Science.gov (United States)

    Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael

    2010-01-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…

  7. Human impacts to mountain streams

    Science.gov (United States)

    Wohl, Ellen

    2006-09-01

    Mountain streams are here defined as channel networks within mountainous regions of the world. This definition encompasses tremendous diversity of physical and biological conditions, as well as history of land use. Human effects on mountain streams may result from activities undertaken within the stream channel that directly alter channel geometry, the dynamics of water and sediment movement, contaminants in the stream, or aquatic and riparian communities. Examples include channelization, construction of grade-control structures or check dams, removal of beavers, and placer mining. Human effects can also result from activities within the watershed that indirectly affect streams by altering the movement of water, sediment, and contaminants into the channel. Deforestation, cropping, grazing, land drainage, and urbanization are among the land uses that indirectly alter stream processes. An overview of the relative intensity of human impacts to mountain streams is provided by a table summarizing human effects on each of the major mountainous regions with respect to five categories: flow regulation, biotic integrity, water pollution, channel alteration, and land use. This table indicates that very few mountains have streams not at least moderately affected by land use. The least affected mountainous regions are those at very high or very low latitudes, although our scientific ignorance of conditions in low-latitude mountains in particular means that streams in these mountains might be more altered than is widely recognized. Four case studies from northern Sweden (arctic region), Colorado Front Range (semiarid temperate region), Swiss Alps (humid temperate region), and Papua New Guinea (humid tropics) are also used to explore in detail the history and effects on rivers of human activities in mountainous regions. The overview and case studies indicate that mountain streams must be managed with particular attention to upstream/downstream connections, hillslope

  8. Evolution and interaction of large interplanetary streams

    International Nuclear Information System (INIS)

    Whang, Y.C.; Burlaga, L.F.

    1985-02-01

    A computer simulation for the evolution and interaction of large interplanetary streams based on multi-spacecraft observations and an unsteady, one-dimensional MHD model is presented. Two events, each observed by two or more spacecraft separated by a distance of the order of 10 AU, were studied. The first simulation is based on the plasma and magnetic field observations made by two radially-aligned spacecraft. The second simulation is based on an event observed first by Helios-1 in May 1980 near 0.6 AU and later by Voyager-1 in June 1980 at 8.1 AU. These examples show that the dynamical evolution of large-scale solar wind structures is dominated by the shock process, including the formation, collision, and merging of shocks. The interaction of shocks with stream structures also causes a drastic decrease in the amplitude of the solar wind speed variation with increasing heliocentric distance, and as a result of interactions there is a large variation of shock-strengths and shock-speeds. The simulation results shed light on the interpretation for the interaction and evolution of large interplanetary streams. Observations were made along a few limited trajectories, but simulation results can supplement these by providing the detailed evolution process for large-scale solar wind structures in the vast region not directly observed. The use of a quantitative nonlinear simulation model including shock merging process is crucial in the interpretation of data obtained in the outer heliosphere

  9. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex.

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-02-16

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.

  10. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1 include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signal and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  11. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters were also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  12. Do dyslexics have auditory input processing difficulties?

    DEFF Research Database (Denmark)

    Poulsen, Mads

    2011-01-01

    Word production difficulties are well documented in dyslexia, whereas the results are mixed for receptive phonological processing. This asymmetry raises the possibility that the core phonological deficit of dyslexia is restricted to output processing stages. The present study investigated whether a group of dyslexics had word level receptive difficulties using an auditory lexical decision task with long words and nonsense words. The dyslexics were slower and less accurate than chronological age controls in an auditory lexical decision task, with disproportionately low performance on nonsense words... The finding suggests that input processing difficulties are associated with the phonological deficit, but that these difficulties may be stronger above the level of phoneme perception.

  13. Cognitive mechanisms associated with auditory sensory gating

    Science.gov (United States)

    Jones, L.A.; Hills, P.J.; Dick, K.M.; Jones, S.P.; Bright, P.

    2016-01-01

    Sensory gating is a neurophysiological measure of inhibition that is characterised by a reduction in the P50 event-related potential to a repeated identical stimulus. The objective of this work was to determine the cognitive mechanisms that relate to the neurological phenomenon of auditory sensory gating. Sixty participants underwent a battery of 10 cognitive tasks, including qualitatively different measures of attentional inhibition, working memory, and fluid intelligence. Participants additionally completed a paired-stimulus paradigm as a measure of auditory sensory gating. A correlational analysis revealed that several tasks correlated significantly with sensory gating. However once fluid intelligence and working memory were accounted for, only a measure of latent inhibition and accuracy scores on the continuous performance task showed significant sensitivity to sensory gating. We conclude that sensory gating reflects the identification of goal-irrelevant information at the encoding (input) stage and the subsequent ability to selectively attend to goal-relevant information based on that previous identification. PMID:26716891

  14. Central auditory neurons have composite receptive fields.

    Science.gov (United States)

    Kozlov, Andrei S; Gentner, Timothy Q

    2016-02-02

    High-level neurons processing complex, behaviorally relevant signals are sensitive to conjunctions of features. Characterizing the receptive fields of such neurons is difficult with standard statistical tools, however, and the principles governing their organization remain poorly understood. Here, we demonstrate multiple distinct receptive-field features in individual high-level auditory neurons in a songbird, European starling, in response to natural vocal signals (songs). We then show that receptive fields with similar characteristics can be reproduced by an unsupervised neural network trained to represent starling songs with a single learning rule that enforces sparseness and divisive normalization. We conclude that central auditory neurons have composite receptive fields that can arise through a combination of sparseness and normalization in neural circuits. Our results, along with descriptions of random, discontinuous receptive fields in the central olfactory neurons in mammals and insects, suggest general principles of neural computation across sensory systems and animal classes.

  15. Industrial-Strength Streaming Video.

    Science.gov (United States)

    Avgerakis, George; Waring, Becky

    1997-01-01

    Corporate training, financial services, entertainment, and education are among the top applications for streaming video servers, which send video to the desktop without downloading the whole file to the hard disk, saving time and eliminating copyright questions. Examines streaming video technology, lists ten tips for better net video, and ranks…

  16. Data streams: algorithms and applications

    National Research Council Canada - National Science Library

    Muthukrishnan, S

    2005-01-01

    ... massive data sets in general. Researchers in Theoretical Computer Science, Databases, IP Networking and Computer Systems are working on the data stream challenges. This article is an overview and survey of data stream algorithmics and is an updated version of [175]. S. Muthukrishnan Rutgers University, New Brunswick, NJ, USA, muthu@cs...

  17. What Can Hierarchies Do for Data Streams?

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    Much effort has been put into building data streams management systems for querying data streams. Here, data streams have been viewed as a flow of low-level data items, e.g., sensor readings or IP packet data. Stream query languages have mostly been SQL-based, with the STREAM and TelegraphCQ lang...

  18. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

    The sound and ceramic sculpture installation, " Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrete, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  19. Bioacoustic Signal Classification in Cat Auditory Cortex

    Science.gov (United States)

    1994-01-01


  20. Heavy metal contamination in stream water and sediments of gold ...

    African Journals Online (AJOL)

    I.O.OLABANJI

    3D) with 0.457 ± 0.061 and 0.364 ± 0.056 mg/L in dry and wet seasons. The mean .... safe limit clearly indicating that Cd contamination of the stream water might be ... of lead contaminant in the study area is the formation of acid mine drainage.

  1. Stream Deniable-Encryption Algorithms

    Directory of Open Access Journals (Sweden)

    N.A. Moldovyan

    2016-04-01

    Full Text Available A method for stream deniable encryption of a secret message is proposed, which is computationally indistinguishable from the probabilistic encryption of some fake message. The method uses generation of two key streams with some secure block cipher. One of the key streams is generated depending on the secret key and the other one is generated depending on the fake key. The key streams are mixed with the secret and fake data streams so that the output ciphertext looks like the ciphertext produced by some probabilistic encryption algorithm applied to the fake message, while using the fake key. When the receiver or/and sender of the ciphertext are coerced to open the encryption key and the source message, they open the fake key and the fake message. To disclose their lie, the coercer would have to demonstrate the possibility of an alternative decryption of the ciphertext; however, this is a computationally hard problem.
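    The toy Python sketch below illustrates only the general two-key-stream idea from the abstract and is not Moldovyan's algorithm: a stand-in keystream generator (iterated hashing in counter mode, where the paper assumes a secure block cipher) expands the secret and the fake keys, and each ciphertext element carries one masked byte per message, so that opening the ciphertext with the fake key reveals only the fake message while the remaining bytes look like the randomness of a probabilistic cipher.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Stand-in keystream generator (iterated SHA-256 in counter mode); the paper
    # assumes "some secure block cipher" here.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def deniable_encrypt(secret: bytes, fake: bytes, secret_key: bytes, fake_key: bytes):
    n = max(len(secret), len(fake))
    secret = secret.ljust(n, b" ")
    fake = fake.ljust(n, b" ")
    ks, kf = keystream(secret_key, n), keystream(fake_key, n)
    # Each ciphertext element is a pair: the first component opens under the fake
    # key, the second under the secret key.  To a coercer holding only the fake
    # key, the second component is indistinguishable from the random padding of a
    # probabilistic cipher.
    return [(f ^ x, s ^ y) for f, x, s, y in zip(fake, kf, secret, ks)]

def decrypt(ciphertext, key: bytes, component: int) -> bytes:
    k = keystream(key, len(ciphertext))
    return bytes(pair[component] ^ x for pair, x in zip(ciphertext, k))

# Usage: decrypt(ct, fake_key, 0) reveals the fake message,
#        decrypt(ct, secret_key, 1) reveals the secret message.
```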

  2. Stream Clustering of Growing Objects

    Science.gov (United States)

    Siddiqui, Zaigham Faraz; Spiliopoulou, Myra

    We study incremental clustering of objects that grow and accumulate over time. The objects come from a multi-table stream e.g. streams of Customer and Transaction. As the Transactions stream accumulates, the Customers’ profiles grow. First, we use an incremental propositionalisation to convert the multi-table stream into a single-table stream upon which we apply clustering. For this purpose, we develop an online version of K-Means algorithm that can handle these swelling objects and any new objects that arrive. The algorithm also monitors the quality of the model and performs re-clustering when it deteriorates. We evaluate our method on the PKDD Challenge 1999 dataset.
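    A minimal Python sketch of the kind of online K-Means update described above is given below. It assumes that incremental propositionalisation has already flattened each (possibly grown) object into a fixed-length vector; the per-centroid learning rate, the re-assignment of a grown object and the inertia-based quality check are illustrative choices rather than the authors' exact algorithm.

```python
import numpy as np

class OnlineKMeans:
    """Incremental K-Means for objects whose feature vectors are updated over time."""

    def __init__(self, k: int, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(k, dim))
        self.counts = np.zeros(k)            # how many updates each centroid has absorbed
        self.assignment = {}                 # object id -> cluster index

    def update(self, obj_id, vector) -> int:
        """Insert a new object or re-cluster a grown one, then move its centroid."""
        vector = np.asarray(vector, dtype=float)
        cluster = int(np.argmin(np.linalg.norm(self.centroids - vector, axis=1)))
        self.assignment[obj_id] = cluster     # a grown object may switch clusters here
        self.counts[cluster] += 1
        lr = 1.0 / self.counts[cluster]       # per-centroid learning rate
        self.centroids[cluster] += lr * (vector - self.centroids[cluster])
        return cluster

    def inertia(self, vectors: np.ndarray) -> float:
        """Model-quality check: average distance to the closest centroid.

        A rising value over a window of recent objects can be used to trigger
        the re-clustering step mentioned in the abstract.
        """
        d = np.linalg.norm(vectors[:, None, :] - self.centroids[None, :, :], axis=2)
        return float(d.min(axis=1).mean())
```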

  3. Explaining the high voice superiority effect in polyphonic music: evidence from cortical evoked potentials and peripheral auditory models.

    Science.gov (United States)

    Trainor, Laurel J; Marie, Céline; Bruce, Ian C; Bidelman, Gavin M

    2014-02-01

    Natural auditory environments contain multiple simultaneously-sounding objects and the auditory system must parse the incoming complex sound wave they collectively create into parts that represent each of these individual objects. Music often similarly requires processing of more than one voice or stream at the same time, and behavioral studies demonstrate that human listeners show a systematic perceptual bias in processing the highest voice in multi-voiced music. Here, we review studies utilizing event-related brain potentials (ERPs), which support the notions that (1) separate memory traces are formed for two simultaneous voices (even without conscious awareness) in auditory cortex and (2) adults show more robust encoding (i.e., larger ERP responses) to deviant pitches in the higher than in the lower voice, indicating better encoding of the former. Furthermore, infants also show this high-voice superiority effect, suggesting that the perceptual dominance observed across studies might result from neurophysiological characteristics of the peripheral auditory system. Although musically untrained adults show smaller responses in general than musically trained adults, both groups similarly show a more robust cortical representation of the higher than of the lower voice. Finally, years of experience playing a bass-range instrument reduces but does not reverse the high voice superiority effect, indicating that although it can be modified, it is not highly neuroplastic. Results of new modeling experiments examined the possibility that characteristics of middle-ear filtering and cochlear dynamics (e.g., suppression) reflected in auditory nerve firing patterns might account for the higher-voice superiority effect. Simulations show that both place and temporal AN coding schemes well-predict a high-voice superiority across a wide range of interval spacings and registers. Collectively, we infer an innate, peripheral origin for the higher-voice superiority observed in human

  4. Pilot-Streaming: Design Considerations for a Stream Processing Framework for High-Performance Computing

    OpenAIRE

    Andre Luckow; Peter Kasson; Shantenu Jha

    2016-01-01

    This White Paper (submitted to STREAM 2016) identifies an approach to integrate streaming data with HPC resources. The paper outlines the design of Pilot-Streaming, which extends the concept of Pilot-abstraction to streaming real-time data.

  5. Evidence of Reliability and Validity for a Children’s Auditory Continuous Performance Test

    Directory of Open Access Journals (Sweden)

    Michael J. Lasee

    2013-11-01

    Full Text Available Continuous Performance Tests (CPTs are commonly utilized clinical measures of attention and response inhibition. While there have been many studies of CPTs that utilize a visual format, there is considerably less research employing auditory CPTs. The current study provides initial reliability and validity evidence for the Auditory Vigilance Screening Measure (AVSM, a newly developed CPT. Participants included 105 five- to nine-year-old children selected from two rural Midwestern school districts. Reliability data for the AVSM was collected through retesting of 42 participants. Validity was evaluated through correlation of AVSM scales with subscales from the ADHD Rating Scale–IV. Test–retest reliability coefficients ranged from .62 to .74 for AVSM subscales. A significant (r = .31 correlation was obtained between the AVSM Impulsivity Scale and teacher ratings of inattention. Limitations and implications for future study are discussed.

  6. Modelling auditory attention: Insights from the Theory of Visual Attention (TVA)

    DEFF Research Database (Denmark)

    Roberts, K. L.; Andersen, Tobias; Kyllingsbæk, Søren

    We report initial progress towards creating an auditory analogue of a mathematical model of visual attention: the ‘Theory of Visual Attention’ (TVA; Bundesen, 1990). TVA is one of the best established models of visual attention. It assumes that visual stimuli are initially processed in parallel..., and that there is a ‘race’ for selection and representation in visual short term memory (VSTM). In the basic TVA task, participants view a brief display of letters and are asked to report either all of the letters (whole report) or a subset of the letters (e.g., the red letters; partial report). Fitting the model... better modelled using a log-logistic function than an exponential function. A more challenging difference is that in the partial report task, there is more target-distractor confusion for auditory than visual stimuli. This failure of object-formation (prior to attentional object-selection) is not yet effectively...
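    For context, the core of TVA referred to above is Bundesen's (1990) rate equation, reproduced below from the published model rather than from the present abstract; the exponentially distributed processing times it implies are the assumption that the log-logistic fit mentioned above calls into question.

```latex
% TVA rate equation (Bundesen, 1990): the rate at which the categorization
% "object x belongs to category i" is encoded into visual short term memory.
\[
  v(x,i) = \eta(x,i)\,\beta_i\,\frac{w_x}{\sum_{z \in S} w_z},
  \qquad
  w_x = \sum_{j \in R} \eta(x,j)\,\pi_j ,
\]
% where \eta(x,i) is the sensory evidence that object x belongs to category i,
% \beta_i is the decision bias for category i, \pi_j is the pertinence of
% category j, S is the set of objects in the display and R the set of relevant
% categories.  Processing times are assumed to be exponentially distributed
% with rate v(x,i), and the first objects to finish the race enter VSTM.
```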

  7. Nodular Fasciitis of External Auditory Canal

    Directory of Open Access Journals (Sweden)

    Jihyun Ahn

    2016-09-01

    Full Text Available Nodular fasciitis is a pseudosarcomatous reactive process composed of fibroblasts and myofibroblasts, and it is most common in the upper extremities. Nodular fasciitis of the external auditory canal is rare. To the best of our knowledge, less than 20 cases have been reported to date. We present a case of nodular fasciitis arising in the cartilaginous part of the external auditory canal. A 19-year-old man complained of an auricular mass with pruritus. Computed tomography showed a 1.7 cm sized soft tissue mass in the right external auditory canal, and total excision was performed. Histologic examination revealed spindle or stellate cells proliferation in a fascicular and storiform pattern. Lymphoid cells and erythrocytes were intermixed with tumor cells. The stroma was myxoid to hyalinized with a few microcysts. The tumor cells were immunoreactive for smooth muscle actin, but not for desmin, caldesmon, CD34, S-100, anaplastic lymphoma kinase, and cytokeratin. The patient has been doing well during the 1 year follow-up period.

  8. Binaural processing by the gecko auditory periphery.

    Science.gov (United States)

    Christensen-Dalsgaard, Jakob; Tang, Yezhong; Carr, Catherine E

    2011-05-01

    Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼ 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200-500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional.

  9. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity across three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  10. Central auditory masking by an illusory tone.

    Directory of Open Access Journals (Sweden)

    Christopher J Plack

    Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.
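
    For readers unfamiliar with the stimulus, a Huggins-pitch-like noise pair can be generated by copying one channel of noise to the other ear and decorrelating a narrow frequency band between the ears. The sketch below uses a simple phase-inversion variant with illustrative parameters (600 Hz centre frequency, 16% bandwidth); published studies typically use a smoother interaural phase transition and other band placements.

    ```python
    import numpy as np

    def huggins_pitch(duration=1.0, fs=44100, f0=600.0, rel_bw=0.16, seed=1):
        """Generate a dichotic noise pair: the right channel is a copy of the
        left except that components within a narrow band around f0 are phase
        inverted, decorrelating that band between the ears (a simple
        phase-inversion variant; parameters are illustrative)."""
        rng = np.random.default_rng(seed)
        n = int(duration * fs)
        left = rng.standard_normal(n)
        spec = np.fft.rfft(left)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        band = np.abs(freqs - f0) < 0.5 * rel_bw * f0
        spec_right = spec.copy()
        spec_right[band] *= -1.0            # pi phase shift within the band
        right = np.fft.irfft(spec_right, n)
        # normalise both channels to a common rms and a safe level
        rms = np.sqrt(np.mean(left ** 2))
        return np.stack([left / rms, right / rms], axis=1) * 0.05

    stereo = huggins_pitch()
    print(stereo.shape)  # (samples, 2); listeners typically hear a faint tonal component near f0
    ```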

  11. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

    This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  12. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

    OBJECTIVE: To verify the auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test, SSW) were applied in 13 children (7 boys), from 7 to 16 years, with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: The attention performance showed significant differences in comparison to the control group in both kinds of tests. In the non-verbal test, identification at the ear opposite the lesion in the free recall stage was diminished and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and difficulty in focusing in the attended stages was found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and demand of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  13. Brainstem auditory evoked potentials in horses

    Directory of Open Access Journals (Sweden)

    Juliana Almeida Nogueira da Gama

    2016-04-01

    The brainstem auditory evoked potential (BAEP) evaluates the integrity of the auditory pathways to the brainstem. The aim of this study was to evoke BAEPs in 21 clinically normal horses. The animals were sedated with detomidine hydrochloride (0.013 mg/kg BW). Earphones were inserted and rarefaction clicks at 90 dB and noise masking at 40 dB were used. After performing the test, the latencies of waves I, II, III, IV, and V and of the interpeaks I-III, III-V, and I-V were identified. The mean latencies of the waves were as follows: wave I, 2.4 ms; wave II, 2.24 ms; wave III, 3.61 ms; wave IV, 4.61 ms; and wave V, 5.49 ms. The mean latencies of the interpeaks were as follows: I-III, 1.37 ms; III-V, 1.88 ms; and I-V, 3.26 ms. This is the first study using BAEPs in horses in Brazil, and the observed latencies will be used as normative data for the interpretation of tests performed on horses with changes related to the auditory system or neurologic abnormalities.

  14. The effects of auditory enrichment on gorillas.

    Science.gov (United States)

    Robbins, Lindsey; Margulis, Susan W

    2014-01-01

    Several studies have demonstrated that auditory enrichment can reduce stereotypic behaviors in captive animals. The purpose of this study was to determine the relative effectiveness of three different types of auditory enrichment (naturalistic sounds, classical music, and rock music) in reducing stereotypic behavior displayed by Western lowland gorillas (Gorilla gorilla gorilla). Three gorillas (one adult male, two adult females) were observed at the Buffalo Zoo for a total of 24 hr per music trial. A control observation period, during which no sounds were presented, was also included. Each music trial consisted of a total of three weeks, with a 1-week control period in between each music type. The results reveal a decrease in stereotypic behaviors from the control period to naturalistic sounds. The naturalistic sounds also affected patterns of several other behaviors, including locomotion. In contrast, stereotypy increased in the presence of classical and rock music. These results suggest that auditory enrichment, which is not commonly used in zoos in a systematic way, can be easily utilized by keepers to help decrease stereotypic behavior, but the nature of the stimulus, as well as the differential responses of individual animals, need to be considered. © 2014 Wiley Periodicals, Inc.

  15. Dynamical modeling of tidal streams

    International Nuclear Information System (INIS)

    Bovy, Jo

    2014-01-01

    I present a new framework for modeling the dynamics of tidal streams. The framework consists of simple models for the initial action-angle distribution of tidal debris, which can be straightforwardly evolved forward in time. Taking advantage of the essentially one-dimensional nature of tidal streams, the transformation to position-velocity coordinates can be linearized and interpolated near a small number of points along the stream, thus allowing for efficient computations of a stream's properties in observable quantities. I illustrate how to calculate the stream's average location (its 'track') in different coordinate systems, how to quickly estimate the dispersion around its track, and how to draw mock stream data. As a generative model, this framework allows one to compute the full probability distribution function and marginalize over or condition it on certain phase-space dimensions as well as convolve it with observational uncertainties. This will be instrumental in proper data analysis of stream data. In addition to providing a computationally efficient practical tool for modeling the dynamics of tidal streams, the action-angle nature of the framework helps elucidate how the observed width of the stream relates to the velocity dispersion or mass of the progenitor, and how the progenitors of 'orphan' streams could be located. The practical usefulness of the proposed framework crucially depends on the ability to calculate action-angle variables for any orbit in any gravitational potential. A novel method for calculating actions, frequencies, and angles in any static potential using a single orbit integration is described in the Appendix.
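
    The core idea, that the angle offsets of stripped stars grow linearly with their frequency offsets over the time since stripping, can be caricatured in a few lines. The sketch below is a one-dimensional toy with assumed Gaussian offsets and uniform stripping times; it is not the full generative model of the paper, which works in three-dimensional action-angle space and includes the linearized transformation back to observable coordinates.

    ```python
    import numpy as np

    def stream_angle_offsets(t_disrupt, d_omega_sigma, n_stars=2000, seed=2):
        """Toy one-dimensional version of an action-angle stream model:
        debris is released with small frequency offsets d_omega relative to
        the progenitor and then phase-mixes, so its angle offset grows
        linearly with the time since stripping,
            d_theta(t) = d_theta_0 + d_omega * (t - t_strip).
        Purely illustrative; real models work in full 3D action-angle space."""
        rng = np.random.default_rng(seed)
        d_omega = rng.normal(0.0, d_omega_sigma, n_stars)        # frequency offsets
        d_theta0 = rng.normal(0.0, 0.2 * d_omega_sigma, n_stars)  # initial angle spread
        t_strip = rng.uniform(0.0, t_disrupt, n_stars)            # stripping times
        return d_theta0 + d_omega * (t_disrupt - t_strip)

    offsets = stream_angle_offsets(t_disrupt=5.0, d_omega_sigma=0.02)
    print("rms angle spread along the stream:", offsets.std())
    ```

    In this picture the stream's length and width along the angle coordinate are set directly by the frequency dispersion of the debris and the time since disruption, which is the intuition the abstract describes.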

  16. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.

  17. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  18. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

  19. Across frequency processes involved in auditory detection of coloration

    DEFF Research Database (Denmark)

    Buchholz, Jörg; Kerketsos, P

    2008-01-01

    When an early wall reflection is added to a direct sound, a spectral modulation is introduced to the signal's power spectrum. This spectral modulation typically produces an auditory sensation of coloration or pitch. Throughout this study, auditory spectral-integration effects involved in coloration detection are investigated. Coloration detection thresholds were therefore measured as a function of reflection delay and stimulus bandwidth. In order to investigate the involved auditory mechanisms, an auditory model was employed that was conceptually similar to the peripheral weighting model [Yost, JASA ...]; its filterbank was designed to approximate auditory filter shapes measured by Oxenham and Shera [JARO, 2003, 541-554], derived from forward masking data. The results of the present study demonstrate that a "purely" spectrum-based model approach can successfully describe auditory coloration detection even at high ...

  20. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors.

  1. Air pollution is associated with brainstem auditory nuclei pathology and delayed brainstem auditory evoked potentials.

    Science.gov (United States)

    Calderón-Garcidueñas, Lilian; D'Angiulli, Amedeo; Kulesza, Randy J; Torres-Jardón, Ricardo; Osnaya, Norma; Romero, Lina; Keefe, Sheyla; Herritt, Lou; Brooks, Diane M; Avila-Ramirez, Jose; Delgado-Chávez, Ricardo; Medina-Cortina, Humberto; González-González, Luis Oscar

    2011-06-01

    We assessed brainstem inflammation in children exposed to air pollutants by comparing brainstem auditory evoked potentials (BAEPs) and blood inflammatory markers in children aged 96.3±8.5 months from a highly polluted city (n=34) versus a less polluted city (n=17). The brainstems of nine children who died accidentally were also examined. Children from the highly polluted environment had significant BAEP delays, including wave III (t(50)=17.038; p<0.0001) and a second significantly delayed component (t(50)=7.501; p<0.0001), consistent with delayed central conduction time of brainstem neural transmission. Highly exposed children showed significant evidence of inflammatory markers, and their auditory and vestibular nuclei accumulated α-synuclein and/or β-amyloid(1-42). Medial superior olive neurons, critically involved in BAEPs, displayed significant pathology. Children's exposure to urban air pollution increases their risk for auditory and vestibular impairment. Copyright © 2011 ISDN. Published by Elsevier Ltd. All rights reserved.

  2. Beneficial auditory and cognitive effects of auditory brainstem implantation in children.

    Science.gov (United States)

    Colletti, Liliana

    2007-09-01

    This preliminary study demonstrates the development of hearing ability and shows that there is a significant improvement in some cognitive parameters related to selective visual/spatial attention and to fluid or multisensory reasoning, in children fitted with auditory brainstem implantation (ABI). The improvement in cognitive parameters is due to several factors, among which there is certainly, as demonstrated in the literature on cochlear implants (CIs), the activation of the auditory sensory canal, which was previously absent. The findings of the present study indicate that children with cochlear or cochlear nerve abnormalities with associated cognitive deficits should not be excluded from ABI implantation. The indications for ABI have been extended over the last 10 years to adults with non-tumoral (NT) cochlear or cochlear nerve abnormalities that cannot benefit from CI. We demonstrated that the ABI with surface electrodes may provide sufficient stimulation of the central auditory system in adults for open set speech recognition. These favourable results motivated us to extend ABI indications to children with profound hearing loss who were not candidates for a CI. This study investigated the performances of young deaf children undergoing ABI, in terms of their auditory perceptual development and their non-verbal cognitive abilities. In our department from 2000 to 2006, 24 children aged 14 months to 16 years received an ABI for different tumour and non-tumour diseases. Two children had NF2 tumours. Eighteen children had bilateral cochlear nerve aplasia. In this group, nine children had associated cochlear malformations, two had unilateral facial nerve agenesia and two had combined microtia, aural atresia and middle ear malformations. Four of these children had previously been fitted elsewhere with a CI with no auditory results. One child had bilateral incomplete cochlear partition (type II); one child, who had previously been fitted unsuccessfully elsewhere ...

  3. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory

    Science.gov (United States)

    Kraus, Nina; Strait, Dana; Parbery-Clark, Alexandra

    2012-01-01

    Musicians benefit from real-life advantages such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians’ auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. PMID:22524346

  4. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2008-01-01

    In the present study, a novel multichannel loudspeaker-based virtual auditory environment (VAE) is introduced. The VAE aims at providing a versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The environment is based on the ODEON room acoustic simulation software to render the acoustical scene. ODEON outputs are processed using a combination of different order Ambisonic techniques to calculate multichannel room impulse responses (mRIR). Auralization is then obtained by the convolution ... During the VAE development, special care was taken in order to achieve a realistic auditory percept and to avoid "artifacts" such as unnatural coloration. The performance of the VAE has been evaluated and optimized on a 29-loudspeaker setup using both objective and subjective measurement techniques.
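
    The auralization step described above amounts to convolving a dry (anechoic) source signal with each channel of the multichannel room impulse response to obtain one playback signal per loudspeaker. The sketch below illustrates that step only; the array shapes, channel count and synthetic impulse response are assumptions for illustration, not the actual VAE implementation.

    ```python
    import numpy as np

    def auralize(dry_signal, mrir):
        """Convolve a dry (anechoic) source signal with a multichannel room
        impulse response to obtain one playback signal per loudspeaker.
        mrir has shape (n_samples_ir, n_channels); purely illustrative of
        the auralization step, not the VAE's actual processing chain."""
        n_out = dry_signal.size + mrir.shape[0] - 1
        out = np.zeros((n_out, mrir.shape[1]))
        for ch in range(mrir.shape[1]):
            out[:, ch] = np.convolve(dry_signal, mrir[:, ch])
        return out

    # toy example: 0.5 s of noise through a random, decaying 29-channel "room response"
    rng = np.random.default_rng(0)
    dry = rng.standard_normal(22050)
    mrir = rng.standard_normal((4410, 29)) * np.exp(-np.arange(4410) / 800.0)[:, None]
    signals = auralize(dry, mrir)
    print(signals.shape)   # (samples, 29) -> one channel per loudspeaker
    ```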

  5. Headwater Stream Management Dichotomies: Local Amphibian Habitat vs. Downstream Fish Habitat

    Science.gov (United States)

    Jackson, C. R.

    2002-12-01

    Small headwater streams in mountainous areas of the Pacific Northwest often do not harbor fish populations because of low water depth and high gradients. Rather, these streams provide habitat for dense assemblages of stream-dwelling amphibians. A variety of management goals have been suggested for such streams, such as encouraging large woody debris recruitment to assist in sediment trapping and valley floor formation, encouraging large woody debris recruitment to provide downstream wood when debris flows occur, providing continuous linear stream buffers within forest harvest areas to provide shade and bank stability, etc. A basic problem with analysing the geomorphic or biotic benefits of any of these strategies is the lack of explicit management goals for such streams. Should managers strive to optimize downstream fish habitat, local amphibian habitat, or both? Through observational data and theoretical considerations, it will be shown that these biotic goals will lead to very different geomorphic management recommendations. For instance, woody debris greater than 60 cm diameter may assist in valley floor development, but it is likely to create subsurface channel flow of unknown value to amphibians. Trapping and retention of fine sediments within headwater streams may improve downstream spawning gravels, but degrade stream-dwelling amphibian habitat. In response to the need for descriptive information on habitat and channel morphology specific to small, non-fish-bearing streams in the Pacific Northwest, morphologies and wood frequencies in forty-two first- and second-order forested streams less than four meters wide were surveyed. Frequencies and size distributions of woody debris were compared between small streams and larger fish-bearing streams as well as between second-growth and virgin timber streams. Statistical models were developed to explore dominant factors affecting channel morphology and habitat. Findings suggest geomorphological relationships ...

  6. Sensorineural hearing loss degrades behavioral and physiological measures of human spatial selective auditory attention

    Science.gov (United States)

    Dai, Lengshi; Best, Virginia; Shinn-Cunningham, Barbara G.

    2018-01-01

    Listeners with sensorineural hearing loss often have trouble understanding speech amid other voices. While poor spatial hearing is often implicated, direct evidence is weak; moreover, studies suggest that reduced audibility and degraded spectrotemporal coding may explain such problems. We hypothesized that poor spatial acuity leads to difficulty deploying selective attention, which normally filters out distracting sounds. In listeners with normal hearing, selective attention causes changes in the neural responses evoked by competing sounds, which can be used to quantify the effectiveness of attentional control. Here, we used behavior and electroencephalography to explore whether control of selective auditory attention is degraded in hearing-impaired (HI) listeners. Normal-hearing (NH) and HI listeners identified a simple melody presented simultaneously with two competing melodies, each simulated from different lateral angles. We quantified performance and attentional modulation of cortical responses evoked by these competing streams. Compared with NH listeners, HI listeners had poorer sensitivity to spatial cues, performed more poorly on the selective attention task, and showed less robust attentional modulation of cortical responses. Moreover, across NH and HI individuals, these measures were correlated. While both groups showed cortical suppression of distracting streams, this modulation was weaker in HI listeners, especially when attending to a target at midline, surrounded by competing streams. These findings suggest that hearing loss interferes with the ability to filter out sound sources based on location, contributing to communication difficulties in social situations. These findings also have implications for technologies aiming to use neural signals to guide hearing aid processing. PMID:29555752

  7. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Maojin Liang

    2017-10-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We supposed that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential ability of cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  8. Visually Evoked Visual-Auditory Changes Associated with Auditory Performance in Children with Cochlear Implants.

    Science.gov (United States)

    Liang, Maojin; Zhang, Junpeng; Liu, Jiahao; Chen, Yuebo; Cai, Yuexin; Wang, Xianjun; Wang, Junbo; Zhang, Xueyuan; Chen, Suijun; Li, Xianghui; Chen, Ling; Zheng, Yiqing

    2017-01-01

    Activation of the auditory cortex by visual stimuli has been reported in deaf children. In cochlear implant (CI) patients, a residual, more intense cortical activation in the frontotemporal areas in response to photo stimuli was found to be positively associated with poor auditory performance. Our study aimed to investigate the mechanism by which visual processing in CI users activates the auditory-associated cortex during the period after cochlear implantation, as well as its relation to CI outcomes. Twenty prelingually deaf children with CI were recruited. Ten children were good CI performers (GCP) and ten were poor (PCP). Ten age- and sex-matched normal-hearing children were recruited as controls, and visual evoked potentials (VEPs) were recorded. The characteristics of the right frontotemporal N1 component were analyzed. In the prelingually deaf children, higher N1 amplitude was observed compared to normal controls. The GCP group showed significant decreases in N1 amplitude, and source analysis showed that the most significant decrease in brain activity occurred in the primary visual cortex (PVC), with a downward trend in primary auditory cortex (PAC) activity; these changes did not occur in the PCP group. Meanwhile, higher PVC activation (compared to controls) before CI use (0M) and a significant decrease in source energy after CI use were found to be related to good CI outcomes. In the GCP group, source energy decreased in the visual-auditory cortex with CI use. However, no significant cerebral hemispheric dominance was found. We supposed that intra- or cross-modal reorganization and higher PVC activation in prelingually deaf children may reflect a stronger potential ability of cortical plasticity. Brain activity evolution appears to be related to CI auditory outcomes.

  9. The impact of visual gaze direction on auditory object tracking

    OpenAIRE

    Pomper, U.; Chait, M.

    2017-01-01

    Subjective experience suggests that we are able to direct our auditory attention independent of our visual gaze, e.g. when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated both auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention wh...

  10. Fundamental deficits of auditory perception in Wernicke's aphasia.

    Science.gov (United States)

    Robson, Holly; Grube, Manon; Lambon Ralph, Matthew A; Griffiths, Timothy D; Sage, Karen

    2013-01-01

    This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. Magnetic resonance imaging of the internal auditory canal

    International Nuclear Information System (INIS)

    Daniels, D.L.; Herfkins, R.; Koehler, P.R.; Millen, S.J.; Shaffer, K.A.; Williams, A.L.; Haughton, V.M.

    1984-01-01

    Three patients with exclusively or predominantly intracanalicular neuromas and 5 with presumably normal internal auditory canals were examined with prototype 1.4- or 1.5-tesla magnetic resonance (MR) scanners. MR images showed the 7th and 8th cranial nerves in the internal auditory canal. The intracanalicular neuromas had larger diameter and slightly greater signal strength than the nerves. Early results suggest that minimal enlargement of the nerves can be detected even in the internal auditory canal

  12. Using Facebook to Reach People Who Experience Auditory Hallucinations

    OpenAIRE

    Crosier, Benjamin Sage; Brian, Rachel Marie; Ben-Zeev, Dror

    2016-01-01

    Background Auditory hallucinations (eg, hearing voices) are relatively common and underreported false sensory experiences that may produce distress and impairment. A large proportion of those who experience auditory hallucinations go unidentified and untreated. Traditional engagement methods oftentimes fall short in reaching the diverse population of people who experience auditory hallucinations. Objective The objective of this proof-of-concept study was to examine the viability of leveraging...

  13. Depth-Dependent Temporal Response Properties in Core Auditory Cortex

    OpenAIRE

    Christianson, G. Björn; Sahani, Maneesh; Linden, Jennifer F.

    2011-01-01

    The computational role of cortical layers within auditory cortex has proven difficult to establish. One hypothesis is that interlaminar cortical processing might be dedicated to analyzing temporal properties of sounds; if so, then there should be systematic depth-dependent changes in cortical sensitivity to the temporal context in which a stimulus occurs. We recorded neural responses simultaneously across cortical depth in primary auditory cortex and anterior auditory field of CBA/Ca mice, an...

  14. CT of the external auditory canal: Correlation with clinical otoscopy

    International Nuclear Information System (INIS)

    Shankar, L.; Hawke, M.; Leekam, R.N.

    1987-01-01

    CT is the modality of choice in the assessment of external auditory canal abnormalities. Disorders of the complex structures within the ear that may be difficult to define clinically are well visualized on high-resolution CT. This exhibit illustrates various external auditory canal abnormalities and correlates these with color illustrations from clinical otoscopy. Congenital lesions of the external auditory canal - microtia, temporomandibular joint herniation, and fistulas - and various acquired lesions - traumatic, inflammatory, and neoplastic - are reviewed in this exhibit.

  15. Auditory and visual memory in musicians and nonmusicians

    OpenAIRE

    Cohen, Michael A.; Evans, Karla K.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2011-01-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory ...

  16. Auditory recognition memory is inferior to visual recognition memory

    OpenAIRE

    Cohen, Michael A.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2009-01-01

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, h...

  17. Influence of Gully Erosion Control on Amphibian and Reptile Communities within Riparian Zones of Channelized Streams

    Science.gov (United States)

    Riparian zones of streams in northwestern Mississippi have been impacted by agriculture, channelization, channel incision, and gully erosion. Riparian gully formation has resulted in the fragmentation of remnant riparian zones within agricultural watersheds. One widely used conservation practice for...

  18. Percent Forest Adjacent to Streams (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  19. Stream Habitat Reach Summary - NCWAP [ds158

    Data.gov (United States)

    California Natural Resource Agency — The Stream Habitat - NCWAP - Reach Summary [ds158] shapefile contains in-stream habitat survey data summarized to the stream reach level. It is a derivative of the...

  20. Percent Agriculture Adjacent to Streams (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  1. Widespread auditory deficits in tune deafness.

    Science.gov (United States)

    Jones, Jennifer L; Zalewski, Christopher; Brewer, Carmen; Lucker, Jay; Drayna, Dennis

    2009-02-01

    The goal of this study was to investigate auditory function in individuals with deficits in musical pitch perception. We hypothesized that such individuals have deficits in nonspeech areas of auditory processing. We screened 865 randomly selected individuals to identify those who scored poorly on the Distorted Tunes test (DTT), a measure of musical pitch recognition ability. Those who scored poorly were given a comprehensive audiologic examination, and those with hearing loss or other confounding audiologic factors were excluded from further testing. Thirty-five individuals with tune deafness constituted the experimental group. Thirty-four individuals with normal hearing and normal DTT scores, matched for age, gender, handedness, and education, and without overt or reported psychiatric disorders made up the normal control group. Individual and group performance for pure-tone frequency discrimination at 1000 Hz was determined by measuring the difference limen for frequency (DLF). Auditory processing abilities were assessed using tests of pitch pattern recognition, duration pattern recognition, and auditory gap detection. In addition, we evaluated both attention and short- and long-term memory as variables that might influence performance on our experimental measures. Differences between groups were evaluated statistically using Wilcoxon nonparametric tests and t-tests as appropriate. The DLF at 1000 Hz in the group with tune deafness was significantly larger than that of the normal control group. However, approximately one-third of participants with tune deafness had DLFs within the range of performance observed in the control group. Many individuals with tune deafness also displayed a high degree of variability in their intertrial frequency discrimination performance that could not be explained by deficits in memory or attention. Pitch and duration pattern discrimination and auditory gap-detection ability were significantly poorer in the group with tune deafness
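
    The difference limen for frequency (DLF) reported above is typically estimated with an adaptive procedure. The following is a generic 2-down/1-up staircase (which converges near 70.7% correct) run against a simulated listener; the psychometric function, step sizes and stopping rule are illustrative assumptions, not the exact protocol used in the study.

    ```python
    import numpy as np

    def simulate_dlf(true_dlf_hz=6.0, start_delta=50.0, n_reversals=12, seed=3):
        """Generic 2-down/1-up adaptive staircase for frequency discrimination
        at 1 kHz.  The 'listener' is simulated with a simple psychometric
        function; this illustrates the class of procedure, not the study's
        exact protocol."""
        rng = np.random.default_rng(seed)
        delta, step = start_delta, 2.0        # current frequency difference (Hz), step factor
        correct_run, reversals, going_down = 0, [], None
        while len(reversals) < n_reversals:
            p_correct = 0.5 + 0.5 / (1.0 + (true_dlf_hz / delta) ** 2)
            correct = rng.uniform() < p_correct
            correct_run = correct_run + 1 if correct else 0
            if correct_run == 2:              # two correct in a row -> make it harder
                new_dir, correct_run, delta = True, 0, delta / step
            elif not correct:                 # one wrong -> make it easier
                new_dir, delta = False, delta * step
            else:
                continue                      # single correct response -> no change
            if going_down is not None and new_dir != going_down:
                reversals.append(delta)       # direction changed: record a reversal
            going_down = new_dir
            step = 1.25 if len(reversals) >= 4 else 2.0   # smaller steps later in the run
        return np.exp(np.mean(np.log(reversals[-8:])))    # geometric mean of late reversals

    print("estimated DLF (Hz):", round(simulate_dlf(), 2))
    ```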

  2. Streaming Velocities and the Baryon Acoustic Oscillation Scale.

    Science.gov (United States)

    Blazek, Jonathan A; McEwen, Joseph E; Hirata, Christopher M

    2016-03-25

    At the epoch of decoupling, cosmic baryons had supersonic velocities relative to the dark matter that were coherent on large scales. These velocities subsequently slow the growth of small-scale structure and, via feedback processes, can influence the formation of larger galaxies. We examine the effect of streaming velocities on the galaxy correlation function, including all leading-order contributions for the first time. We find that the impact on the baryon acoustic oscillation (BAO) peak is dramatically enhanced (by a factor of ∼5) over the results of previous investigations, with the primary new effect due to advection: if a galaxy retains memory of the primordial streaming velocity, it does so at its Lagrangian, rather than Eulerian, position. Since correlations in the streaming velocity change rapidly at the BAO scale, this advection term can cause a significant shift in the observed BAO position. If streaming velocities impact tracer density at the 1% level, compared to the linear bias, the recovered BAO scale is shifted by approximately 0.5%. This new effect, which is required to preserve Galilean invariance, greatly increases the importance of including streaming velocities in the analysis of upcoming BAO measurements and opens a new window to the astrophysics of galaxy formation.
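
    The headline effect, that a percent-level contamination whose spatial correlation varies rapidly across the BAO feature can shift the apparent acoustic peak by a few tenths of a percent, can be illustrated numerically. The toy calculation below adds a derivative-shaped contaminant to an idealised Gaussian BAO bump; the numbers and functional forms are illustrative assumptions, not the perturbation-theory calculation of the paper.

    ```python
    import numpy as np

    # Toy illustration: a small contamination whose shape varies rapidly across
    # the BAO feature shifts the apparent peak position.
    r = np.linspace(80.0, 120.0, 4001)                       # separation in Mpc/h
    r_bao, width = 100.0, 8.0
    xi_gal = np.exp(-0.5 * ((r - r_bao) / width) ** 2)       # idealised BAO bump
    contaminant = np.gradient(xi_gal, r)                     # rapidly varying term
    contaminant /= np.max(np.abs(contaminant))               # normalise to unit peak

    for eps in (0.0, 0.01, 0.05):
        xi_obs = xi_gal + eps * contaminant
        peak = r[np.argmax(xi_obs)]
        print(f"contamination {eps:.2f}: apparent peak at r = {peak:.2f} Mpc/h")
    ```

    Even a 1% admixture of the rapidly varying term moves the fitted peak by a few tenths of a Mpc/h in this toy setup, which is the qualitative point the abstract makes about BAO-scale shifts.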

  3. Effects of Auditory Stimuli on Visual Velocity Perception

    Directory of Open Access Journals (Sweden)

    Michiaki Shibata

    2011-10-01

    We investigated the effects of auditory stimuli on the perceived velocity of a moving visual stimulus. Previous studies have reported that the duration of visual events is perceived as being longer for events filled with auditory stimuli than for events not filled with auditory stimuli, i.e., the so-called "filled-duration illusion." In this study, we have shown that auditory stimuli also affect the perceived velocity of a moving visual stimulus. In Experiment 1, a moving comparison stimulus (4.2-5.8 deg/s) was presented together with filled (or unfilled) white-noise bursts or with no sound. The standard stimulus was a moving visual stimulus (5 deg/s) presented before or after the comparison stimulus. The participants had to judge which stimulus was moving faster. The results showed that the perceived velocity in the auditory-filled condition was lower than that in the auditory-unfilled and no-sound conditions. In Experiment 2, we investigated the effects of auditory stimuli on velocity adaptation. The results showed that the effects of velocity adaptation in the auditory-filled condition were weaker than those in the no-sound condition. These results indicate that auditory stimuli tend to decrease the perceived velocity of a moving visual stimulus.

  4. Changes in the Adult Vertebrate Auditory Sensory Epithelium After Trauma

    Science.gov (United States)

    Oesterle, Elizabeth C.

    2012-01-01

    Auditory hair cells transduce sound vibrations into membrane potential changes, ultimately leading to changes in neuronal firing and sound perception. This review provides an overview of the characteristics and repair capabilities of traumatized auditory sensory epithelium in the adult vertebrate ear. Injured mammalian auditory epithelium repairs itself by forming permanent scars but is unable to regenerate replacement hair cells. In contrast, injured non-mammalian vertebrate ear generates replacement hair cells to restore hearing functions. Non-sensory support cells within the auditory epithelium play key roles in the repair processes. PMID:23178236

  5. Visual form predictions facilitate auditory processing at the N1.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2017-02-20

    Auditory-visual (AV) events often involve a leading visual cue (e.g. auditory-visual speech) that allows the perceiver to generate predictions about the upcoming auditory event. Electrophysiological evidence suggests that when an auditory event is predicted, processing is sped up, i.e., the N1 component of the ERP occurs earlier (N1 facilitation). However, it is not clear (1) whether N1 facilitation is based specifically on predictive rather than multisensory integration and (2) which particular properties of the visual cue it is based on. The current experiment used artificial AV stimuli in which visual cues predicted but did not co-occur with auditory cues. Visual form cues (high and low salience) and the auditory-visual pairing were manipulated so that auditory predictions could be based on form and timing or on timing only. The results showed that N1 facilitation occurred only for combined form and temporal predictions. These results suggest that faster auditory processing (as indicated by N1 facilitation) is based on predictive processing generated by a visual cue that clearly predicts both what and when the auditory stimulus will occur. Copyright © 2016. Published by Elsevier Ltd.

  6. The attenuation of auditory neglect by implicit cues.

    Science.gov (United States)

    Coleman, A Rand; Williams, J Michael

    2006-09-01

    This study examined implicit semantic and rhyming cues on perception of auditory stimuli among nonaphasic participants who suffered a lesion of the right cerebral hemisphere and auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact among these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.

  7. Neural Correlates of Automatic and Controlled Auditory Processing in Schizophrenia

    Science.gov (United States)

    Morey, Rajendra A.; Mitchell, Teresa V.; Inan, Seniha; Lieberman, Jeffrey A.; Belger, Aysenil

    2009-01-01

    Individuals with schizophrenia demonstrate impairments in selective attention and sensory processing. The authors assessed differences in brain function between 26 participants with schizophrenia and 17 comparison subjects engaged in automatic (unattended) and controlled (attended) auditory information processing using event-related functional MRI. Lower regional neural activation during automatic auditory processing in the schizophrenia group was not confined to just the temporal lobe, but also extended to prefrontal regions. Controlled auditory processing was associated with a distributed frontotemporal and subcortical dysfunction. Differences in activation between these two modes of auditory information processing were more pronounced in the comparison group than in the patient group. PMID:19196926

  8. THE DAWNING OF THE STREAM OF AQUARIUS IN RAVE

    International Nuclear Information System (INIS)

    Williams, M. E. K.; Steinmetz, M.; De Jong, R. S.; Minchev, I.; Sharma, S.; Bland-Hawthorn, J.; Parker, Q. A.; Seabroke, G. M.; Helmi, A.; Freeman, K. C.; Binney, J.; Bienayme, O.; Campbell, R.; Fulbright, J. P.; Gibson, B. K.; Gilmore, G. F.; Grebel, E. K.; Munari, U.; Navarro, J. F.; Reid, W.

    2011-01-01

    We identify a new, nearby (distances down to ∼0.5 kpc) stream of stars in RAVE data, with heliocentric line-of-sight velocities V_los ∼ -200 km/s. The members are outliers in the radial velocity distribution, and the overdensity is statistically significant when compared to mock samples created with both the Besançon Galaxy model and the newly developed code Galaxia. The metallicity distribution function and isochrone fit in the log g-T_eff plane suggest that the stream consists of a 10 Gyr old population with [M/H] ∼ -1.0. We explore relations to other streams and substructures, finding that the stream cannot be identified with known structures: it is a new, nearby substructure in the Galaxy's halo. Using a simple dynamical model of a dissolving satellite galaxy, we account for the localization of the stream. We find that the stream is dynamically young and therefore likely the debris of a recently disrupted dwarf galaxy or globular cluster. The Aquarius stream is thus a specimen of ongoing hierarchical Galaxy formation, rare for being right in the solar suburb.

  9. Spring 5 & reactive streams

    CERN Multimedia

    CERN. Geneva; Clozel, Brian

    2017-01-01

    Spring is a framework widely used by the world-wide Java community, and it is also extensively used at CERN. The accelerator control system is constituted of 10 million lines of Java code, spread across more than 1000 projects (jars) developed by 160 software engineers. Around half of this (all server-side Java code) is based on the Spring framework. Warning: the speakers will assume that people attending the seminar are familiar with Java and Spring’s basic concepts. Spring 5.0 and Spring Boot 2.0 updates (45 min) This talk will cover the big ticket items in the 5.0 release of Spring (including Kotlin support, @Nullable and JDK9) and provide an update on Spring Boot 2.0, which is scheduled for the end of the year. Reactive Spring (1h) Spring Framework 5.0 has been released - and it now supports reactive applications in the Spring ecosystem. During this presentation, we'll talk about the reactive foundations of Spring Framework with the Reactor project and the reactive streams specification. We'll al...

  10. Weak responses to auditory feedback perturbation during articulation in persons who stutter: evidence for abnormal auditory-motor transformation.

    Directory of Open Access Journals (Sweden)

    Shanqing Cai

    Previous empirical observations have led researchers to propose that auditory feedback (the auditory perception of self-produced sounds when speaking) functions abnormally in the speech motor systems of persons who stutter (PWS). Researchers have theorized that an important neural basis of stuttering is the aberrant integration of auditory information into incipient speech motor commands. Because of the circumstantial support for these hypotheses and the differences and contradictions between them, there is a need for carefully designed experiments that directly examine auditory-motor integration during speech production in PWS. In the current study, we used real-time manipulation of auditory feedback to directly investigate whether the speech motor system of PWS utilizes auditory feedback abnormally during articulation and to characterize potential deficits of this auditory-motor integration. Twenty-one PWS and 18 fluent control participants were recruited. Using a short-latency formant-perturbation system, we examined participants' compensatory responses to unanticipated perturbation of auditory feedback of the first formant frequency during the production of the monophthong [ε]. The PWS showed compensatory responses that were qualitatively similar to the controls' and had close-to-normal latencies (∼150 ms), but the magnitudes of their responses were substantially and significantly smaller than those of the control participants (by 47% on average, p<0.05). Measurements of auditory acuity indicate that the weaker-than-normal compensatory responses in PWS were not attributable to a deficit in low-level auditory processing. These findings are consistent with the hypothesis that stuttering is associated with functional defects in the inverse models responsible for the transformation from the domain of auditory targets and auditory error information into the domain of speech motor commands.
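
    Compensation magnitude in this kind of paradigm is commonly summarised as the produced-F1 deviation on perturbed trials relative to control trials, expressed as a fraction of the applied shift. The sketch below computes one such summary on synthetic formant tracks; the array layout, analysis window and sign convention are assumptions for illustration, not the analysis pipeline of the study.

    ```python
    import numpy as np

    def compensation_fraction(f1_perturbed, f1_control, perturbation_hz,
                              fs_track=200.0, window=(0.15, 0.35)):
        """Estimate the compensatory response to an upward F1 perturbation as
        the mean (control minus perturbed-trial) F1 difference in a
        post-perturbation window, expressed as a fraction of the applied
        shift.  Inputs are trial-averaged F1 tracks sampled at fs_track Hz;
        names, window and sign convention are illustrative assumptions."""
        t = np.arange(f1_perturbed.size) / fs_track
        mask = (t >= window[0]) & (t <= window[1])
        # Compensation opposes the perturbation, so for an upward shift the
        # produced F1 on perturbed trials falls below the control track.
        response = np.mean(f1_control[mask] - f1_perturbed[mask])
        return response / perturbation_hz

    # Synthetic example: a 300 Hz upward shift met with ~25% compensation
    fs, dur = 200.0, 0.5
    t = np.arange(int(fs * dur)) / fs
    control = np.full(t.size, 580.0)
    perturbed = control - 75.0 * (t > 0.15)     # produced F1 drops after ~150 ms
    print(round(compensation_fraction(perturbed, control, 300.0), 2))
    ```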

  11. Organization of the auditory brainstem in a lizard, Gekko gecko. I. Auditory nerve, cochlear nuclei, and superior olivary nuclei

    DEFF Research Database (Denmark)

    Tang, Y. Z.; Christensen-Dalsgaard, J.; Carr, C. E.

    2012-01-01

    We used tract tracing to reveal the connections of the auditory brainstem in the Tokay gecko (Gekko gecko). The auditory nerve has two divisions, a rostroventrally directed projection of mid- to high best-frequency fibers to the nucleus angularis (NA) and a more dorsal and caudal projection of lo... of auditory connections in lizards and archosaurs but also different processing of low- and high-frequency information in the brainstem. J. Comp. Neurol. 520:1784-1799, 2012. (C) 2011 Wiley Periodicals, Inc.

  12. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    OpenAIRE

    Zahra Shahidipour; Ahmad Geshani; Zahra Jafari; Shohreh Jalaie; Elham Khosravifard

    2014-01-01

    Background and Aim: Memory is one of the aspects of cognitive function which is widely affected among aged people. Since aging has different effects on different memory systems and few studies have investigated auditory-verbal memory function in older adults using dichotic listening techniques, the purpose of this study was to evaluate auditory-verbal memory function among older people using the Persian version of the dichotic auditory-verbal memory test. Methods: The Persian version of dic...

  13. Knowledge discovery from data streams

    CERN Document Server

    Gama, Joao

    2010-01-01

    Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams.The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks,

  14. Waste streams from reprocessing operations

    International Nuclear Information System (INIS)

    Andersson, B.; Ericsson, A.-M.

    1978-03-01

    The three main products from reprocessing operations are uranium, plutonium and vitrified high-level waste. The purpose of this report is to identify and quantify additional waste streams containing radioactive isotopes. Special emphasis is laid on Sr, Cs and the actinides. The main part, more than 99%, of both the fission products and the transuranic elements is contained in the HLW stream. Small quantities sometimes contaminate the U- and Pu-streams, and the rest is found in the medium-level waste.

  15. Formation of the First Stars and Blackholes

    Science.gov (United States)

    Yoshida, Naoki

    2018-05-01

    Cosmic reionization is thought to be initiated by the first generation of stars and blackholes. We review recent progress in theoretical studies of early structure formation. Cosmic structure formation is driven by gravitational instability of primeval density fluctuations left over from Big Bang. At early epochs, there are baryonic streaming motions with significant relative velocity with respect to dark matter. The formation of primordial gas clouds is typically delayed by the streaming motions, but then physical conditions for the so-called direct collapse blackhole formation are realized in proto-galactic halos. We present a promising model in which intermediate mass blackholes are formed as early as z = 30.

  16. Visual cortex and auditory cortex activation in early binocularly blind macaques: A BOLD-fMRI study using auditory stimuli.

    Science.gov (United States)

    Wang, Rong; Wu, Lingjie; Tang, Zuohua; Sun, Xinghuai; Feng, Xiaoyuan; Tang, Weijun; Qian, Wen; Wang, Jie; Jin, Lixin; Zhong, Yufeng; Xiao, Zebin

    2017-04-15

    Cross-modal plasticity within the visual and auditory cortices of early binocularly blind macaques is not well studied. In this study, four healthy neonatal macaques were assigned to group A (control group) or group B (binocularly blind group). Sixteen months later, blood oxygenation level-dependent functional imaging (BOLD-fMRI) was conducted to examine the activation in the visual and auditory cortices of each macaque while being tested using pure tones as auditory stimuli. The changes in the BOLD response in the visual and auditory cortices of all macaques were compared with immunofluorescence staining findings. Compared with group A, greater BOLD activity was observed in the bilateral visual cortices of group B, and this effect was particularly obvious in the right visual cortex. In addition, more activated volumes were found in the bilateral auditory cortices of group B than of group A, especially in the right auditory cortex. These findings were consistent with there being more c-Fos-positive cells in the bilateral visual and auditory cortices of group B than in group A. The visual cortices of binocularly blind macaques can thus be reorganized to process auditory stimuli after visual deprivation, and this effect is more obvious in the right than the left visual cortex. These results indicate the establishment of cross-modal plasticity within the visual and auditory cortices. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  17. Report on the ESO Workshop ''Satellites and Streams in Santiago''

    Science.gov (United States)

    Küpper, A. H. W.; Mieske, S.

    2015-09-01

    Galactic satellites and tidal streams are arguably the two most direct imprints of hierarchical structure formation in the haloes of galaxies. At this ESO workshop we sought to create the big picture of the galactic accretion process, and shed light on the interplay between satellites and streams in the Milky Way, Andromeda and beyond. The Scientific Organising Committee prepared a well-balanced programme with 60 talks and 30 poster contributions, resulting in a meeting which was greatly enjoyed by the more than 110 participants at the venue, and worldwide via Twitter (#SSS15).

  18. The self or the voice? Relative contributions of self-esteem and voice appraisal in persistent auditory hallucinations.

    Science.gov (United States)

    Fannon, Dominic; Hayward, Peter; Thompson, Neil; Green, Nicola; Surguladze, Simon; Wykes, Til

    2009-07-01

    Persistent auditory hallucinations are common, disabling and difficult to treat. Cognitive behavioural therapy is recommended in their treatment though there is limited empirical evidence of the role of cognitive factors in the formation and persistence of voices. Low self-esteem is thought to play a causal and maintaining role in a range of clinical disorders, particularly depression, which is prevalent and disabling in schizophrenia. It was hypothesized that low self-esteem is prominent in, and contributes to, depression in voice hearers. Beliefs about persistent auditory hallucinations were investigated in 82 patients using the Beliefs About Voices Questionnaire--revised in a cross-sectional design. Self-esteem and depression were assessed using standardized measures. Depression and low self-esteem were prominent as were beliefs about the omnipotence and malevolence of auditory hallucinations. Beliefs about the uncontrollability and dominance of auditory hallucinations and low self-esteem were significantly correlated with depression. Low self-esteem did not mediate the effect of beliefs about auditory hallucinations--both acted independently to contribute to depression in this sample of patients with schizophrenia and persistent auditory hallucinations. Low self-esteem is of fundamental importance to the understanding of affective disturbance in voice hearers. Therapeutic interventions need to address both the appraisal of self and hallucinations in schizophrenia. Measures which ameliorate low self-esteem can be expected to improve depressed mood in this patient group. Further elucidation of the mechanisms involved can strengthen existing models of positive psychotic symptoms and provide targets for more effective treatments.

  19. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats...... and macroinvertebrate communities of restored streams would resemble those of natural streams, while those of the channelized streams would differ from both restored and near-natural streams. Physical habitats were surveyed for substrate composition, depth, width and current velocity. Macroinvertebrates were sampled...... along 100 m reaches in each stream, in edge habitats and in riffle/run habitats located in the center of the stream. Restoration significantly altered the physical conditions and affected the interactions between stream habitat heterogeneity and macroinvertebrate diversity. The substrate in the restored...

  20. Aging increases distraction by auditory oddballs in visual, but not auditory tasks.

    Science.gov (United States)

    Leiva, Alicia; Parmentier, Fabrice B R; Andrés, Pilar

    2015-05-01

    Aging is typically considered to bring a reduced ability to resist distraction by task-irrelevant stimuli. Yet recent work suggests that this conclusion must be qualified and that the effect of aging is mitigated by whether irrelevant and target stimuli emanate from the same modality or from distinct ones. Some studies suggest that aging increases distraction within modality, while others suggest it is greater across modalities. Here we report the first study to measure the effect of aging on deviance distraction in cross-modal (auditory-visual) and uni-modal (auditory-auditory) oddball tasks. Young and older adults were asked to judge the parity of target digits (auditory or visual in distinct blocks of trials), each preceded by a task-irrelevant sound (the same tone on most trials-the standard sound-or, on rare and unpredictable trials, a burst of white noise-the deviant sound). Deviant sounds yielded distraction (longer response times relative to standard sounds) in both tasks and age groups. However, an age-related increase in distraction was observed in the cross-modal task and not in the uni-modal task. We argue that aging might affect processes involved in the switching of attention across modalities and speculate that this may be due to the slowing of this type of attentional shift or a reduction in the cognitive control required to re-orient attention toward the target's modality.

  1. ATLAS Live: Collaborative Information Streams

    Energy Technology Data Exchange (ETDEWEB)

    Goldfarb, Steven [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States); Collaboration: ATLAS Collaboration

    2011-12-23

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  2. ATLAS Live: Collaborative Information Streams

    International Nuclear Information System (INIS)

    Goldfarb, Steven

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at the web portals, using the CERN SSO system.

  3. ATLAS Live: Collaborative Information Streams

    CERN Document Server

    Goldfarb, S; The ATLAS collaboration

    2011-01-01

    I report on a pilot project launched in 2010 focusing on facilitating communication and information exchange within the ATLAS Collaboration, through the combination of digital signage software and webcasting. The project, called ATLAS Live, implements video streams of information, ranging from detailed detector and data status to educational and outreach material. The content, including text, images, video and audio, is collected, visualised and scheduled using digital signage software. The system is robust and flexible, utilizing scripts to input data from remote sources, such as the CERN Document Server, Indico, or any available URL, and to integrate these sources into professional-quality streams, including text scrolling, transition effects, inter and intra-screen divisibility. Information is published via the encoding and webcasting of standard video streams, viewable on all common platforms, using a web browser or other common video tool. Authorisation is enforced at the level of the streaming and at th...

  4. STREAMS - Technology Programme. Yearbook 2003

    International Nuclear Information System (INIS)

    2003-01-01

    The STREAMS Technology Programme addresses municipal waste. Municipal waste is composed of waste from households and small businesses. The programme focuses on five areas: waste prevention; collection, transportation, and management of waste streams; waste treatment technologies; waste recycling into raw materials and new products; and landfill technologies. The development projects of the STREAMS Programme utilize a number of different technologies, such as biotechnology, information technology, materials technology, measurement and analysis, and automation technology. Finnish expertise in materials recycling technologies and related electronics and information technology is extremely high on a worldwide scale even though the companies represent SMEs. Started in 2001, the STREAMS programme has a total volume of 27 million euros, half of which is funded by Tekes. The programme runs through the end of 2004. (author)

  5. Cellular Subcompartments through Cytoplasmic Streaming.

    Science.gov (United States)

    Pieuchot, Laurent; Lai, Julian; Loh, Rachel Ann; Leong, Fong Yew; Chiam, Keng-Hwee; Stajich, Jason; Jedd, Gregory

    2015-08-24

    Cytoplasmic streaming occurs in diverse cell types, where it generally serves a transport function. Here, we examine streaming in multicellular fungal hyphae and identify an additional function wherein regimented streaming forms distinct cytoplasmic subcompartments. In the hypha, cytoplasm flows directionally from cell to cell through septal pores. Using live-cell imaging and computer simulations, we identify a flow pattern that produces vortices (eddies) on the upstream side of the septum. Nuclei can be immobilized in these microfluidic eddies, where they form multinucleate aggregates and accumulate foci of the HDA-2 histone deacetylase-associated factor, SPA-19. Pores experiencing flow degenerate in the absence of SPA-19, suggesting that eddy-trapped nuclei function to reinforce the septum. Together, our data show that eddies comprise a subcellular niche favoring nuclear differentiation and that subcompartments can be self-organized as a consequence of regimented cytoplasmic streaming. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. On-stream analysis systems

    International Nuclear Information System (INIS)

    Howarth, W.J.; Watt, J.S.

    1982-01-01

    An outline of some commercially available on-stream analysis systems is given. Systems based on x-ray tube/crystal spectrometers, scintillation detectors, proportional detectors and solid-state detectors are discussed.

  7. Nitrogen saturation in stream ecosystems

    OpenAIRE

    Earl, S. R.; Valett, H. M.; Webster, J. R.

    2006-01-01

    The concept of nitrogen (N) saturation has organized the assessment of N loading in terrestrial ecosystems. Here we extend the concept to lotic ecosystems by coupling Michaelis-Menten kinetics and nutrient spiraling. We propose a series of saturation response types, which may be used to characterize the proximity of streams to N saturation. We conducted a series of short-term N releases using a 15N-labelled nitrate tracer (15NO3-) to measure uptake. Experiments were conducted in streams spanning a gradient ...
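
    A minimal sketch of the Michaelis-Menten uptake relation mentioned above, with hypothetical parameter values (Umax and Km are assumed, not taken from the study): areal uptake U = Umax * C / (Km + C) saturates at high concentration, so the uptake velocity vf = U / C declines as C rises, which is one signature of a stream approaching N saturation.

    # Michaelis-Menten nutrient uptake with assumed parameters (illustration only).
    U_MAX = 100.0   # maximum areal uptake, ug N m^-2 min^-1 (assumed)
    K_M = 50.0      # half-saturation constant, ug N L^-1 (assumed)

    def uptake(c: float) -> float:
        """Areal uptake U at nitrate concentration c (ug N L^-1)."""
        return U_MAX * c / (K_M + c)

    for c in (5.0, 25.0, 50.0, 100.0, 400.0):
        u = uptake(c)
        vf = u / c  # uptake velocity falls as concentration approaches saturation
        print(f"C={c:6.1f}  U={u:6.1f}  vf={vf:5.2f}")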

  8. Time computations in anuran auditory systems

    Directory of Open Access Journals (Sweden)

    Gary J Rose

    2014-05-01

    Full Text Available Temporal computations are important in the acoustic communication of anurans. In many cases, calls between closely related species are nearly identical spectrally but differ markedly in temporal structure. Depending on the species, calls can differ in pulse duration, shape and/or rate (i.e., amplitude modulation), direction and rate of frequency modulation, and overall call duration. Also, behavioral studies have shown that anurans are able to discriminate between calls that differ in temporal structure. In the peripheral auditory system, temporal information is coded primarily in the spatiotemporal patterns of activity of auditory-nerve fibers. However, major transformations in the representation of temporal information occur in the central auditory system. In this review I summarize recent advances in understanding how temporal information is represented in the anuran midbrain, with particular emphasis on mechanisms that underlie selectivity for pulse duration and pulse rate (i.e., intervals between onsets of successive pulses). Two types of neurons have been identified that show selectivity for pulse rate: long-interval cells respond well to slow pulse rates but fail to spike or respond phasically to fast pulse rates; conversely, interval-counting neurons respond to intermediate or fast pulse rates, but only after a threshold number of pulses, presented at optimal intervals, have occurred. Duration-selectivity is manifest as short-pass, band-pass or long-pass tuning. Whole-cell patch recordings, in vivo, suggest that excitation and inhibition are integrated in diverse ways to generate temporal selectivity. In many cases, activity-related enhancement or depression of excitatory or inhibitory processes appear to contribute to selective responses.
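
    A toy sketch (a counting rule, not a biophysical model) of the interval-counting behaviour described above: the unit accumulates pulses only while successive inter-pulse intervals stay inside an optimal window, resets on a deviant interval, and responds once a threshold count is reached. The window and threshold values below are hypothetical.

    # Toy interval-counting unit: responds only after `threshold` pulses arrive
    # at inter-pulse intervals inside the optimal window (times in ms).
    def interval_counter(pulse_times, window=(20.0, 40.0), threshold=4):
        lo, hi = window
        count, responses = 1, []
        for prev, cur in zip(pulse_times, pulse_times[1:]):
            interval = cur - prev
            count = count + 1 if lo <= interval <= hi else 1  # reset on a bad interval
            if count >= threshold:
                responses.append(cur)
        return responses

    fast_train = [i * 30.0 for i in range(10)]   # 30 ms intervals: inside the window
    slow_train = [i * 120.0 for i in range(10)]  # 120 ms intervals: never accumulates
    print(interval_counter(fast_train))  # responds from the 4th correctly timed pulse on
    print(interval_counter(slow_train))  # []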

  9. The Northeast Stream Quality Assessment

    Science.gov (United States)

    Van Metre, Peter C.; Riva-Murray, Karen; Coles, James F.

    2016-04-22

    In 2016, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) is assessing stream quality in the northeastern United States. The goal of the Northeast Stream Quality Assessment (NESQA) is to assess the quality of streams in the region by characterizing multiple water-quality factors that are stressors to aquatic life and evaluating the relation between these stressors and biological communities. The focus of NESQA in 2016 will be on the effects of urbanization and agriculture on stream quality in all or parts of eight states: Connecticut, Massachusetts, New Hampshire, New Jersey, New York, Pennsylvania, Rhode Island, and Vermont.Findings will provide the public and policymakers with information about the most critical factors affecting stream quality, thus providing insights about possible approaches to protect the health of streams in the region. The NESQA study will be the fourth regional study conducted as part of NAWQA and will be of similar design and scope to the first three, in the Midwest in 2013, the Southeast in 2014, and the Pacific Northwest in 2015 (http://txpub.usgs.gov/RSQA/).

  10. The representation of order information in auditory-verbal short-term memory.

    Science.gov (United States)

    Kalm, Kristjan; Norris, Dennis

    2014-05-14

    Here we investigate how order information is represented in auditory-verbal short-term memory (STM). We used fMRI and a serial recall task to dissociate neural activity patterns representing the phonological properties of the items stored in STM from the patterns representing their order. For this purpose, we analyzed fMRI activity patterns elicited by different item sets and different orderings of those items. These fMRI activity patterns were compared with the predictions made by positional and chaining models of serial order. The positional models encode associations between items and their positions in a sequence, whereas the chaining models encode associations between successive items and retain no position information. We show that a set of brain areas in the postero-dorsal stream of auditory processing store associations between items and order as predicted by a positional model. The chaining model of order representation generates a different pattern similarity prediction, which was shown to be inconsistent with the fMRI data. Our results thus favor a neural model of order representation that stores item codes, position codes, and the mapping between them. This study provides the first fMRI evidence for a specific model of order representation in the human brain. Copyright © 2014 the authors 0270-6474/14/346879-08$15.00/0.
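
    A minimal sketch of the contrast between the two model classes, using hypothetical vector codes rather than the authors' fMRI analysis: a positional code binds each item to a position vector, a chaining code binds each item to its successor, and the two therefore predict different similarity between sequences that share items but differ in order.

    # Positional vs. chaining codes for short sequences (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    items = {ch: rng.standard_normal(32) for ch in "ABCD"}
    positions = [rng.standard_normal(32) for _ in range(4)]

    def positional_code(seq):
        # Sum of item-by-position bindings (outer products), flattened to a vector.
        return sum(np.outer(items[ch], positions[i]) for i, ch in enumerate(seq)).ravel()

    def chaining_code(seq):
        # Sum of successive item-to-item bindings; no position information retained.
        return sum(np.outer(items[a], items[b]) for a, b in zip(seq, seq[1:])).ravel()

    def similarity(u, v):
        return float(np.corrcoef(u, v)[0, 1])

    s1, s2 = "ABCD", "ABDC"  # same items, last two swapped
    print("positional:", similarity(positional_code(s1), positional_code(s2)))
    print("chaining:  ", similarity(chaining_code(s1), chaining_code(s2)))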

  11. Bottom-up influences of voice continuity in focusing selective auditory attention.

    Science.gov (United States)

    Bressler, Scott; Masud, Salwa; Bharadwaj, Hari; Shinn-Cunningham, Barbara

    2014-01-01

    Selective auditory attention causes a relative enhancement of the neural representation of important information and suppression of the neural representation of distracting sound, which enables a listener to analyze and interpret information of interest. Some studies suggest that in both vision and in audition, the "unit" on which attention operates is an object: an estimate of the information coming from a particular external source out in the world. In this view, which object ends up in the attentional foreground depends on the interplay of top-down, volitional attention and stimulus-driven, involuntary attention. Here, we test the idea that auditory attention is object based by exploring whether continuity of a non-spatial feature (talker identity, a feature that helps acoustic elements bind into one perceptual object) also influences selective attention performance. In Experiment 1, we show that perceptual continuity of target talker voice helps listeners report a sequence of spoken target digits embedded in competing reversed digits spoken by different talkers. In Experiment 2, we provide evidence that this benefit of voice continuity is obligatory and automatic, as if voice continuity biases listeners by making it easier to focus on a subsequent target digit when it is perceptually linked to what was already in the attentional foreground. Our results support the idea that feature continuity enhances streaming automatically, thereby influencing the dynamic processes that allow listeners to successfully attend to objects through time in the cacophony that assails our ears in many everyday settings.

  12. Auditory sensory ("echoic") memory dysfunction in schizophrenia.

    Science.gov (United States)

    Strous, R D; Cowan, N; Ritter, W; Javitt, D C

    1995-10-01

    Studies of working memory dysfunction in schizophrenia have focused largely on prefrontal components. This study investigated the integrity of auditory sensory ("echoic") memory, a component that shows little dependence on prefrontal functioning. Echoic memory was investigated in 20 schizophrenic subjects and 20 age- and IQ-matched normal comparison subjects with the use of nondelayed and delayed tone matching. Schizophrenic subjects were markedly impaired in their ability to match two tones after an extremely brief delay between them (300 msec) but were unimpaired when there was no delay between tones. Working memory dysfunction in schizophrenia affects brain regions outside the prefrontal cortex as well as within.

  13. Multiprofessional committee on auditory health: COMUSA.

    Science.gov (United States)

    Lewis, Doris Ruthy; Marone, Silvio Antonio Monteiro; Mendes, Beatriz C A; Cruz, Oswaldo Laercio Mendonça; Nóbrega, Manoel de

    2010-01-01

    Created in 2007, COMUSA is a multiprofessional committee comprising speech therapy, otology, otorhinolaryngology and pediatrics with the aim of debating and countersigning auditory health actions for neonatal, lactating, preschool and school children, adolescents, adults and elderly persons. COMUSA includes representatives of the Brazilian Audiology Academy (Academia Brasileira de Audiologia or ABA), the Brazilian Otorhinolaryngology and Cervicofacial Surgery Association (Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico Facial or ABORL), the Brazilian Phonoaudiology Society (Sociedade Brasileira de Fonoaudiologia or SBFa), the Brazilian Otology Society (Sociedade Brasileira de Otologia or SBO), and the Brazilian Pediatrics Society (Sociedade Brasileira de Pediatria or SBP).

  14. Left auditory cortex gamma synchronization and auditory hallucination symptoms in schizophrenia

    Directory of Open Access Journals (Sweden)

    Shenton Martha E

    2009-07-01

    Full Text Available Abstract Background Oscillatory electroencephalogram (EEG) abnormalities may reflect neural circuit dysfunction in neuropsychiatric disorders. Previously we have found positive correlations between the phase synchronization of beta and gamma oscillations and hallucination symptoms in schizophrenia patients. These findings suggest that the propensity for hallucinations is associated with an increased tendency for neural circuits in sensory cortex to enter states of oscillatory synchrony. Here we tested this hypothesis by examining whether the 40 Hz auditory steady-state response (ASSR) generated in the left primary auditory cortex is positively correlated with auditory hallucination symptoms in schizophrenia. We also examined whether the 40 Hz ASSR deficit in schizophrenia was associated with cross-frequency interactions. Sixteen healthy control subjects (HC) and 18 chronic schizophrenia patients (SZ) listened to 40 Hz binaural click trains. The EEG was recorded from 60 electrodes and average-referenced offline. A 5-dipole model was fit from the HC grand average ASSR, with 2 pairs of superior temporal dipoles and a deep midline dipole. Time-frequency decomposition was performed on the scalp EEG and source data. Results Phase locking factor (PLF) and evoked power were reduced in SZ at fronto-central electrodes, replicating prior findings. PLF was reduced in SZ for non-homologous right and left hemisphere sources. Left hemisphere source PLF in SZ was positively correlated with auditory hallucination symptoms, and was modulated by delta phase. Furthermore, the correlations between source evoked power and PLF found in HC were reduced in SZ for the LH sources. Conclusion These findings suggest that differential neural circuit abnormalities may be present in the left and right auditory cortices in schizophrenia. In addition, they provide further support for the hypothesis that hallucinations are related to cortical hyperexcitability, which is manifested by
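
    The phase locking factor named in the abstract is the inter-trial phase coherence at the stimulation frequency. Below is a short sketch of that computation on synthetic trials; the sampling rate, trial count and noise levels are assumptions, not the study's recording parameters.

    # Phase-locking factor (inter-trial phase coherence) at 40 Hz on synthetic data.
    import numpy as np

    FS = 500                      # sampling rate in Hz (assumed)
    F_STIM = 40.0                 # click-train / analysis frequency
    N_TRIALS, N_SAMP = 100, 500
    t = np.arange(N_SAMP) / FS

    rng = np.random.default_rng(1)
    # Each trial: a 40 Hz response with roughly consistent phase, plus noise.
    trials = np.array([
        np.sin(2 * np.pi * F_STIM * t + rng.normal(0, 0.4)) + rng.normal(0, 1.0, N_SAMP)
        for _ in range(N_TRIALS)
    ])

    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(N_SAMP, d=1 / FS)
    bin_40 = int(np.argmin(np.abs(freqs - F_STIM)))

    phases = np.angle(spectra[:, bin_40])
    plf = np.abs(np.mean(np.exp(1j * phases)))  # 0 = random phase, 1 = perfect locking
    print(f"PLF at {freqs[bin_40]:.0f} Hz: {plf:.2f}")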

  15. [Communication and auditory behavior obtained by auditory evoked potentials in mammals, birds, amphibians, and reptiles].

    Science.gov (United States)

    Arch-Tirado, Emilio; Collado-Corona, Miguel Angel; Morales-Martínez, José de Jesús

    2004-01-01

    amphibians: Rana catesbeiana (bullfrog, 30 animals); reptiles: Sceloporus torquatus (common small lizard, 22 animals); birds: Columba livia (common dove, 20 animals); and mammals: Cavia porcellus (guinea pig, 20 animals). With regard to housing, all animals were maintained at the Institute of Human Communication Disorders, were fed with food appropriate to each species, and had water available ad libitum. For the recording of brainstem auditory evoked potentials, amphibians, birds, and mammals were anesthetized with ketamine (20, 25, and 50 mg/kg, respectively) by injection; reptiles were anesthetized by cooling (6 degrees C). Needle electrodes were placed along the midsagittal line between the ears and eyes, behind the right ear, and behind the left ear. Stimulation was delivered in a quiet room through a loudspeaker in free field. The signal was filtered between 100 and 3,000 Hz and analyzed with an evoked-potential system (Racia APE 78). In amphibians, the evoked waves showed greater latency than in the other species. In reptiles, latencies were shorter than in amphibians. In birds, still lower latency values were observed, while in guinea pigs latencies were greater than in doves; however, guinea pigs responded to stimulation at 10 dB, the best auditory threshold of the four species studied. Finally, it was corroborated that the auditory threshold of each species decreases as one advances along the phylogenetic scale. From these recordings, we can say that brainstem evoked responses become more complex and show lower absolute latencies as we advance along the phylogenetic scale, and the auditory thresholds obtained improve correspondingly among the studied species. These data indicate that the processing of auditory information is more complex in more

  16. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Science.gov (United States)

    Młynarski, Wiktor

    2014-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform—Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment. PMID:24639644
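
    A minimal sketch of the general approach described above, applied to synthetic data: concatenated left/right log-spectrogram vectors are fed to scikit-learn's FastICA. The interaural delay, window sizes and component count are assumptions chosen for illustration; this is not the authors' hierarchical model or their naturalistic recordings.

    # ICA on synthetic binaural spectrogram features (illustration only).
    import numpy as np
    from scipy.signal import spectrogram
    from sklearn.decomposition import FastICA

    FS = 16000
    rng = np.random.default_rng(2)

    def binaural_feature(itd_samples, n_samples=4000):
        """One synthetic 'scene': noise reaching the right ear with a small delay."""
        left = rng.standard_normal(n_samples)
        right = np.roll(left, itd_samples) + 0.1 * rng.standard_normal(n_samples)
        feats = []
        for sig in (left, right):
            _, _, s = spectrogram(sig, fs=FS, nperseg=128)
            feats.append(np.log(s + 1e-8).ravel())
        return np.concatenate(feats)

    # Dataset of binaural spectrogram vectors with varying interaural delays.
    X = np.array([binaural_feature(int(rng.integers(-10, 11))) for _ in range(200)])
    ica = FastICA(n_components=16, random_state=0, max_iter=500)
    activations = ica.fit_transform(X)      # per-scene activations of learned features
    print(activations.shape, ica.components_.shape)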

  17. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    Directory of Open Access Journals (Sweden)

    Wiktor eMlynarski

    2014-03-01

    Full Text Available To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient coding hypothesis explains formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes for natural binaural sounds. Firstly, it is demonstrated that a linear efficient coding transform - Independent Component Analysis (ICA) trained on spectrograms of naturalistic simulated binaural sounds extracts spatial information present in the signal. A simple hierarchical ICA extension allowing for decoding of sound position is proposed. Furthermore, it is shown that units revealing spatial selectivity can be learned from a binaural recording of a natural auditory scene. In both cases a relatively small subpopulation of learned spectrogram features suffices to perform accurate sound localization. Representation of the auditory space is therefore learned in a purely unsupervised way by maximizing the coding efficiency and without any task-specific constraints. These results imply that efficient coding is a useful strategy for learning structures which allow for making behaviorally vital inferences about the environment.

  18. Auditory interfaces in automated driving: an international survey

    NARCIS (Netherlands)

    Bazilinskyy, P.; de Winter, J.C.F.

    2015-01-01

    This study investigated people's opinions on auditory interfaces in contemporary cars and their willingness to be exposed to auditory feedback in automated driving. We used an Internet-based survey to collect 1,205 responses from 91 countries. The respondents stated their attitudes towards two

  19. Prevalence and correlates of auditory vocal hallucinations in middle childhood

    NARCIS (Netherlands)

    Bartels-Velthuis, A.A.; Jenner, J.A.; van de Willige, G.; van Os, J.; Wiersma, D.

    Background Hearing voices occurs in middle childhood, but little is known about prevalence, aetiology and immediate consequences. Aims To investigate prevalence, developmental risk factors and behavioural correlates of auditory vocal hallucinations in 7- and 8-year-olds. Method Auditory vocal

  20. Visual and Auditory Input in Second-Language Speech Processing

    Science.gov (United States)

    Hardison, Debra M.

    2010-01-01

    The majority of studies in second-language (L2) speech processing have involved unimodal (i.e., auditory) input; however, in many instances, speech communication involves both visual and auditory sources of information. Some researchers have argued that multimodal speech is the primary mode of speech perception (e.g., Rosenblum 2005). Research on…