WorldWideScience

Sample records for auditory stream formation

  1. The role of temporal coherence in auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt

    The ability to perceptually segregate concurrent sound sources and focus one’s attention on a single source at a time is essential for the ability to use acoustic information. While perceptual experiments have determined a range of acoustic cues that help facilitate auditory stream segregation, it is not clear how the auditory system realizes the task. This thesis presents a study of the mechanisms involved in auditory stream segregation. Through a combination of psychoacoustic experiments, designed to characterize the influence of acoustic cues on auditory stream formation, and computational models of auditory processing, the role of auditory preprocessing and temporal coherence in auditory stream formation was evaluated. The computational model presented in this study assumes that auditory stream segregation occurs when sounds stimulate non-overlapping neural populations in a temporally incoherent…

  2. Effects of Context on Auditory Stream Segregation

    Science.gov (United States)

    Snyder, Joel S.; Carter, Olivia L.; Lee, Suh-Kyung; Hannon, Erin E.; Alain, Claude

    2008-01-01

    The authors examined the effect of preceding context on auditory stream segregation. Low tones (A), high tones (B), and silences (-) were presented in an ABA-pattern. Participants indicated whether they perceived 1 or 2 streams of tones. The A tone frequency was fixed, and the B tone was the same as the A tone or had 1 of 3 higher frequencies.…
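
    The ABA- paradigm described above can be sketched as a simple tone-sequence generator. The sample rate, tone duration, and frequencies below are illustrative choices, not the stimulus parameters of the study:

```python
import math

def aba_cycle(f_a=500.0, semitones=4, tone_ms=100, sr=16000):
    """One ABA- cycle: pure tones A, B, A, then a silent slot ('-').

    f_a is the fixed A-tone frequency; B lies `semitones` above A.
    All parameter values are illustrative, not those of the study.
    """
    f_b = f_a * 2.0 ** (semitones / 12.0)     # equal-tempered offset
    n = int(sr * tone_ms / 1000)              # samples per slot

    def tone(freq):
        return [math.sin(2 * math.pi * freq * t / sr) for t in range(n)]

    return tone(f_a) + tone(f_b) + tone(f_a) + [0.0] * n

def aba_sequence(n_cycles=10, **kwargs):
    """Repeat the ABA- cycle to build a streaming sequence."""
    out = []
    for _ in range(n_cycles):
        out += aba_cycle(**kwargs)
    return out
```

    Listeners judging such a sequence typically report either a single galloping ABA- rhythm (one stream) or separate A-A-A and B-B-B sequences (two streams), depending mainly on the A-B frequency separation.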

  3. The effects of rhythm and melody on auditory stream segregation.

    Science.gov (United States)

    Szalárdy, Orsolya; Bendixen, Alexandra; Böhm, Tamás M; Davies, Lucy A; Denham, Susan L; Winkler, István

    2014-03-01

    While many studies have assessed the efficacy of similarity-based cues for auditory stream segregation, much less is known about whether and how the larger-scale structure of sound sequences supports stream formation and the choice of sound organization. Two experiments investigated the effects of musical melody and rhythm on the segregation of two interleaved tone sequences. The two sets of tones fully overlapped in pitch range but differed from each other in interaural time and intensity. Unbeknownst to the listeners, each of the interleaved sequences was separately created from the notes of a different song. In different experimental conditions, the notes and/or their timing either followed those of the songs or were scrambled or, in the case of timing, set to be isochronous. Listeners were asked to continuously report whether they heard a single coherent sequence (integrated) or two concurrent streams (segregated). Although temporal overlap between tones from the two streams proved to be the strongest cue for stream segregation, significant effects of tonality and familiarity with the songs were also observed. These results suggest that regular temporal patterns are utilized as cues in auditory stream segregation and that long-term memory is involved in this process.

  4. Auditory Streaming as an Online Classification Process with Evidence Accumulation.

    Science.gov (United States)

    Barniv, Dana; Nelken, Israel

    2015-01-01

    When human subjects hear a sequence of two alternating pure tones, they often perceive it in one of two ways: as one integrated sequence (a single "stream" consisting of the two tones), or as two segregated sequences, one sequence of low tones perceived separately from another sequence of high tones (two "streams"). Perception of this stimulus is thus bistable. Moreover, subjects report ongoing switching between the two percepts: unless the frequency separation is large, initial perception tends to be of integration, followed by toggling between integration and segregation phases. The process of stream formation is loosely named "auditory streaming". Auditory streaming is believed to be a manifestation of the human ability to analyze an auditory scene, i.e. to attribute portions of the incoming sound sequence to distinct sound-generating entities. Previous studies suggested that the durations of the successive integration and segregation phases are statistically independent. This independence plays an important role in current models of bistability. Contrary to this, we show here, by analyzing a large set of data, that subsequent phase durations are positively correlated. To account together for bistability and positive correlation between subsequent durations, we suggest that streaming is a consequence of an evidence accumulation process. Evidence for segregation is accumulated during the integration phase and vice versa; a switch to the opposite percept occurs stochastically based on this evidence. During a long phase, a large amount of evidence for the opposite percept is accumulated, resulting in a long subsequent phase. In contrast, a short phase is followed by another short phase. We implement these concepts using a probabilistic model that shows both bistability and correlations similar to those observed experimentally.
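
    The evidence-accumulation account can be sketched as a toy simulation. The accumulation gain, hazard function, and carry-over rule below are hypothetical choices made only to reproduce the qualitative behaviour the abstract describes (stochastic switching with positively correlated successive phase durations), not the authors' actual model:

```python
import math
import random

def simulate_phases(n_phases=3000, gain=0.05, noise_sd=0.02,
                    steepness=5.0, max_hazard=0.1, carry=0.6, seed=7):
    """Toy evidence-accumulation model of bistable streaming.

    While one percept holds, evidence for the opposite percept accumulates;
    a switch occurs stochastically, with a per-step hazard that grows as
    the counter-evidence overtakes the support for the current percept.
    The evidence that triggered the switch then backs the new percept, so
    a long phase (much accumulated evidence) tends to be followed by
    another long phase.  All parameter values are hypothetical.
    """
    rng = random.Random(seed)
    durations = []
    support = 0.5                        # evidence backing the current percept
    for _ in range(n_phases):
        counter, t = 0.0, 0              # evidence for the opposite percept
        while True:
            t += 1
            counter += max(0.0, gain + rng.gauss(0.0, noise_sd))
            hazard = max_hazard / (1.0 + math.exp(-steepness * (counter - support)))
            if rng.random() < hazard:
                break
        durations.append(t)
        support = carry * counter        # winner's evidence backs the new percept
    return durations

def lag1_correlation(xs):
    """Pearson correlation between successive phase durations."""
    a, b = xs[:-1], xs[1:]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)
```

    With these settings the simulated durations vary from phase to phase and the lag-1 correlation between successive durations comes out positive, the qualitative signature the paper reports.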

  7. Effects of sequential streaming on auditory masking using psychoacoustics and auditory evoked potentials.

    Science.gov (United States)

    Verhey, Jesko L; Ernst, Stephan M A; Yasin, Ifat

    2012-03-01

    The present study was aimed at investigating the relationship between the mismatch negativity (MMN) and psychoacoustical effects of sequential streaming on comodulation masking release (CMR). The influence of sequential streaming on CMR was investigated using a psychoacoustical alternative forced-choice procedure and electroencephalography (EEG) for the same group of subjects. The psychoacoustical data showed that adding precursors comprising only off-signal-frequency maskers abolished the CMR. Complementary EEG data showed an MMN irrespective of the masker envelope correlation across frequency when only the off-signal-frequency masker components were present. The addition of such precursors promotes a separation of the on- and off-frequency masker components into distinct auditory objects, preventing the auditory system from using comodulation as an additional cue. A frequency-specific adaptation changing the representation of the flanking bands in the streaming conditions may also contribute to the reduction of CMR; however, it is unlikely that adaptation is the primary reason for the streaming effect. A neurophysiological correlate of sequential streaming was found in the EEG data using the MMN, but the magnitude of the MMN was not correlated with the audibility of the signal in the CMR experiments. Dipole source analysis indicated that different cortical regions are involved in processing auditory streaming and modulation detection. In particular, the neural sources for processing auditory streaming include cortical regions involved in decision-making.

  8. Auditory stream segregation in children with Asperger syndrome

    Science.gov (United States)

    Lepistö, T.; Kuitunen, A.; Sussman, E.; Saalasti, S.; Jansson-Verkasalo, E.; Nieminen-von Wendt, T.; Kujala, T.

    2009-01-01

    Individuals with Asperger syndrome (AS) often have difficulties in perceiving speech in noisy environments. The present study investigated whether this might be explained by deficient auditory stream segregation ability, that is, by a more basic difficulty in separating simultaneous sound sources from each other. To this end, auditory event-related brain potentials were recorded from a group of school-aged children with AS and a group of age-matched controls using a paradigm specifically developed for studying stream segregation. Differences in the amplitudes of ERP components were found between groups only in the stream segregation conditions and not for simple feature discrimination. The results indicated that children with AS have difficulties in segregating concurrent sound streams, which ultimately may contribute to the difficulties in speech-in-noise perception. PMID:19751798

  9. Interaction of streaming and attention in human auditory cortex.

    Science.gov (United States)

    Gutschalk, Alexander; Rupp, André; Dykstra, Andrew R

    2015-01-01

    Serially presented tones are sometimes segregated into two perceptually distinct streams. An ongoing debate is whether this basic streaming phenomenon reflects automatic processes or requires attention focused to the stimuli. Here, we examined the influence of focused attention on streaming-related activity in human auditory cortex using magnetoencephalography (MEG). Listeners were presented with a dichotic paradigm in which left-ear stimuli consisted of canonical streaming stimuli (ABA_ or ABAA) and right-ear stimuli consisted of a classical oddball paradigm. In phase one, listeners were instructed to attend the right-ear oddball sequence and detect rare deviants. In phase two, they were instructed to attend the left ear streaming stimulus and report whether they heard one or two streams. The frequency difference (ΔF) of the sequences was set such that the smallest and largest ΔF conditions generally induced one- and two-stream percepts, respectively. Two intermediate ΔF conditions were chosen to elicit bistable percepts (i.e., either one or two streams). Attention enhanced the peak-to-peak amplitude of the P1-N1 complex, but only for ambiguous ΔF conditions, consistent with the notion that automatic mechanisms for streaming tightly interact with attention and that the latter is of particular importance for ambiguous sound sequences.

  10. Time course of auditory streaming: Do CI users differ from normal-hearing listeners?

    Directory of Open Access Journals (Sweden)

    Martin Böckmann-Barthel

    2014-07-01

    In a complex acoustical environment with multiple sound sources, the auditory system uses streaming as a tool to organize the incoming sounds into one or more streams, depending on the stimulus parameters. Streaming is commonly studied with alternating sequences of signals, often tones of different frequencies. The present study investigates stream segregation in cochlear implant (CI) users, in whom hearing is restored by electrical stimulation of the auditory nerve. CI users listened to 30-s long sequences of alternating A and B harmonic complexes at four different fundamental frequency separations, ranging from 2 to 14 semitones. They had to indicate, as promptly as possible after sequence onset, whether they perceived one stream or two streams and, in addition, any changes of the percept throughout the rest of the sequence. The conventional view is that the initial percept is always that of a single stream, which may after some time change to a percept of two streams. This general build-up hypothesis has recently been challenged on the basis of a new analysis of data from normal-hearing listeners, which showed a build-up response only for an intermediate frequency separation. Using the same experimental paradigm and analysis, the present study found that the results of CI users agree with those of normal-hearing listeners: (i) the probability that the first decision was a one-stream percept decreased, and that of a two-stream percept increased, as the frequency separation increased, and (ii) a build-up was found only for 6 semitones. Only the time elapsed before the listeners made their first decision about the percept was prolonged compared to normal-hearing listeners. The similarity between the data of the CI users and the normal-hearing listeners indicates that the quality of stream formation is similar in these groups of listeners.

  11. Neural dynamics of phonological processing in the dorsal auditory stream.

    Science.gov (United States)

    Liebenthal, Einat; Sabri, Merav; Beardsley, Scott A; Mangalathu-Arumana, Jain; Desai, Anjali

    2013-09-25

    Neuroanatomical models hypothesize a role for the dorsal auditory pathway in phonological processing as a feedforward efferent system (Davis and Johnsrude, 2007; Rauschecker and Scott, 2009; Hickok et al., 2011). However, the functional organization of the pathway, in terms of the time course of interactions among auditory, somatosensory, and motor regions, and its hemispheric lateralization pattern are largely unknown. Here, ambiguous duplex syllables, with elements presented dichotically at varying interaural asynchronies, were used to parametrically modulate phonological processing and associated neural activity in the human dorsal auditory stream. Subjects performed syllable and chirp identification tasks, while event-related potentials and functional magnetic resonance images were concurrently collected. Joint independent component analysis was applied to fuse the neuroimaging data and study the neural dynamics of brain regions involved in phonological processing with high spatiotemporal resolution. Results revealed a highly interactive neural network associated with phonological processing, composed of functional fields in the posterior superior temporal gyrus (pSTG), inferior parietal lobule (IPL), and ventral central sulcus (vCS) that were engaged early and almost simultaneously (at 80-100 ms), consistent with a direct influence of articulatory somatomotor areas on phonemic perception. Left hemispheric lateralization was observed 250 ms earlier in IPL and vCS than in pSTG, suggesting that functional specialization of somatomotor (and not auditory) areas determined lateralization in the dorsal auditory pathway. The temporal dynamics of the dorsal auditory pathway described here offer a new understanding of its functional organization and demonstrate that temporal information is essential to resolve neural circuits underlying complex behaviors.

  12. A Detection-Theoretic Analysis of Auditory Streaming and Its Relation to Auditory Masking

    Directory of Open Access Journals (Sweden)

    An-Chieh Chang

    2016-09-01

    Research on hearing has long been challenged with understanding our exceptional ability to hear out individual sounds in a mixture (the so-called cocktail party problem). Two general approaches to the problem have been taken using sequences of tones as stimuli. The first has focused on our tendency to hear sequences, sufficiently separated in frequency, split into separate cohesive streams (auditory streaming). The second has focused on our ability to detect a change in one sequence while ignoring all others (auditory masking). The two phenomena are clearly related, but that relation has never been evaluated analytically. This article offers a detection-theoretic analysis of the relation between multitone streaming and masking that underscores the expected similarities and differences between these phenomena and the predicted outcome of experiments in each case. The key to establishing this relation is the function linking performance to the information divergence of the tone sequences, DKL (a measure of the statistical separation of their parameters). A strong prediction is that streaming and masking of tones will be a common function of DKL, provided that the statistical properties of the sequences are symmetric. Results of experiments are reported supporting this prediction.
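
    For intuition, the divergence DKL can be written out for the simplest case, two univariate Gaussian parameter distributions. The choice of Gaussians here is an illustrative assumption, not the full analysis of the article:

```python
import math

def gaussian_kl(mu_p, sd_p, mu_q, sd_q):
    """D_KL(P || Q) for univariate Gaussians P = N(mu_p, sd_p^2) and
    Q = N(mu_q, sd_q^2): one simple way to quantify the statistical
    separation of two tone-sequence parameter distributions."""
    return (math.log(sd_q / sd_p)
            + (sd_p ** 2 + (mu_p - mu_q) ** 2) / (2.0 * sd_q ** 2)
            - 0.5)
```

    Identical distributions give a divergence of zero, and the divergence grows as the sequence parameters separate. For equal standard deviations the expression reduces to (mu_p − mu_q)²/(2σ²), i.e. d′²/2, which is one way to see why a single detection-theoretic quantity could serve as a common axis for both streaming and masking.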

  13. Attentional Resources Are Needed for Auditory Stream Segregation in Aging

    Directory of Open Access Journals (Sweden)

    Elizabeth Dinces

    2017-12-01

    The ability to select sound streams from background noise becomes challenging with age, even with normal peripheral auditory functioning. Reduced stream segregation ability has been reported in older compared to younger adults; however, the reason for this difference is still unknown. The current study investigated the hypothesis that automatic sound processing is impaired with aging, which then contributes to the difficulty of actively selecting subsets of sounds in noisy environments. We presented a simple intensity-oddball sequence in various conditions with irrelevant background sounds while recording EEG. The ability to detect the oddball tones depended on the ability to automatically or actively segregate the sounds into frequency streams. Listeners were able to actively segregate sounds to perform the loudness-detection task, but there was no indication of automatic segregation of background sounds while watching a movie. Thus, our results indicate that automatic processes are impaired in aging, which may explain more effortful listening and the extra load on attentional systems when selecting sound streams in noisy environments.

  14. An objective measure of auditory stream segregation based on molecular psychophysics.

    Science.gov (United States)

    Oberfeld, Daniel

    2014-04-01

    Auditory stream segregation is an important paradigm in the study of auditory scene analysis. Performance-based measures of auditory stream segregation have received increasing use as a complement to subjective reports of streaming. For example, the sensitivity in discriminating a temporal shift imposed on one B tone in an ABA sequence consisting of A and B tones that differ in frequency is often used to infer the perceptual organization (one stream vs. two streams). Limitations of these measures are discussed here, and an alternative measure based on the combination of decision weights and sensitivity is suggested. In the experiment, for ABA and ABB sequences varying in tempo (fast/slow) and duration (long/short), the sensitivity (d') in the temporal shift discrimination task did not differ between fast and slow sequences, despite strong differences in perceptual organization. The decision weights assigned to within-stream and between-stream interonset intervals also deviated from the idealized pattern of near-exclusive reliance on between-stream information in the subjectively integrated case, and on within-stream information in the subjectively segregated case. However, an estimate of internal noise computed using a combination of the estimated decision weights and sensitivity differentiated between sequences that were predominantly perceived as integrated or segregated, with significantly higher internal noise estimates for the segregated case. Therefore, the method of using a combination of decision weights and sensitivity provides a measure of auditory stream segregation that overcomes some of the limitations of purely sensitivity-based measures.
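
    The sensitivity index d′ used in such performance-based measures is the standard signal-detection quantity computed from hit and false-alarm rates. A minimal version (the rates below are illustrative, not data from the study):

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF.  Rates of exactly
    0 or 1 must be adjusted before calling (a standard correction)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

    For example, a hit rate of 0.84 against a false-alarm rate of 0.16 gives d′ of roughly 1.99, while chance performance (equal rates) gives d′ = 0.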

  15. Modelling the Emergence and Dynamics of Perceptual Organisation in Auditory Streaming

    Science.gov (United States)

    Mill, Robert W.; Bőhm, Tamás M.; Bendixen, Alexandra; Winkler, István; Denham, Susan L.

    2013-01-01

    Many sound sources can only be recognised from the pattern of sounds they emit, and not from the individual sound events that make up their emission sequences. Auditory scene analysis addresses the difficult task of interpreting the sound world in terms of an unknown number of discrete sound sources (causes) with possibly overlapping signals, and therefore of associating each event with the appropriate source. There are potentially many different ways in which incoming events can be assigned to different causes, which means that the auditory system has to choose between them. This problem has been studied for many years using the auditory streaming paradigm, and recently it has become apparent that instead of making one fixed perceptual decision, given sufficient time, auditory perception switches back and forth between the alternatives, a phenomenon known as perceptual bi- or multi-stability. We propose a new model of auditory scene analysis at the core of which is a process that seeks to discover predictable patterns in the ongoing sound sequence. Representations of predictable fragments are created on the fly, and are maintained, strengthened or weakened on the basis of their predictive success, and conflict with other representations. Auditory perceptual organisation emerges spontaneously from the nature of the competition between these representations. We present detailed comparisons between the model simulations and data from an auditory streaming experiment, and show that the model accounts for many important findings, including: the emergence of, and switching between, alternative organisations; the influence of stimulus parameters on perceptual dominance, switching rate and perceptual phase durations; and the build-up of auditory streaming. The principal contribution of the model is to show that a two-stage process of pattern discovery and competition between incompatible patterns can account for both the contents (perceptual organisations) and the…

  16. Auditory object formation affects modulation perception

    DEFF Research Database (Denmark)

    Piechowiak, Tobias

    2005-01-01

    …the target sound in time determine whether or not across-frequency modulation effects are observed. The results suggest that the binding of sound elements into coherent auditory objects precedes aspects of modulation analysis and imply a cortical locus involving integration times of several hundred…

  17. Hemispheric asymmetry in the auditory facilitation effect in dual-stream rapid serial visual presentation tasks.

    Directory of Open Access Journals (Sweden)

    Yasuhiro Takeshima

    Even though auditory stimuli do not directly convey information related to visual stimuli, they often improve visual detection and identification performance. Auditory stimuli often alter visual perception depending on the reliability of the sensory input, with visual and auditory information reciprocally compensating for ambiguity in the other sensory domain. Perceptual processing is characterized by hemispheric asymmetry: while the left hemisphere is more involved in linguistic processing, the right hemisphere dominates spatial processing. In this context, we hypothesized that an auditory facilitation effect would be observed in the right visual field for a target identification task, and in the left visual field for a target localization task. In the present study, we conducted target identification and localization tasks using a dual-stream rapid serial visual presentation. When two targets are embedded in a rapid serial visual presentation stream, detection or discrimination performance for the second target is generally lower than for the first; this deficit is well known as the attentional blink. Our results indicate that auditory stimuli improved identification performance for the second target within the stream when visual stimuli were presented in the right, but not the left, visual field. In contrast, auditory stimuli improved second-target localization performance when visual stimuli were presented in the left visual field. An auditory facilitation effect was thus observed in perceptual processing, depending on hemispheric specialization. Our results demonstrate a dissociation between the lateral visual hemifield in which a stimulus is projected and the kind of visual judgment that may benefit from the presentation of an auditory cue.

  18. Comparison of perceptual properties of auditory streaming between spectral and amplitude modulation domains.

    Science.gov (United States)

    Yamagishi, Shimpei; Otsuka, Sho; Furukawa, Shigeto; Kashino, Makio

    2017-07-01

    The two-tone sequence (ABA_), which comprises two different sounds (A and B) and a silent gap, has been used to investigate how the auditory system organizes sequential sounds depending on various stimulus conditions or brain states. Auditory streaming can be evoked by differences not only in the tone frequency (the "spectral cue": ΔF_TONE; TONE condition) but also in the amplitude modulation rate (the "AM cue": ΔF_AM; AM condition). The aim of the present study was to explore the relationship between the perceptual properties of auditory streaming in the TONE and AM conditions. A sequence with a long duration (400 repetitions of ABA_) was used to examine the bistability of streaming. The ratio of feature differences that evoked an equivalent probability of the segregated percept was close to the ratio of the Q-values of the auditory and modulation filters, consistent with a "channeling theory" of auditory streaming. On the other hand, for values of ΔF_AM and ΔF_TONE evoking equal probabilities of the segregated percept, the number of perceptual switches was larger for the TONE condition than for the AM condition, indicating that the mechanism(s) that determine the bistability of auditory streaming differ between, or are differently sensitive to, the two domains. Nevertheless, the number of switches for individual listeners was positively correlated between the spectral and AM domains. The results suggest the possibility that the neural substrates for spectral and AM processing share a common switching mechanism but differ in location and/or in the properties of neural activity or the strength of internal noise at each level.
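
    A sinusoidally amplitude-modulated (SAM) tone of the kind that carries the "AM cue" can be sketched as follows; the carrier frequency, modulation rate, and sample rate are illustrative, not the study's stimulus parameters:

```python
import math

def sam_tone(fc=1000.0, fm=40.0, depth=1.0, dur_ms=100, sr=16000):
    """Sinusoidally amplitude-modulated tone: a carrier at fc whose
    envelope fluctuates at the modulation rate fm.  Two such tones with
    the same carrier but different fm differ only in the AM domain, the
    cue contrasted with the spectral (tone-frequency) cue in the study.
    The envelope is normalized so samples stay within [-1, 1].
    """
    n = int(sr * dur_ms / 1000)
    return [(1.0 + depth * math.sin(2 * math.pi * fm * t / sr)) / (1.0 + depth)
            * math.sin(2 * math.pi * fc * t / sr)
            for t in range(n)]
```

    An ABA_ sequence built from sam_tone(fm=f1) and sam_tone(fm=f2) would then realize the AM condition, with ΔF_AM set by the ratio of the two modulation rates.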

  19. Segregation and integration of auditory streams when listening to multi-part music.

    Science.gov (United States)

    Ragert, Marie; Fairhurst, Merle T; Keller, Peter E

    2014-01-01

    In our daily lives, auditory stream segregation allows us to differentiate concurrent sound sources and to make sense of the scene we are experiencing. However, a combination of segregation and the concurrent integration of auditory streams is necessary in order to analyze the relationship between streams and thus perceive a coherent auditory scene. The present functional magnetic resonance imaging study investigates the relative role and neural underpinnings of these listening strategies in multi-part musical stimuli. We compare a real human performance of a piano duet and a synthetic stimulus of the same duet in a prioritized integrative attention paradigm that required the simultaneous segregation and integration of auditory streams. In so doing, we manipulate the degree to which the attended part of the duet led either structurally (attend melody vs. attend accompaniment) or temporally (asynchronies vs. no asynchronies between parts), and thus the relative contributions of integration and segregation used to make an assessment of the leader-follower relationship. We show that perceptually the relationship between parts is biased towards the conventional structural hierarchy in western music in which the melody generally dominates (leads) the accompaniment. Moreover, the assessment varies as a function of both cognitive load, as shown through difficulty ratings, and the interaction of the temporal and the structural relationship factors. Neurally, we see that the temporal relationship between parts, as one important cue for stream segregation, revealed distinct neural activity in the planum temporale. By contrast, integration used when listening to both the temporally separated performance stimulus and the temporally fused synthetic stimulus resulted in activation of the intraparietal sulcus. These results support the hypothesis that the planum temporale and IPS are key structures underlying the mechanisms of segregation and integration of auditory streams.

  1. Membrane potential dynamics of populations of cortical neurons during auditory streaming

    Science.gov (United States)

    Farley, Brandon J.

    2015-01-01

    How a mixture of acoustic sources is perceptually organized into discrete auditory objects remains unclear. One current hypothesis postulates that perceptual segregation of different sources is related to the spatiotemporal separation of cortical responses induced by each acoustic source or stream. In the present study, the dynamics of subthreshold membrane potential activity were measured across the entire tonotopic axis of the rodent primary auditory cortex during the auditory streaming paradigm using voltage-sensitive dye imaging. Consistent with the proposed hypothesis, we observed enhanced spatiotemporal segregation of cortical responses to alternating tone sequences as their frequency separation or presentation rate was increased, both manipulations known to promote stream segregation. However, across most streaming paradigm conditions tested, a substantial cortical region maintaining a response to both tones coexisted with more peripheral cortical regions responding more selectively to one of them. We propose that these coexisting subthreshold representation types could provide neural substrates to support the flexible switching between the integrated and segregated streaming percepts. PMID:26269558

  2. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams.

    Science.gov (United States)

    Bezgin, Gleb; Rybacki, Konrad; van Opstal, A John; Bakker, Rembrandt; Shen, Kelly; Vakorin, Vasily A; McIntosh, Anthony R; Kötter, Rolf

    2014-08-01

    Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced from studies of the visual cortex, which features rather prominent segregation into spatial and non-spatial domains. It has been hypothesised that other sensory systems, including the auditory system, are organised in a similar way on the cortical level. Recent studies offer rich qualitative evidence for the dual stream hypothesis. Here we provide a new paradigm to quantitatively uncover these patterns in the auditory system, based on an analysis of multiple anatomical studies using multivariate techniques. As a test case, we also apply our assessment techniques to the more ubiquitously explored visual system. Importantly, the introduced framework opens the possibility for these techniques to be applied to other neural systems featuring a dichotomised organisation, such as language or music perception. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. The Build-up of Auditory Stream Segregation: A Different Perspective.

    Science.gov (United States)

    Deike, Susann; Heil, Peter; Böckmann-Barthel, Martin; Brechmann, André

    2012-01-01

    The build-up of auditory stream segregation refers to the notion that sequences of alternating A and B sounds initially tend to be heard as a single stream, but with time appear to split into separate streams. The central assumption in the analysis of this phenomenon is that streaming sequences are perceived as one stream at the beginning by default. In the present study, we test the validity of this assumption and document its impact on the apparent build-up phenomenon. Human listeners were presented with ABAB sequences, where A and B were harmonic tone complexes of seven different fundamental frequency separations (Δf) ranging from 2 to 14 semitones. Subjects had to indicate, as promptly as possible, their initial percept of the sequences, as either "one stream" or "two streams," and any changes thereof during the sequences. We found that subjects did not generally indicate a one-stream percept at the beginning of streaming sequences. Instead, the first perceptual decision depended on Δf, with the probability of a one-stream percept decreasing, and that of a two-stream percept increasing, with increasing Δf. Furthermore, subjects required some time to make and report a decision on their perceptual organization. Taking this time into account, the resulting time courses of two-stream probabilities differ markedly from those suggested by the conventional analysis. A build-up-like increase in two-stream probability was found only for the Δf of six semitones. At the other Δf conditions no or only minor increases in two-stream probability occurred. These results shed new light on the build-up of stream segregation and its possible neural correlates.
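    The ABAB stimulus construction described above (harmonic tone complexes whose fundamental frequencies differ by Δf semitones) can be sketched as follows. The sampling rate, tone duration, and number of harmonics below are illustrative assumptions, not values taken from the study:

    ```python
    import numpy as np

    FS = 44100  # sampling rate in Hz (assumed; not stated in the abstract)

    def harmonic_complex(f0, dur, n_harmonics=7, fs=FS):
        """Equal-amplitude harmonic tone complex with fundamental f0."""
        t = np.arange(int(dur * fs)) / fs
        return sum(np.sin(2 * np.pi * f0 * (h + 1) * t)
                   for h in range(n_harmonics)) / n_harmonics

    def abab_sequence(f0_a, delta_f_semitones, tone_dur=0.1, n_cycles=10):
        """Alternating ABAB sequence; the B fundamental lies delta_f_semitones above A."""
        f0_b = f0_a * 2 ** (delta_f_semitones / 12)  # a semitone is a factor of 2**(1/12)
        a = harmonic_complex(f0_a, tone_dur)
        b = harmonic_complex(f0_b, tone_dur)
        return np.concatenate([a, b] * n_cycles)

    seq = abab_sequence(200.0, 6)  # Δf = 6 semitones: the condition showing a build-up-like increase
    ```

    The semitone-to-frequency conversion is the standard equal-tempered one; only the Δf values (2 to 14 semitones) come from the record.
    
    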

  4. The build-up of auditory stream segregation: a different perspective.

    Directory of Open Access Journals (Sweden)

    Susann Deike

    2012-10-01

    Full Text Available The build-up of auditory stream segregation refers to the notion that sequences of alternating A and B sounds initially tend to be heard as a single stream, but with time appear to split into separate streams. The central assumption in the analysis of this phenomenon is that streaming sequences are perceived as one stream at the beginning by default. In the present study, we test the validity of this assumption and document its impact on the apparent build-up phenomenon. Human listeners were presented with ABAB sequences, where A and B were harmonic tone complexes of seven different fundamental frequency separations (∆f) ranging from 2 to 14 semitones. Subjects had to indicate, as promptly as possible, their initial percept of the sequences, as either "one stream" or "two streams," and any changes thereof during the sequences. We found that subjects did not generally indicate a one-stream percept at the beginning of streaming sequences. Instead, the first perceptual decision depended on ∆f, with the probability of a one-stream percept decreasing, and that of a two-stream percept increasing, with increasing ∆f. Furthermore, subjects required some time to make and report a decision on their perceptual organization. Taking this time into account, the resulting time courses of two-stream probabilities differ markedly from those suggested by the conventional analysis. A build-up-like increase in two-stream probability was found only for the ∆f of 6 semitones. At the other ∆f conditions no or only minor increases in two-stream probability occurred. These results shed new light on the build-up of stream segregation and its possible neural correlates.

  5. Auditory, Visual and Audiovisual Speech Processing Streams in Superior Temporal Sulcus.

    Science.gov (United States)

    Venezia, Jonathan H; Vaden, Kenneth I; Rong, Feng; Maddox, Dale; Saberi, Kourosh; Hickok, Gregory

    2017-01-01

    The human superior temporal sulcus (STS) is responsive to visual and auditory information, including sounds and facial cues during speech recognition. We investigated the functional organization of STS with respect to modality-specific and multimodal speech representations. Twenty younger adult participants were instructed to perform an oddball detection task and were presented with auditory, visual, and audiovisual speech stimuli, as well as auditory and visual nonspeech control stimuli in a block fMRI design. Consistent with a hypothesized anterior-posterior processing gradient in STS, auditory, visual and audiovisual stimuli produced the largest BOLD effects in anterior, posterior and middle STS (mSTS), respectively, based on whole-brain, linear mixed effects and principal component analyses. Notably, the mSTS exhibited preferential responses to multisensory stimulation, as well as speech compared to nonspeech. Within the mid-posterior and mSTS regions, response preferences changed gradually from visual, to multisensory, to auditory moving posterior to anterior. Post hoc analysis of visual regions in the posterior STS revealed that a single subregion bordering the mSTS was insensitive to differences in low-level motion kinematics yet distinguished between visual speech and nonspeech based on multi-voxel activation patterns. These results suggest that auditory and visual speech representations are elaborated gradually within anterior and posterior processing streams, respectively, and may be integrated within the mSTS, which is sensitive to more abstract speech information within and across presentation modalities. The spatial organization of STS is consistent with processing streams that are hypothesized to synthesize perceptual speech representations from sensory signals that provide convergent information from visual and auditory modalities.

  6. A physiologically inspired model of auditory stream segregation based on a temporal coherence analysis

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2012-01-01

    The ability to perceptually separate acoustic sources and focus one’s attention on a single source at a time is essential for our ability to use acoustic information. In this study, a physiologically inspired model of human auditory processing [M. L. Jepsen and T. Dau, J. Acoust. Soc. Am. 124, 422-438 (2008)] was used as a front end of a model for auditory stream segregation. A temporal coherence analysis [M. Elhilali, C. Ling, C. Micheyl, A. J. Oxenham and S. Shamma, Neuron 61, 317-329 (2009)] was applied at the output of the preprocessing, using the coherence across tonotopic channels to group
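    The temporal coherence idea described above can be illustrated, in highly simplified form, as a pairwise correlation of tonotopic-channel envelopes: channels whose activity is temporally coherent would be grouped into one stream, while anti-correlated (alternating) channels would be segregated. This is a sketch standing in for the Elhilali et al. analysis, not the published implementation:

    ```python
    import numpy as np

    def coherence_matrix(channels):
        """Pairwise temporal coherence (Pearson correlation) between
        channel envelopes; a simplified stand-in for a coherence analysis."""
        x = np.asarray(channels, dtype=float)
        x = x - x.mean(axis=1, keepdims=True)     # remove each channel's mean
        norm = np.linalg.norm(x, axis=1, keepdims=True)
        norm[norm == 0] = 1.0                     # guard against silent channels
        x = x / norm
        return x @ x.T

    # Two channels driven in alternation, as by an A-B-A-B tone sequence
    t = np.arange(0, 1, 0.001)
    env_a = (np.sin(2 * np.pi * 5 * t) > 0).astype(float)  # on during one half-cycle
    env_b = 1.0 - env_a                                    # on during the other half-cycle
    C = coherence_matrix([env_a, env_b])
    # Anti-phase envelopes are strongly anti-correlated -> segregated streams
    ```

    Synchronously driven channels would instead yield a correlation near +1 and be grouped into a single stream.
    
    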

  7. The mismatch negativity as a measure of auditory stream segregation in a simulated "cocktail-party" scenario: effect of age.

    Science.gov (United States)

    Getzmann, Stephan; Näätänen, Risto

    2015-11-01

    With age the ability to understand speech in multitalker environments usually deteriorates. The central auditory system has to perceptually segregate and group the acoustic input into sequences of distinct auditory objects. The present study used electrophysiological measures to study effects of age on auditory stream segregation in a multitalker scenario. Younger and older adults were presented with streams of short speech stimuli. When a single target stream was presented, the occurrence of a rare (deviant) syllable among frequent (standard) syllables elicited the mismatch negativity (MMN), an electrophysiological correlate of automatic deviance detection. The presence of a second, concurrent stream consisting of the deviant syllable of the target stream reduced the MMN amplitude, especially when located near the target stream. The decrease in MMN amplitude indicates that the rare syllable of the target stream was less readily perceived as deviant, suggesting reduced stream segregation with decreasing stream distance. Moreover, the presence of a concurrent stream increased the MMN peak latency of the older group but not that of the younger group. The results provide neurophysiological evidence for the effects of concurrent speech on auditory processing in older adults, suggesting that older adults need more time for stream segregation in the presence of concurrent speech. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Effects of tonotopicity, adaptation, modulation tuning, and temporal coherence in “primitive” auditory stream segregation

    DEFF Research Database (Denmark)

    Christiansen, Simon Krogholt; Jepsen, Morten Løve; Dau, Torsten

    2014-01-01

    The perceptual organization of two-tone sequences into auditory streams was investigated using a modeling framework consisting of an auditory pre-processing front end [Dau et al., J. Acoust. Soc. Am. 102, 2892–2905 (1997)] combined with a temporal coherence-analysis back end [Elhilali et al., Neuron 61, 317–329 (2009)]. Two experimental paradigms were considered: (i) stream segregation as a function of tone repetition time (TRT) and frequency separation (Δf) and (ii) grouping of distant spectral components based on onset/offset synchrony. The simulated and experimental results of the present study supported the hypothesis that forward masking enhances the ability to perceptually segregate spectrally close tone sequences. Furthermore, the modeling suggested that effects of neural adaptation and processing through modulation-frequency selective filters may enhance the sensitivity to onset

  9. Neurodynamics for auditory stream segregation: tracking sounds in the mustached bat's natural environment.

    Science.gov (United States)

    Kanwal, Jagmeet S; Medvedev, Andrei V; Micheyl, Christophe

    2003-08-01

    During navigation and the search phase of foraging, mustached bats emit approximately 25 ms long echolocation pulses (at 10-40 Hz) that contain multiple harmonics of a constant frequency (CF) component followed by a short (3 ms) downward frequency modulation. In the context of auditory stream segregation, therefore, bats may either perceive a coherent pulse-echo sequence (PEPE...), or segregated pulse and echo streams (P-P-P... and E-E-E...). To identify the neural mechanisms for stream segregation in bats, we developed a simple yet realistic neural network model with seven layers and 420 nodes. Our model required recurrent and lateral inhibition to enable output nodes in the network to 'latch-on' to a single tone (corresponding to a CF component in either the pulse or echo), i.e., exhibit differential suppression by the alternating two tones presented at a high rate (> 10 Hz). To test the applicability of our model to echolocation, we obtained neurophysiological data from the primary auditory cortex of awake mustached bats. Event-related potentials reliably reproduced the latching behaviour observed at output nodes in the network. Pulse as well as nontarget (clutter) echo CFs facilitated this latching. Individual single unit responses were erratic, but when summed over several recording sites, they also exhibited reliable latching behaviour even at 40 Hz. On the basis of these findings, we propose that a neural correlate of auditory stream segregation is present within localized synaptic activity in the mustached bat's auditory cortex and this mechanism may enhance the perception of echolocation sounds in the natural environment.
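    A minimal rate-model illustration of the 'latch-on' behaviour described above, using recurrent (self-excitatory) and lateral (cross-inhibitory) connections, is sketched below. All parameter values are illustrative assumptions; the authors' seven-layer, 420-node network is not reproduced here:

    ```python
    import numpy as np

    def simulate_latching(n_steps=200, dt=0.1, w_self=0.8, w_lat=1.5):
        """Two mutually inhibiting units with self-excitation ('latching'):
        the unit driven first suppresses its competitor even while the
        competitor's own tone is on. Parameters are illustrative."""
        r = np.zeros(2)                       # firing rates of the two units
        history = []
        for step in range(n_steps):
            # Alternating two-tone input at a fast rate: tone A first, then B, ...
            phase = (step // 25) % 2          # 25 steps per tone
            inp = np.array([1.0, 0.0]) if phase == 0 else np.array([0.0, 1.0])
            drive = inp + w_self * r - w_lat * r[::-1]  # recurrent + lateral inhibition
            r = r + dt * (-r + np.clip(drive, 0.0, None))
            r = np.clip(r, 0.0, 2.0)          # simple saturation
            history.append(r.copy())
        return np.array(history)

    h = simulate_latching()
    # Unit 0, driven first, stays dominant even during later B tones
    ```

    With these weights the lateral inhibition from the latched unit keeps the competitor's drive below threshold throughout, the differential-suppression signature the record describes at output nodes.
    
    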

  10. Auditory stream segregation using bandpass noises: evidence from event-related potentials

    Directory of Open Access Journals (Sweden)

    Yingjiu Nie

    2014-09-01

    Full Text Available The current study measured neural responses to investigate auditory stream segregation of noise stimuli with or without clear spectral contrast. Sequences of alternating A and B noise bursts were presented to elicit stream segregation in normal-hearing listeners. The successive B bursts in each sequence maintained an equal amount of temporal separation with manipulations introduced on the last stimulus. The last B burst was either delayed for 50% of the sequences or not delayed for the other 50%. The A bursts were jittered in between every two adjacent B bursts. To study the effects of spectral separation on streaming, the A and B bursts were further manipulated by using either bandpass-filtered noises widely spaced in center frequency or broadband noises. Event-related potentials (ERPs) to the last B bursts were analyzed to compare the neural responses to the delay vs. no-delay trials in both passive and attentive listening conditions. In the passive listening condition, a trend for a possible late mismatch negativity (MMN) or late discriminative negativity (LDN) response was observed only when the A and B bursts were spectrally separate, suggesting that spectral separation in the A and B burst sequences could be conducive to stream segregation at the pre-attentive level. In the attentive condition, a P300 response was consistently elicited regardless of whether there was spectral separation between the A and B bursts, indicating the facilitative role of voluntary attention in stream segregation. The results suggest that reliable ERP measures can be used as indirect indicators for auditory stream segregation in conditions of weak spectral contrast. These findings have important implications for cochlear implant (CI) studies – as spectral information available through a CI device or simulation is substantially degraded, listeners may need more attention to achieve stream segregation.

  11. Segregated in perception, integrated for action: immunity of rhythmic sensorimotor coordination to auditory stream segregation.

    Science.gov (United States)

    Repp, Bruno H

    2009-03-01

    Auditory stream segregation can occur when tones of different pitch (A, B) are repeated cyclically: The larger the pitch separation and the faster the tempo, the more likely perception of two separate streams is to occur. The present study assessed stream segregation in perceptual and sensorimotor tasks, using identical ABBABB ... sequences. The perceptual task required detection of single phase-shifted A tones; this was expected to be facilitated by the presence of B tones unless segregation occurred. The sensorimotor task required tapping in synchrony with the A tones; here the phase correction response (PCR) to shifted A tones was expected to be inhibited by B tones unless segregation occurred. Two sequence tempi and three pitch separations (2, 10, and 48 semitones) were used with musically trained participants. Facilitation of perception occurred only at the smallest pitch separation, whereas the PCR was reduced equally at all separations. These results indicate that auditory action control is immune to perceptual stream segregation, at least in musicians. This may help musicians coordinate with diverse instruments in ensemble playing.
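    The phase correction response measured in the sensorimotor task above is commonly described by a first-order linear correction model: each intertap interval is shortened or lengthened by a fraction alpha of the last tap-tone asynchrony. The sketch below uses that standard model with illustrative parameter values, not values from the study:

    ```python
    def simulate_tapping(n_taps=50, period=500.0, alpha=0.25, perturb_at=25, shift=50.0):
        """First-order linear phase-correction model of synchronized tapping.
        One metronome phase shift is introduced and the asynchrony recovers.
        All numbers (period in ms, gain alpha, shift) are illustrative."""
        onsets = [i * period for i in range(n_taps)]  # metronome onsets
        for i in range(perturb_at, n_taps):
            onsets[i] += shift                         # phase shift persists afterwards
        tap = 0.0
        asynchronies = []
        for i in range(n_taps):
            a = tap - onsets[i]                        # tap-minus-tone asynchrony
            asynchronies.append(a)
            tap = tap + period - alpha * a             # the phase correction response
        return asynchronies

    asyn = simulate_tapping()
    # After the shift, the asynchrony jumps by -50 ms and then
    # decays geometrically by a factor (1 - alpha) per tap
    ```

    In the study, a reduced correction gain in the presence of B tones is the signature of inhibited phase correction; the model makes explicit what "PCR" quantifies.
    
    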

  12. Using a staircase procedure for the objective measurement of auditory stream integration and segregation thresholds

    Directory of Open Access Journals (Sweden)

    Mona Isabel Spielmann

    2013-08-01

    Full Text Available Auditory scene analysis describes the ability to segregate relevant sounds out from the environment and to integrate them into a single sound stream, using the characteristics of the sounds to determine whether or not they are related. This study aims to contrast task performances in objective threshold measurements of segregation and integration using identical stimuli, manipulating two variables known to influence streaming: inter-stimulus interval (ISI) and frequency difference (Δf). For each measurement, one parameter (either ISI or Δf) was held constant while the other was altered in a staircase procedure. By using this paradigm, it is possible to test within subjects across multiple conditions, covering a wide Δf and ISI range in one testing session. The objective tasks were based on across-stream temporal judgments (facilitated by integration) and within-stream deviance detection (facilitated by segregation). Results show the objective integration task is well suited for combination with the staircase procedure, as it yields consistent threshold measurements for separate variations of ISI and Δf, as well as being significantly related to the subjective thresholds. The objective segregation task appears less suited to the staircase procedure. With the integration-based staircase paradigm, a comprehensive assessment of streaming thresholds can be obtained in a relatively short space of time. This permits efficient threshold measurements, particularly in groups for which there is little prior knowledge of the relevant parameter space for streaming perception.
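    A staircase procedure of the general kind described above can be sketched as a two-down/one-up adaptive track, which converges on the ~70.7%-correct point of the psychometric function. The fixed step size, reversal count, and threshold rule below are illustrative; the study's exact staircase rules are not given in the record:

    ```python
    def staircase(respond, start=10.0, step=1.0, n_reversals=8):
        """Two-down/one-up adaptive staircase (sketch).
        `respond(level)` returns True for a correct trial.
        Threshold is estimated as the mean of the reversal levels."""
        level, correct_in_row, direction = start, 0, 0
        reversals = []
        while len(reversals) < n_reversals:
            if respond(level):
                correct_in_row += 1
                if correct_in_row == 2:        # two correct -> make the task harder
                    correct_in_row = 0
                    if direction == +1:        # direction changed: record a reversal
                        reversals.append(level)
                    direction = -1
                    level -= step
            else:                              # one wrong -> make the task easier
                correct_in_row = 0
                if direction == -1:            # direction changed: record a reversal
                    reversals.append(level)
                direction = +1
                level += step
        return sum(reversals) / len(reversals)

    # Deterministic simulated listener with a true threshold of 4.3 units
    thresh = staircase(lambda lvl: lvl >= 4.3)
    ```

    With this deterministic listener the track oscillates between 4 and 5, so the reversal mean lands near the simulated threshold.
    
    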

  13. Auditory stream formation affects comodulation masking release retroactively

    DEFF Research Database (Denmark)

    Dau, Torsten; Ewert, Stephan; Oxenham, A. J.

    2009-01-01

    . Detection thresholds for a 1-kHz sinusoidal signal were measured in the presence of a narrowband (20-Hz-wide) on-frequency masker with or without four comodulated or independent flanking bands that were spaced apart by either 1/6 (narrow spacing) or 1 octave (wide spacing). As expected, CMR was observed...

  14. Evaluating auditory stream segregation of SAM tone sequences by subjective and objective psychoacoustical tasks, and brain activity

    Directory of Open Access Journals (Sweden)

    Lena-Vanessa Dollezal

    2014-06-01

    Full Text Available Auditory stream segregation refers to a segregated percept of signal streams with different acoustic features. Different approaches have been pursued in studies of stream segregation. In psychoacoustics, stream segregation has mostly been investigated with a subjective task asking the subjects to report their percept. Few studies have applied an objective task in which stream segregation is evaluated indirectly by determining thresholds for a percept that depends on whether auditory streams are segregated or not. Furthermore, both perceptual measures and physiological measures of brain activity have been employed, but only little is known about their relation. How the results from different tasks and measures are related is evaluated in the present study using examples relying on the ABA- stimulation paradigm that apply the same stimuli. We presented A and B signals that were sinusoidally amplitude-modulated (SAM) tones providing purely temporal, spectral, or both types of cues to evaluate perceptual stream segregation and its physiological correlate. Which types of cues are most prominent was determined by the choice of carrier and modulation frequencies (fmod) of the signals. In the subjective task subjects reported their percept, and in the objective task we measured their sensitivity for detecting time-shifts of B signals in an ABA- sequence. As a further measure of processes underlying stream segregation we employed functional magnetic resonance imaging (fMRI). SAM tone parameters were chosen to evoke an integrated (1-stream), a segregated (2-stream), or an ambiguous percept by adjusting the fmod difference between A and B tones (∆fmod). The results of both psychoacoustical tasks are significantly correlated. BOLD responses in fMRI depend on ∆fmod between A and B SAM tones. The effect of ∆fmod, however, differs between auditory cortex and frontal regions, suggesting differences in representation related to the degree of perceptual ambiguity of

  15. An Objective Measurement of the Build-Up of Auditory Streaming and of Its Modulation by Attention

    Science.gov (United States)

    Thompson, Sarah K.; Carlyon, Robert P.; Cusack, Rhodri

    2011-01-01

    Three experiments studied auditory streaming using sequences of alternating "ABA" triplets, where "A" and "B" were 50-ms tones differing in frequency by Δf semitones and separated by 75-ms gaps. Experiment 1 showed that detection of a short increase in the gap between a B tone and the preceding A tone, imposed on one ABA triplet, was better…

  16. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli

    Science.gov (United States)

    Hill, N. J.; Schölkopf, B.

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users—for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare ‘oddball’ stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.

  17. An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli.

    Science.gov (United States)

    Hill, N J; Schölkopf, B

    2012-04-01

    We report on the development and online testing of an electroencephalogram-based brain-computer interface (BCI) that aims to be usable by completely paralysed users-for whom visual or motor-system-based BCIs may not be suitable, and among whom reports of successful BCI use have so far been very rare. The current approach exploits covert shifts of attention to auditory stimuli in a dichotic-listening stimulus design. To compare the efficacy of event-related potentials (ERPs) and steady-state auditory evoked potentials (SSAEPs), the stimuli were designed such that they elicited both ERPs and SSAEPs simultaneously. Trial-by-trial feedback was provided online, based on subjects' modulation of N1 and P3 ERP components measured during single 5 s stimulation intervals. All 13 healthy subjects were able to use the BCI, with performance in a binary left/right choice task ranging from 75% to 96% correct across subjects (mean 85%). BCI classification was based on the contrast between stimuli in the attended stream and stimuli in the unattended stream, making use of every stimulus, rather than contrasting frequent standard and rare 'oddball' stimuli. SSAEPs were assessed offline: for all subjects, spectral components at the two exactly known modulation frequencies allowed discrimination of pre-stimulus from stimulus intervals, and of left-only stimuli from right-only stimuli when one side of the dichotic stimulus pair was muted. However, attention modulation of SSAEPs was not sufficient for single-trial BCI communication, even when the subject's attention was clearly focused well enough to allow classification of the same trials via ERPs. ERPs clearly provided a superior basis for BCI. The ERP results are a promising step towards the development of a simple-to-use, reliable yes/no communication system for users in the most severely paralysed states, as well as potential attention-monitoring and -training applications outside the context of assistive technology.
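    The offline SSAEP assessment described in the two records above relies on spectral components at exactly known modulation frequencies. A minimal sketch of that idea follows, with hypothetical modulation frequencies, a hypothetical EEG sampling rate, and a synthetic trial (the study's actual values and classifier are not given here):

    ```python
    import numpy as np

    FS = 256  # EEG sampling rate in Hz (illustrative assumption)

    def power_at(signal, freq, fs=FS):
        """Power of the DFT bin nearest `freq` (assumes freq aligns with a bin)."""
        spec = np.abs(np.fft.rfft(signal)) ** 2
        bin_hz = fs / len(signal)                  # frequency resolution
        return spec[int(round(freq / bin_hz))]

    def classify_side(eeg, f_left=38.0, f_right=42.0):
        """Label a trial by which known modulation frequency dominates.
        The two tag frequencies are hypothetical, not the study's values."""
        return 'left' if power_at(eeg, f_left) > power_at(eeg, f_right) else 'right'

    # Synthetic 1-s 'trial' dominated by the left-stream modulation frequency
    t = np.arange(FS) / FS
    trial = np.sin(2 * np.pi * 38.0 * t) + 0.2 * np.sin(2 * np.pi * 42.0 * t)
    ```

    As the records note, such spectral components discriminated stimulus from pre-stimulus intervals, but attentional modulation of them was too weak for single-trial classification, which is why the ERP contrast was used instead.
    
    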

  18. Communication and control by listening: towards optimal design of a two-class auditory streaming brain-computer interface

    Directory of Open Access Journals (Sweden)

    N. Jeremy Hill

    2012-12-01

    Full Text Available Most brain-computer interface (BCI) systems require users to modulate brain signals in response to visual stimuli. Thus, they may not be useful to people with limited vision, such as those with severe paralysis. One important approach for overcoming this issue is auditory streaming, an approach whereby a BCI system is driven by shifts of attention between two dichotically presented auditory stimulus streams. Motivated by the long-term goal of translating such a system into a reliable, simple yes-no interface for clinical usage, we aim to answer two main questions. First, we asked which of two previously published variants provides superior performance: a fixed-phase (FP) design in which the streams have equal period and opposite phase, or a drifting-phase (DP) design where the periods are unequal. We found FP to be superior to DP (p = 0.002): average performance levels were 80% and 72% correct, respectively. We were also able to show, in a pilot with one subject, that auditory streaming can support continuous control and neurofeedback applications: by shifting attention between ongoing left and right auditory streams, the subject was able to control the position of a paddle in a computer game. Second, we examined whether the system is dependent on eye movements, since it is known that eye movements and auditory attention may influence each other, and any dependence on the ability to move one’s eyes would be a barrier to translation to paralyzed users. We discovered that, despite instructions, some subjects did make eye movements that were indicative of the direction of attention. However, there was no correlation, across subjects, between the reliability of the eye movement signal and the reliability of the BCI system, indicating that our system was configured to work independently of eye movement. Together, these findings are an encouraging step forward toward BCIs that provide practical communication and control options for the most severely

  19. Neural Correlates of Auditory Processing, Learning and Memory Formation in Songbirds

    Science.gov (United States)

    Pinaud, R.; Terleph, T. A.; Wynne, R. D.; Tremere, L. A.

    Songbirds have emerged as powerful experimental models for the study of auditory processing of complex natural communication signals. Intact hearing is necessary for several behaviors in developing and adult animals, including vocal learning, territorial defense, mate selection and individual recognition. These behaviors are thought to require the processing, discrimination and memorization of songs. Although much is known about the brain circuits that participate in sensorimotor (auditory-vocal) integration, especially the "song-control" system, less is known about the anatomical and functional organization of central auditory pathways. Here we discuss findings associated with a telencephalic auditory area known as the caudomedial nidopallium (NCM). NCM has attracted significant interest as it exhibits functional properties that may support higher-order auditory functions such as stimulus discrimination and the formation of auditory memories. NCM neurons are vigorously driven by auditory stimuli. Interestingly, these responses are selective to conspecific, relative to heterospecific songs and artificial stimuli. In addition, forms of experience-dependent plasticity occur in NCM and are song-specific. Finally, recent experiments employing high-throughput quantitative proteomics suggest that complex protein regulatory pathways are engaged in NCM as a result of auditory experience. These molecular cascades are likely central to experience-associated plasticity of NCM circuitry and may be part of a network of calcium-driven molecular events that support the formation of auditory memory traces.

  20. Wireless network interface energy consumption implications of popular streaming formats

    Science.gov (United States)

    Chandra, Surendar

    2001-12-01

    With the proliferation of mobile streaming multimedia, available battery capacity constrains the end-user experience. Since streaming applications tend to be long running, the wireless network interface card's (WNIC) energy consumption is a particularly acute problem. In this work, we explore the WNIC energy consumption implications of popular multimedia streaming formats from Microsoft (Windows Media), Real (RealMedia) and Apple (QuickTime). We investigate the energy consumption under varying stream bandwidth and network loss rates. We also explore history-based client-side strategies to reduce the energy consumed by transitioning the WNIC to a lower-power sleep state. We show that Microsoft media tends to transmit packets at regular intervals; streams optimized for 28.8 Kbps can save over 80% in energy consumption with 2% data loss. A high-bandwidth stream (768 Kbps) can still save 57% in energy consumption with less than 0.3% data loss. For high-bandwidth streams, Microsoft media exploits network-level packet fragmentation, which can lead to excessive packet loss (and wasted energy) in a lossy network. Real stream packets tend to be sent closer to each other, especially at higher bandwidths. QuickTime packets sometimes arrive in quick succession, most likely due to an application-level fragmentation mechanism. Such packets are harder to predict at the network level without understanding the packet semantics.
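    The history-based client-side strategy described above, predicting the next packet arrival from recent interarrival times and sleeping in between, can be sketched as follows. The history length and guard interval are illustrative assumptions, and this is a simplified stand-in for the paper's policy, not its exact algorithm:

    ```python
    from collections import deque

    def plan_sleep(interarrivals, history=8, guard=0.005):
        """History-based sleep policy (sketch): predict the next packet
        interval as the mean of the last `history` interarrival times and
        sleep until a guard interval before the predicted arrival.
        Returns the planned sleep duration (s) for each interval."""
        recent = deque(maxlen=history)
        sleeps = []
        for gap in interarrivals:
            if recent:
                predicted = sum(recent) / len(recent)
                sleeps.append(max(0.0, predicted - guard))
            else:
                sleeps.append(0.0)   # no history yet: stay awake
            recent.append(gap)
        return sleeps

    # Regularly spaced packets, as the record describes for Windows Media streams
    sleeps = plan_sleep([0.1] * 20)
    # Steady 0.1 s gaps -> the WNIC can sleep ~0.095 s of every interval
    ```

    Regular packet spacing is exactly what makes such prediction effective, which is consistent with the record's observation that the regularly spaced Microsoft streams yield the largest savings, while bursty QuickTime arrivals are harder to predict.
    
    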

  1. Dynamics of distraction: competition among auditory streams modulates gain and disrupts inter-trial phase coherence in the human electroencephalogram.

    Directory of Open Access Journals (Sweden)

    Karla D Ponjavic-Conte

    Auditory distraction is a failure to maintain focus on a stream of sounds. We investigated the neural correlates of distraction in a selective-listening pitch-discrimination task with high (competing speech) or low (white noise) distraction. High distraction impaired performance and reduced the N1 peak of the auditory Event-Related Potential evoked by probe tones. In a series of simulations, we explored two theories to account for this effect: disruption of sensory gain or a disruption of inter-trial phase consistency. When compared to these simulations, our data were consistent with both effects of distraction. Distraction reduced the gain of the auditory evoked potential and disrupted the inter-trial phase consistency with which the brain responds to stimulus events. Tones at a non-target, unattended frequency were more susceptible to the effects of distraction than tones within an attended frequency band.

  3. Droplet and cluster formation in freely falling granular streams.

    Science.gov (United States)

    Waitukaitis, Scott R; Grütjen, Helge F; Royer, John R; Jaeger, Heinrich M

    2011-05-01

    Particle beams are important tools for probing atomic and molecular interactions. Here we demonstrate that particle beams also offer a unique opportunity to investigate interactions in macroscopic systems, such as granular media. Motivated by recent experiments on streams of grains that exhibit liquid-like breakup into droplets, we use molecular dynamics simulations to investigate the evolution of a dense stream of macroscopic spheres accelerating out of an opening at the bottom of a reservoir. We show how nanoscale details associated with energy dissipation during collisions modify the stream's macroscopic behavior. We find that inelastic collisions collimate the stream, while the presence of short-range attractive interactions drives structure formation. Parameterizing the collision dynamics by the coefficient of restitution (i.e., the ratio of relative velocities before and after impact) and the strength of the cohesive interaction, we map out a spectrum of behaviors that ranges from gaslike jets in which all grains drift apart to liquid-like streams that break into large droplets containing hundreds of grains. We also find a new, intermediate regime in which small aggregates form by capture from the gas phase, similar to what can be observed in molecular beams. Our results show that nearly all aspects of stream behavior are closely related to the velocity gradient associated with vertical free fall. Led by this observation, we propose a simple energy balance model to explain the droplet formation process. The qualitative as well as many quantitative features of the simulations and the model compare well with available experimental data and provide a first quantitative measure of the role of attractions in freely cooling granular streams.
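The energy-balance argument for droplet formation can be illustrated with a toy criterion (my formalization, not the authors' model): a pair of grains remains bound after a collision if the kinetic energy that survives the inelastic bounce is smaller than the depth of the cohesive well. All numbers below are hypothetical.

```python
# Toy capture criterion: two equal-mass grains collide at relative speed
# v_rel with coefficient of restitution e; if the post-collision kinetic
# energy (in the reduced-mass frame) is below the cohesive well depth
# e_coh, the pair stays bound (droplet capture), otherwise it escapes.

def captured(v_rel, e, e_coh, mass=1.0):
    """Return True when the pair remains bound after the collision."""
    mu = mass / 2.0                        # reduced mass of an equal pair
    ke_after = 0.5 * mu * (e * v_rel) ** 2  # KE surviving the bounce
    return ke_after < e_coh

# With weak cohesion only slow, lossy collisions lead to aggregation.
print(captured(v_rel=0.10, e=0.3, e_coh=1e-3))  # slow + inelastic
print(captured(v_rel=1.00, e=0.9, e_coh=1e-3))  # fast + nearly elastic
```

This mirrors the regimes in the abstract: low restitution and strong cohesion push the stream toward liquid-like droplets, high restitution and weak cohesion toward a gas-like jet.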

  4. The effect of visual cues on auditory stream segregation in musicians and non-musicians.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    BACKGROUND: The ability to separate two interleaved melodies is an important factor in music appreciation. This ability is greatly reduced in people with hearing impairment, contributing to difficulties in music appreciation. The aim of this study was to assess whether visual cues, musical training or musical context could have an effect on this ability, and potentially improve music appreciation for the hearing impaired. METHODS: Musicians (N = 18) and non-musicians (N = 19) were asked to rate the difficulty of segregating a four-note repeating melody from interleaved random distracter notes. Visual cues were provided on half the blocks, and two musical contexts were tested, with the overlap between melody and distracter notes either gradually increasing or decreasing. CONCLUSIONS: Visual cues, musical training, and musical context all affected the difficulty of extracting the melody from a background of interleaved random distracter notes. Visual cues were effective in reducing the difficulty of segregating the melody from distracter notes, even in individuals with no musical training. These results are consistent with theories that indicate an important role for central (top-down) processes in auditory streaming mechanisms, and suggest that visual cues may help the hearing-impaired enjoy music.

  5. Auditory streaming by phase relations between components of harmonic complexes: a comparative study of human subjects and bird forebrain neurons.

    Science.gov (United States)

    Dolležal, Lena-Vanessa; Itatani, Naoya; Günther, Stefanie; Klump, Georg M

    2012-12-01

    Auditory streaming describes a percept in which a sequential series of sounds either is segregated into different streams or is integrated into one stream based on differences in their spectral or temporal characteristics. This phenomenon has been analyzed in human subjects (psychophysics) and European starlings (neurophysiology), presenting harmonic complex (HC) stimuli with different phase relations between their frequency components. Such stimuli allow evaluating streaming by temporal cues, as they vary only in the temporal waveform but have identical amplitude spectra. The present study applied the commonly used ABA- paradigm (van Noorden, 1975) and matched stimulus sets in psychophysics and neurophysiology to evaluate the effects of fundamental frequency (f₀), frequency range (f_LowCutoff), tone duration (TD), and tone repetition time (TRT) on streaming by phase relations of the HC stimuli. By comparing the percept of humans with the rate or temporal responses of avian forebrain neurons, a neuronal correlate of perceptual streaming of HC stimuli is described. The differences in the pattern of the neurons' spike rate responses provide a better explanation for the percept observed in humans than the differences in the temporal responses (i.e., the representation of the periodicity in the timing of the action potentials). Especially for HC stimuli with a short 40-ms duration, the differences in the pattern of the neurons' temporal responses failed to represent the patterns of human perception, whereas the neurons' rate responses showed a good match. These results suggest that differential rate responses are a better predictor of auditory streaming by phase relations than temporal responses.
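The ABA- paradigm referred to above interleaves A and B tones into repeating A B A - triplets, where "-" is a silent slot. A minimal sketch of such a stimulus generator, with illustrative frequencies, tone duration (TD) and tone repetition time (TRT) rather than the study's actual values:

```python
# Minimal van Noorden ABA- sequence generator. The frequencies, TD and
# TRT below are illustrative, not the parameters used in the study.

import math

RATE = 16000  # samples per second

def tone(freq_hz, dur_s):
    """A pure sine tone as a list of samples."""
    n = int(RATE * dur_s)
    return [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]

def silence(dur_s):
    return [0.0] * int(RATE * dur_s)

def aba_sequence(f_a, f_b, td, trt, n_triplets):
    """Build n_triplets of A B A -, with onset-to-onset spacing TRT."""
    gap = silence(trt - td)                      # pad each tone out to TRT
    triplet = (tone(f_a, td) + gap + tone(f_b, td) + gap
               + tone(f_a, td) + gap + silence(trt))  # trailing "-" slot
    return triplet * n_triplets

seq = aba_sequence(f_a=400.0, f_b=560.0, td=0.04, trt=0.1, n_triplets=5)
print(len(seq) / RATE, "s")
```

Each triplet occupies 4 × TRT of time; shrinking TD relative to TRT or widening the A-B frequency separation are the classic knobs for pushing the percept from one integrated gallop toward two segregated streams.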

  6. Decoding stimulus identity from multi-unit activity and local field potentials along the ventral auditory stream in the awake primate: implications for cortical neural prostheses

    Science.gov (United States)

    Smith, Elliot; Kellis, Spencer; House, Paul; Greger, Bradley

    2013-02-01

    Objective. Hierarchical processing of auditory sensory information is believed to occur in two streams: a ventral stream responsible for stimulus identity and a dorsal stream responsible for processing spatial elements of a stimulus. The objective of the current study is to examine neural coding in this processing stream in the context of understanding the possibility for an auditory cortical neural prosthesis. Approach. We examined the selectivity for species-specific primate vocalizations in the ventral auditory processing stream by applying a statistical classifier to neural data recorded from microelectrode arrays. Multi-unit activity (MUA) and local field potential (LFP) data recorded simultaneously from primary auditory complex (AI) and rostral parabelt (PBr) were decoded on a trial-by-trial basis. Main results. While decode performance in AI was well above chance, mean performance in PBr did not deviate >15% from chance level. Mean performance levels were similar for MUA and LFP decodes. Increasing the spectral and temporal resolution improved decode performance; while inter-electrode spacing could be as large as 1.14 mm without degrading decode performance. Significance. These results serve as preliminary guidance for a human auditory cortical neural prosthesis; instructing interface implementation, microstimulation patterns and anatomical placement.

  7. Auditory-prefrontal axonal connectivity in the macaque cortex: quantitative assessment of processing streams

    NARCIS (Netherlands)

    Bezgin, G.; Rybacki, K.; Opstal, A.J. van; Bakker, R.; Shen, K.; Vakorin, V.A.; McIntosh, A.R.; Kötter, R.

    2014-01-01

    Primate sensory systems subserve complex neurocomputational functions. Consequently, these systems are organised anatomically in a distributed fashion, commonly linking areas to form specialised processing streams. Each stream is related to a specific function, as evidenced from studies of the

  8. On-Line Statistical Segmentation of a Non-Speech Auditory Stream in Neonates as Demonstrated by Event-Related Brain Potentials

    Science.gov (United States)

    Kudo, Noriko; Nonaka, Yulri; Mizuno, Noriko; Mizuno, Katsumi; Okanoya, Kazuo

    2011-01-01

    The ability to statistically segment a continuous auditory stream is one of the most important preparations for initiating language learning. Such ability is available to human infants at 8 months of age, as shown by a behavioral measurement. However, behavioral study alone cannot determine how early this ability is available. A recent study using…

  9. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we do not yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.

  10. Functional dissociation of transient and sustained fMRI BOLD components in human auditory cortex revealed with a streaming paradigm based on interaural time differences.

    Science.gov (United States)

    Schadwinkel, Stefan; Gutschalk, Alexander

    2010-12-01

    A number of physiological studies suggest that feature-selective adaptation is relevant to the pre-processing for auditory streaming, the perceptual separation of overlapping sound sources. Most of these studies are focused on spectral differences between streams, which are considered most important for streaming. However, spatial cues also support streaming, alone or in combination with spectral cues, but physiological studies of spatial cues for streaming remain scarce. Here, we investigate whether the tuning of selective adaptation for interaural time differences (ITD) coincides with the range where streaming perception is observed. FMRI activation that has been shown to adapt depending on the repetition rate was studied with a streaming paradigm where two tones were differently lateralized by ITD. Listeners were presented with five different ΔITD conditions (62.5, 125, 187.5, 343.75, or 687.5 μs) out of an active baseline with no ΔITD during fMRI. The results showed reduced adaptation for conditions with ΔITD ≥ 125 μs, reflected by enhanced sustained BOLD activity. The percentage of streaming perception for these stimuli increased from approximately 20% for ΔITD = 62.5 μs to > 60% for ΔITD = 125 μs. No further sustained BOLD enhancement was observed when the ΔITD was increased beyond ΔITD = 125 μs, whereas the streaming probability continued to increase up to 90% for ΔITD = 687.5 μs. Conversely, the transient BOLD response, at the transition from baseline to ΔITD blocks, increased most prominently as ΔITD was increased from 187.5 to 343.75 μs. These results demonstrate a clear dissociation of transient and sustained components of the BOLD activity in auditory cortex.
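Lateralizing a tone purely by interaural time difference, as in the ΔITD conditions above, amounts to delaying one ear's channel relative to the other. A minimal sketch; the 96 kHz rate is my assumption, chosen so that 125 μs is an integer number of samples:

```python
# Sketch of ITD lateralization: the right channel is a delayed copy of
# the left. The sample rate and tone parameters are assumptions of this
# sketch, not the study's stimulus values.

import math

RATE = 96000  # 96 kHz -> a 125 us ITD is exactly 12 samples

def itd_tone(freq_hz, dur_s, itd_s):
    """Return (left, right) channels with the right delayed by itd_s."""
    n = int(RATE * dur_s)
    shift = int(round(itd_s * RATE))
    left = [math.sin(2 * math.pi * freq_hz * i / RATE) for i in range(n)]
    right = [0.0] * shift + left[:n - shift]  # delayed copy, same length
    return left, right

left, right = itd_tone(500.0, 0.05, 125e-6)
print(len(left), len(right))
```

Alternating the sign of the delay between A and B tones then yields the two differently lateralized streams of the paradigm.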

  11. Memory formation and retrieval of neuronal silencing in the auditory cortex.

    Science.gov (United States)

    Nomura, Hiroshi; Hara, Kojiro; Abe, Reimi; Hitora-Imamura, Natsuko; Nakayama, Ryota; Sasaki, Takuya; Matsuki, Norio; Ikegaya, Yuji

    2015-08-04

    Sensory stimuli not only activate specific populations of cortical neurons but can also silence other populations. However, it remains unclear whether neuronal silencing per se leads to memory formation and behavioral expression. Here we show that mice can report optogenetic inactivation of auditory neuron ensembles by exhibiting fear responses or seeking a reward. Mice receiving pairings of footshock and silencing of a neuronal ensemble exhibited a fear response selectively to the subsequent silencing of the same ensemble. The valence of the neuronal silencing was preserved for at least 30 d and was susceptible to extinction training. When we silenced an ensemble in one side of auditory cortex for conditioning, silencing of an ensemble in another side induced no fear response. We also found that mice can find a reward based on the presence or absence of the silencing. Neuronal silencing was stored as working memory. Taken together, we propose that neuronal silencing without explicit activation in the cerebral cortex is enough to elicit a cognitive behavior.

  12. The formation of direct collapse black holes under the influence of streaming velocities

    Science.gov (United States)

    Schauer, Anna T. P.; Regan, John; Glover, Simon C. O.; Klessen, Ralf S.

    2017-11-01

    We study the influence of a high baryonic streaming velocity on the formation of direct collapse black holes (DCBHs) with the help of cosmological simulations carried out using the moving mesh code arepo. We show that a streaming velocity that is as large as three times the root-mean-squared value is effective at suppressing the formation of H2-cooled minihaloes, while still allowing larger atomic cooling haloes (ACHs) to form. We find that enough H2 forms in the centre of these ACHs to effectively cool the gas, demonstrating that a high streaming velocity by itself cannot produce the conditions required for DCBH formation. However, we argue that high streaming velocity regions do provide an ideal environment for the formation of DCBHs in close pairs of ACHs (the `synchronized halo' model). Due to the absence of star formation in minihaloes, the gas remains chemically pristine until the ACHs form. If two such haloes form with only a small separation in time and space, then the one forming stars earlier can provide enough ultraviolet radiation to suppress H2 cooling in the other, allowing it to collapse to form a DCBH. Baryonic streaming may therefore play a crucial role in the formation of the seeds of the highest redshift quasars.

  13. New stream format: progress report on containing data size explosion

    Science.gov (United States)

    LaCour, Patrick; Reich, Alfred J.; Nakagawa, Kent H.; Schulze, Steffen F.; Grodd, Laurence

    2003-07-01

    The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using existing data format specifications. The ITRS roadmap indicates that single-layer MEBES files in 2002 reached the 50 GB range, worst case. Under the sponsorship of SEMI, a working group was formed to create a new format for describing integrated circuit layouts in a more efficient and extensible manner. This paper reports on the status and potential benefits the new format can deliver.

  14. Repetition suppression and repetition enhancement underlie auditory memory-trace formation in the human brain: an MEG study.

    Science.gov (United States)

    Recasens, Marc; Leung, Sumie; Grimm, Sabine; Nowak, Rafal; Escera, Carles

    2015-03-01

    The formation of echoic memory traces has traditionally been inferred from enhanced responses to deviations from them. The mismatch negativity (MMN), an auditory event-related potential (ERP) elicited between 100 and 250 ms after sound deviation, is an indirect index of regularity encoding that reflects a memory-based comparison process. Recently, repetition positivity (RP) has been described as a candidate ERP correlate of direct memory trace formation. RP consists of repetition suppression and enhancement effects occurring in different auditory components between 50 and 250 ms after sound onset. However, the neuronal generators engaged in the encoding of repeated stimulus features have received little interest. This study investigates the neuronal sources underlying the formation and strengthening of new memory traces by employing a roving-standard paradigm, in which trains of different frequencies and different lengths are presented randomly. Source generators of repetition-enhanced (RE) and repetition-suppressed (RS) activity were modeled using magnetoencephalography (MEG) in healthy subjects. Our results show that, in line with RP findings, N1m (~95-150 ms) activity is suppressed with stimulus repetition. In addition, we observed the emergence of a sustained field (~230-270 ms) that showed RE. Source analysis revealed neuronal generators of RS and RE located in both auditory and non-auditory areas, such as the medial parietal cortex and frontal areas. The different timing and location of the neural generators involved in RS and RE point to the existence of functionally separate mechanisms devoted to acoustic memory-trace formation at different auditory processing stages of the human brain.

  15. Formation of temporal-feature maps in the barn owl's auditory system

    Science.gov (United States)

    Kempter, Richard

    2000-03-01

    Computational maps are of central importance to the brain's representation of the outside world. The question of how maps are formed during ontogenetic development is a subject of intense research (Hubel & Wiesel, Proc R Soc B 198:1, 1977; Buonomano & Merzenich, Annu Rev Neurosci 21:149, 1998). Map development in the primary visual cortex is in principle well explained; in the auditory system it is not, partly because the mechanisms underlying the formation of temporal-feature maps are hardly understood (Carr, Annu Rev Neurosci 16:223, 1993). Through a modelling study based on computer simulations of a system of spiking neurons, a solution is offered to the problem of how a map of interaural time differences is set up in the nucleus laminaris of the barn owl, as a typical example. An array of neurons is able to represent interaural time differences in an orderly manner, viz., as a map, if homosynaptic spike-based Hebbian learning (Gerstner et al., Nature 383:76, 1996; Kempter et al., Phys Rev E 59:4498, 1999) is combined with a presynaptic propagation of synaptic modifications (Fitzsimonds & Poo, Physiol Rev 78:143, 1998). The latter may be orders of magnitude weaker than the former. The algorithm is a key mechanism for the formation of temporal-feature maps on a submillisecond time scale.

  16. Ectopic external auditory canal and ossicular formation in the oculo-auriculo-vertebral spectrum

    Energy Technology Data Exchange (ETDEWEB)

    Supakul, Nucharin [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ramathibodi Hospital, Mahidol University, Department of Diagnostic and Therapeutic Radiology, Bangkok (Thailand); Kralik, Stephen F. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Ho, Chang Y. [Indiana University School of Medicine, Department of Radiology, Indianapolis, IN (United States); Riley Children's Hospital, MRI Department, Indianapolis, IN (United States)

    2015-07-15

    Ear abnormalities in the oculo-auriculo-vertebral spectrum commonly present with varying degrees of external and middle ear atresia, usually in the expected locations of the temporal bone and associated soft tissues, without ectopia of the external auditory canal. We present the unique imaging of a 4-year-old girl with right hemifacial microsomia and an ectopically located atretic external auditory canal, terminating in a hypoplastic temporomandibular joint containing bony structures with the appearance of auditory ossicles. This finding suggests an early embryological dysfunction involving Meckel's cartilage of the first branchial arch. (orig.)

  17. OASIS: progress on implementing the new stream format for containing data size explosion

    Science.gov (United States)

    Schulze, Steffen F.; Nakagawa, Kent H.; Buck, Peter D.

    2004-06-01

    The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using existing data format specifications. The ITRS roadmap indicates that single-layer MEBES files in 2004 exceeded the 200 GB threshold, worst case. OASIS, the new stream format developed under the sponsorship of SEMI, was approved in an industry-wide vote in June 2003. The new format, which on average reduces file sizes by an order of magnitude, makes it possible to streamline data flows and increases the efficiency of data exchange. Work to implement the new format in software tools is in progress. This paper gives an overview of the new format, reports results on data volume reduction, and reports on the status and benefits the new format can deliver. A data flow relying on OASIS as the input and transfer format is discussed.

  18. Cold streams in early massive hot haloes as the main mode of galaxy formation.

    Science.gov (United States)

    Dekel, A; Birnboim, Y; Engel, G; Freundlich, J; Goerdt, T; Mumcuoglu, M; Neistein, E; Pichon, C; Teyssier, R; Zinger, E

    2009-01-22

    Massive galaxies in the young Universe, ten billion years ago, formed stars at surprising intensities. Although this is commonly attributed to violent mergers, the properties of many of these galaxies are incompatible with such events, showing gas-rich, clumpy, extended rotating disks not dominated by spheroids. Cosmological simulations and clustering theory are used to explore how these galaxies acquired their gas. Here we report that they are 'stream-fed galaxies', formed from steady, narrow, cold gas streams that penetrate the shock-heated media of massive dark matter haloes. A comparison with the observed abundance of star-forming galaxies implies that most of the input gas must rapidly convert to stars. One-third of the stream mass is in gas clumps leading to mergers of mass ratio greater than 1:10, and the rest is in smoother flows. With a merger duty cycle of 0.1, three-quarters of the galaxies forming stars at a given rate are fed by smooth streams. The rarer, submillimetre galaxies that form stars even more intensely are largely merger-induced starbursts. Unlike destructive mergers, the streams are likely to keep the rotating disk configuration intact, although turbulent and broken into giant star-forming clumps that merge into a central spheroid. This stream-driven scenario for the formation of discs and spheroids is an alternative to the merger picture.

  19. On the age and formation mechanism of the core of the Quadrantid meteoroid stream

    Science.gov (United States)

    Abedin, Abedin; Spurný, Pavel; Wiegert, Paul; Pokorný, Petr; Borovička, Jiří; Brown, Peter

    2015-11-01

    The Quadrantid meteor shower is among the strongest annual meteor showers and has drawn the attention of scientists for several decades. The stream is unusual for several reasons: its very short duration around maximum activity (≈12-14 h) as detected by visual, photographic and radar observations, its recent onset (around 1835 AD; Quetelet, L.A.J. [1839], Catalogue des principales apparitions d'étoiles filantes), and the fact that it had been the only major stream without an obvious parent body until 2003. Ever since, there have been debates as to the age of the stream and the nature of its proposed parent body, asteroid 2003 EH1. In this work, we present results on the most probable age and formation mechanism of the narrow portion of the Quadrantid meteoroid stream. For the first time we use data on eight high-precision photographic Quadrantids, equivalent to gram-to-kilogram sizes, to constrain the most likely age of the core of the stream. Of the eight high-precision photographic Quadrantids, five pertain directly to the narrow portion of the stream. In addition, we use data on five high-precision radar Quadrantids observed within the peak of the shower. We performed backward numerical integrations of the equations of motion of a large number of 'clones' of both the eight high-precision photographic and the five radar Quadrantid meteors, along with the proposed parent body, 2003 EH1. According to our results from the backward integrations, the most likely age of the narrow structure of the Quadrantids is between 200 and 300 years. These presumed ejection epochs, corresponding to 1700-1800 AD, are then used for forward integrations of large numbers of hypothetical meteoroids ejected from the parent 2003 EH1 until the present epoch. The aim is to constrain whether the core of the Quadrantid meteoroid stream is consistent with a previously proposed relatively young age (≈200 years).

  20. Simulations of Early Baryonic Structure Formation with Stream Velocity: II. The Gas Fraction

    Energy Technology Data Exchange (ETDEWEB)

    Naoz, Smadar; Yoshida, Naoki; Gnedin, Nickolay Y.

    2012-12-28

    Understanding the gas content of high redshift halos is crucial for studying the formation of the first generation of galaxies and reionization. Recently, Tseliakhovich & Hirata showed that the relative "stream" velocity between the dark matter and baryons at the time of recombination - formally a second order effect, but an unusually large one - can influence the later structure formation history of the Universe. We quantify the effect of the stream velocity on the so-called "characteristic mass" - the minimum mass of a dark matter halo capable of retaining most of its baryons throughout its formation epoch - using three different high-resolution sets of cosmological simulations (with separate transfer functions for baryons and dark matter) that vary in box size, particle number, and the value of the relative velocity between the dark matter and baryons. In order to understand this effect theoretically, we generalize the linear theory filtering mass to properly account for the difference between the dark matter and baryonic density fluctuation evolution induced by the stream velocity. We show that the new filtering mass provides an accurate estimate for the characteristic mass, while other theoretical ansatzes for the characteristic mass are substantially less precise.

  1. The encoding of auditory objects in auditory cortex: insights from magnetoencephalography.

    Science.gov (United States)

    Simon, Jonathan Z

    2015-02-01

    Auditory objects, like their visual counterparts, are perceptually defined constructs, but nevertheless must arise from underlying neural circuitry. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects listening to complex auditory scenes, we review studies that demonstrate that auditory objects are indeed neurally represented in auditory cortex. The studies use neural responses obtained from different experiments in which subjects selectively listen to one of two competing auditory streams embedded in a variety of auditory scenes. The auditory streams overlap spatially and often spectrally. In particular, the studies demonstrate that selective attentional gain does not act globally on the entire auditory scene, but rather acts differentially on the separate auditory streams. This stream-based attentional gain is then used as a tool to individually analyze the different neural representations of the competing auditory streams. The neural representation of the attended stream, located in posterior auditory cortex, dominates the neural responses. Critically, when the intensities of the attended and background streams are separately varied over a wide intensity range, the neural representation of the attended speech adapts only to the intensity of that speaker, irrespective of the intensity of the background speaker. This demonstrates object-level intensity gain control in addition to the above object-level selective attentional gain. Overall, these results indicate that concurrently streaming auditory objects, even if spectrally overlapping and not resolvable at the auditory periphery, are individually neurally encoded in auditory cortex, as separate objects.

  2. What Is the deficit in Phonological Processing Deficits: Auditory Sensitivity, Masking, or Category Formation?

    Science.gov (United States)

    Nittrouer, Susan; Shune, Samantha; Lowenstein, Joanna H.

    2011-01-01

    Although children with language impairments, including those associated with reading, usually demonstrate deficits in phonological processing, there is minimal agreement as to the source of those deficits. This study examined two problems hypothesized to be possible sources: either poor auditory sensitivity to speech-relevant acoustic properties,…

  3. Perceptual grouping over time within and across auditory and tactile modalities.

    Directory of Open Access Journals (Sweden)

    I-Fan Lin

    In auditory scene analysis, population separation and temporal coherence have been proposed to explain how auditory features are grouped together and streamed over time. The present study investigated whether these two theories can be applied to tactile streaming and whether temporal coherence theory can be applied to crossmodal streaming. The results show that synchrony detection between two tones/taps at different frequencies/locations became difficult when one of the tones/taps was embedded in a perceptual stream. While taps applied to the same location were streamed over time, taps applied to different locations were not. This observation suggests that tactile stream formation can be explained by population-separation theory. On the other hand, temporally coherent auditory stimuli at different frequencies were streamed over time, but temporally coherent tactile stimuli applied to different locations were not. When there was within-modality streaming, temporally coherent auditory and tactile stimuli were not streamed over time either. These observations suggest a limitation of temporal coherence theory when it is applied to perceptual grouping over time.

  4. BINEX as a Format for Near-Real Time GNSS and Other Data Streams

    Science.gov (United States)

    Estey, L.; Mencin, D.

    2008-12-01

    BINEX, for "BINary Exchange", is an open and operational binary format for GNSS data. It has been available as a format option on several GPS receivers from several manufacturers, starting with Ashtech's microZ in 2000, and has evolved to support GNSS data on Trimble's receivers. The data structures are very compact and are organized in epoch-by-epoch records that do not rely on any prior records for decoding. Typically, only a few hundred bytes per epoch are needed to store the L1 and L2 phase and code pseudoranges (both to 1 mm resolution), CNo measurements (to 0.1 dBHz resolution), loss-of-lock flags, and so on. Ancillary site data, such as meteorological observations, can also be stored as BINEX records. Each BINEX record identifies whether it is of little-endian or big-endian construction, so that BINEX creation can be optimized for the processor type in a GNSS receiver or for later construction by computer. Each BINEX record also has a scaled checksum or CRC of 1-16 bytes, depending on record length. The Plate Boundary Observatory is currently using near-real-time BINEX streams from Trimble NetRS receivers as a means of outputting various ancillary site data. For example, meteorological data, pore pressure, borehole tilt, and so on can be monitored through the multiple serial I/O ports on the NetRS, and these port queries, bundled as BINEX records, are directed to one or more BINEX output streams in addition to the primary GPS data epochs. Users can tap into whichever stream meets their needs. In addition, the BINEX records are stored on the NetRS in session files for later retrieval in case of real-time data loss in the transmitted streams.
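
    The self-describing framing sketched above (an endianness flag in each record plus a length-dependent checksum) can be illustrated in a few lines. The exact field layout, sync values, and checksum rule below are assumptions made for illustration only; they are not the published BINEX specification:

```python
import struct

def build_record(rec_id, payload, little_endian=True):
    """Build one record of a hypothetical BINEX-like framing (illustrative layout):
    sync/endianness byte, record ID, 16-bit payload length, payload, 1-byte checksum."""
    endian = '<' if little_endian else '>'
    head = bytes([0xE2 if little_endian else 0xD2, rec_id]) + struct.pack(endian + 'H', len(payload))
    checksum = (rec_id + sum(head[2:4]) + sum(payload)) % 256
    return head + payload + bytes([checksum])

def parse_record(buf):
    """Decode one record without reference to any prior record, as described above."""
    endian = '<' if buf[0] == 0xE2 else '>'   # each record declares its own byte order
    rec_id = buf[1]
    (length,) = struct.unpack(endian + 'H', buf[2:4])
    payload = buf[4:4 + length]
    computed = (rec_id + sum(buf[2:4]) + sum(payload)) % 256
    if computed != buf[4 + length]:
        raise ValueError('checksum mismatch')
    return rec_id, endian, payload
```

Because every record carries its own endianness flag and length, a reader can process a stream record by record and skip or resynchronize after a corrupted one.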

  5. Modeling wood dynamics, jam formation, and sediment storage in a gravel-bed stream

    Science.gov (United States)

    Eaton, B. C.; Hassan, M. A.; Davidson, S. L.

    2012-12-01

    In small and intermediate sized streams, the interaction between wood and bed material transport often determines the nature of the physical habitat, which in turn influences the health of the stream's ecosystem. We present a stochastic model that can be used to simulate the effects on physical habitat of forest fires, climate change, and other environmental disturbances that alter wood recruitment. The model predicts large wood (LW) loads in a stream as well as the volume of sediment stored by the wood; while it is parameterized to describe gravel bed streams similar to a well-studied field prototype, Fishtrap Creek, British Columbia, it can be calibrated to other systems as well. In the model, LW pieces are produced and modified over time as a result of random tree-fall, LW breakage, LW movement, and piece interaction to form LW jams. Each LW piece traps a portion of the annual bed material transport entering the reach and releases the stored sediment when the LW piece is entrained and moved. The equations governing sediment storage are based on a set of flume experiments also scaled to the field prototype. The model predicts wood loads ranging from 70 m3/ha to more than 300 m3/ha, with a mean value of 178 m3/ha: both the range and the mean value are consistent with field data from streams with similar riparian forest types and climate. The model also predicts an LW jam spacing that is consistent with field data. Furthermore, our modeling results demonstrate that the high spatial and temporal variability in sediment storage, sediment transport, and channel morphology associated with LW-dominated streams occurs only when LW pieces interact and form jams. Model runs that do not include jam formation are much less variable. These results suggest that river restoration efforts using engineered LW pieces that are fixed in place and not permitted to interact will be less successful at restoring the geomorphic processes responsible for producing diverse, productive
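
    The stochastic bookkeeping described above (random tree-fall recruits wood, each piece traps part of the annual bed-material supply, and entrained pieces leave with their stored sediment) can be sketched as follows. All rates and probabilities here are invented for illustration; they are not the calibrated Fishtrap Creek parameters:

```python
import random

def simulate_reach(years=500, seed=1):
    """Toy stochastic sketch of large-wood (LW) dynamics in a reach.
    Returns annual wood load (piece count) and sediment storage series."""
    random.seed(seed)
    pieces = []                      # each piece: {'stored': volume, 'jammed': bool}
    wood_load, storage = [], []
    for _ in range(years):
        # random tree-fall recruitment (0-3 pieces per year, illustrative)
        for _ in range(random.randint(0, 3)):
            pieces.append({'stored': 0.0, 'jammed': False})
        supply = 10.0                # annual bed-material supply entering the reach
        for p in pieces:
            # each piece traps a fraction of the remaining transport; jammed pieces trap more
            trap = 0.05 * supply * (2.0 if p['jammed'] else 1.0)
            p['stored'] += trap
            supply -= trap
        # entrainment: jammed pieces are more stable; a moved piece leaves the
        # reach and releases its stored sediment downstream (toy simplification)
        move = lambda p: random.random() < (0.02 if p['jammed'] else 0.10)
        pieces = [p for p in pieces if not move(p)]
        # simple jam formation: with enough pieces present, interactions jam one of them
        if len(pieces) >= 5:
            random.choice(pieces)['jammed'] = True
        wood_load.append(len(pieces))
        storage.append(sum(p['stored'] for p in pieces))
    return wood_load, storage
```

Even this caricature reproduces the qualitative point of the abstract: runs with the jam-formation rule enabled show much larger year-to-year swings in storage than runs without it.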

  6. Finding your mate at a cocktail party: frequency separation promotes auditory stream segregation of concurrent voices in multi-species frog choruses.

    Directory of Open Access Journals (Sweden)

    Vivek Nityananda

    Vocal communication in crowded social environments is a difficult problem for both humans and nonhuman animals. Yet many important social behaviors require listeners to detect, recognize, and discriminate among signals in a complex acoustic milieu comprising the overlapping signals of multiple individuals, often of multiple species. Humans exploit a relatively small number of acoustic cues to segregate overlapping voices (as well as other mixtures of concurrent sounds, like polyphonic music). By comparison, we know little about how nonhuman animals are adapted to solve similar communication problems. One important cue enabling source segregation in human speech communication is frequency separation between concurrent voices: differences in frequency promote perceptual segregation of overlapping voices into separate "auditory streams" that can be followed through time. In this study, we show that frequency separation (ΔF) also enables frogs to segregate concurrent vocalizations, such as those routinely encountered in mixed-species breeding choruses. We presented female gray treefrogs (Hyla chrysoscelis) with a pulsed target signal (simulating an attractive conspecific call) in the presence of a continuous stream of distractor pulses (simulating an overlapping, unattractive heterospecific call). When the ΔF between target and distractor was small (e.g., ≤3 semitones), females exhibited low levels of responsiveness, indicating a failure to recognize the target as an attractive signal when the distractor had a similar frequency. Subjects became increasingly more responsive to the target, as indicated by shorter latencies for phonotaxis, as the ΔF between target and distractor increased (e.g., ΔF = 6-12 semitones). These results support the conclusion that gray treefrogs, like humans, can exploit frequency separation as a perceptual cue to segregate concurrent voices in noisy social environments. The ability of these frogs to segregate

  7. Auditory pathways: anatomy and physiology.

    Science.gov (United States)

    Pickles, James O

    2015-01-01

    This chapter outlines the anatomy and physiology of the auditory pathways. After a brief analysis of the external and middle ears and the cochlea, the responses of auditory nerve fibers are described. The central nervous system is analyzed in more detail. A scheme is provided to help understand the complex and multiple auditory pathways running through the brainstem. The multiple pathways are based on the need to preserve accurate timing while extracting complex spectral patterns in the auditory input. The auditory nerve fibers branch to give two pathways: a ventral sound-localizing stream and a dorsal, mainly pattern-recognition stream, which innervate the different divisions of the cochlear nucleus. The outputs of the two streams, with their two types of analysis, are progressively combined in the inferior colliculus and onwards to produce the representation of what can be called the "auditory objects" in the external world. The progressive extraction of critical features in the auditory stimulus at the different levels of the central auditory system, from cochlear nucleus to auditory cortex, is described. In addition, the auditory centrifugal system, running from the cortex in multiple stages to the organ of Corti of the cochlea, is described. © 2015 Elsevier B.V. All rights reserved.

  8. Single-channel in-ear-EEG detects the focus of auditory attention to concurrent tone streams and mixed speech

    Science.gov (United States)

    Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas

    2017-06-01

    Objective. Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening (‘cocktail party’) scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. Approach. To investigate whether a listener’s attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal (‘in-Ear-EEG’) and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n  =  7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Main results. Each individual participant’s attentional focus could be detected from the single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. Significance. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener’s focus of attention.

  9. Single-channel in-ear-EEG detects the focus of auditory attention to concurrent tone streams and mixed speech.

    Science.gov (United States)

    Fiedler, Lorenz; Wöstmann, Malte; Graversen, Carina; Brandmeyer, Alex; Lunner, Thomas; Obleser, Jonas

    2017-06-01

    Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening ('cocktail party') scenarios. This implies that EEG might provide valuable information to complement hearing aids with some form of EEG and to install a level of neuro-feedback. To investigate whether a listener's attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal ('in-Ear-EEG') and additionally from 64 electrodes on the scalp. In two different, concurrent listening tasks, participants (n  =  7) were fitted with individualized in-Ear-EEG pieces and were either asked to attend to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. Each individual participant's attentional focus could be detected from the single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information to identify a listener's focus of attention.
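
    A forward encoding model of the kind mentioned above can be sketched as a lagged least-squares regression from a stimulus envelope to a single EEG channel (a generic TRF-style estimator; the data shapes and the absence of regularization here are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

def fit_forward_model(stim, eeg, n_lags=32):
    """Predict one EEG channel from lagged copies of the stimulus envelope.
    Returns the lag weights (a temporal response function) and the prediction."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]          # column `lag` = stimulus delayed by `lag` samples
    w, *_ = np.linalg.lstsq(X, eeg, rcond=None)
    return w, X @ w

# synthetic check: generate "EEG" from a known lagged kernel and recover it
rng = np.random.default_rng(0)
stim = rng.standard_normal(2000)
kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
eeg = np.convolve(stim, kernel)[:2000]
w, pred = fit_forward_model(stim, eeg, n_lags=8)
```

In an attention-decoding setting, one would fit such a model per stimulus stream and compare how well each stream's envelope predicts the recorded channel; the better-predicted stream is taken as the attended one.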

  10. Formation of a low-crystalline Zn-silicate in a stream in SW Sardinia, Italy

    Science.gov (United States)

    Wanty, Richard B.; De Giudici, G.; Onnis, P.; Rutherford, D.; Kimball, B.A.; Podda, F.; Cidu, R.; Lattanzi, P.; Medas, D.

    2013-01-01

    In southwestern Sardinia, Italy, the Rio Naracauli drains a catchment that includes several abandoned mines. The drainage from the mines and associated waste rocks has led to extreme concentrations of dissolved Zn, but because of the near-neutral pH, concentrations of other metals remain low. In the reach from approximately 2300 to 3000 m downstream from the headwaters area, an amorphous Zn-silicate precipitates from the water. In this reach, concentrations of both Zn and silica remain nearly constant, but the loads (measured in mass/time) of both increase, suggesting that new Zn and silica are supplied to the stream, likely from emerging groundwater. Zinc isotope signatures of the solid are heavier than the dissolved Zn by about 0.5 permil in 66/64Zn, suggesting that an extracellular biologically mediated adsorption process may be involved in the formation of the Zn-silicate.

  11. Auditory Display

    DEFF Research Database (Denmark)

    volume. The conference's topics include auditory exploration of data via sonification and audification; real-time monitoring of multivariate data; sound in immersive interfaces and teleoperation; perceptual issues in auditory display; sound in generalized computer interfaces; technologies supporting auditory display creation; data handling for auditory display systems; and applications of auditory display.

  12. A multi-grain reduced-complexity model for step formation and stability in steep streams

    Science.gov (United States)

    Saletti, Matteo; Molnar, Peter; Turowski, Jens; Rickenmann, Dieter

    2017-04-01

    We present a multi-grain, particle-based, reduced-complexity model for simulating the formation and stability of step-pool morphology that specifically considers the granular interactions between sediment and the river bed leading to entrainment and deposition of grains. The model CAST2 (Cellular Automaton Sediment Transport), based on the uniform-size model of Saletti et al. [2016], contains phenomenological parameterizations of sediment supply, bed load transport, particle entrainment and deposition, and granular interactions in a cellular-automaton space. CAST2 simulates the effect of different grain sizes by considering two types of particles: fine grains, which can be mobilized by any flow, and coarse grains, whose mobility is flow-dependent. The model has been applied to test the effect of granular forces on step formation and stability in step-pool channels, as hypothesized in the jammed-state framework of Church and Zimmermann [2007]. The jamming of particles in motion and their enhanced stability on the bed are modelled explicitly: in this way, steps are effectively generated during high-flow periods, and they are stable during low flows when sediment supply is small. Moreover, model results are used to show which fundamental processes are required to produce and maintain steps in steep streams; these findings are consistent with field observations. Finally, the effect of flood frequency on step density is investigated by means of long stochastic simulations with repeated flood events. Model results show that systems with high flood frequency are characterized by greater step density, due to the dominance of step-forming conditions. Our results show the potential of reduced-complexity models as learning tools to gain new insight into the complex feedbacks and poorly understood processes characterizing rapidly changing geomorphic systems like step-pool streams, pointing out the importance of granular effects on the formation and stability of the step
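
    The two-grain entrainment and jamming rules described above can be caricatured as a one-dimensional cellular-automaton update. The probabilities and the jamming rule below are illustrative assumptions, not the published CAST2 parameterization:

```python
import random

def step(bed, flow, seed=None):
    """One update of a toy 1-D two-grain cellular automaton on a periodic bed.
    bed[i] is 0 (empty), 1 (fine grain), or 2 (coarse grain)."""
    rng = random.Random(seed)
    n = len(bed)
    new = list(bed)
    for i, g in enumerate(bed):
        if g == 0:
            continue
        # fine grains move under any flow; coarse grains only under high flow
        p = 0.5 if g == 1 else (0.3 if flow > 0.7 else 0.0)
        # jamming: a coarse grain flanked by another coarse grain is far more stable
        if g == 2 and 2 in (bed[i - 1], bed[(i + 1) % n]):
            p *= 0.2
        # move one cell downstream if the target cell is currently empty
        if rng.random() < p and new[(i + 1) % n] == 0:
            new[(i + 1) % n], new[i] = g, 0
    return new
```

Iterating this update under alternating high and low `flow` values lets clusters of mutually stabilizing coarse grains (proto-steps) emerge during high flows and persist during low flows, mirroring the jammed-state behavior the abstract describes. Grain counts are conserved by construction.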

  13. Shock formation and structure in magnetic reconnection with a streaming flow.

    Science.gov (United States)

    Wu, Liangneng; Ma, Zhiwei; Zhang, Haowei

    2017-08-18

    The features of magnetic reconnection with a streaming flow have been investigated on the basis of a compressible resistive magnetohydrodynamic (MHD) model. A super-Alfvénic streaming flow greatly enhances magnetic reconnection: the maximum reconnection rate is almost four times larger with a super-Alfvénic streaming flow than with a sub-Alfvénic one. In the nonlinear stage, a pair of shocks is observed in the inflow region; these manifest as slow shocks for a sub-Alfvénic streaming flow and as fast shocks for a super-Alfvénic streaming flow. The quasi-periodic oscillation of reconnection rates in the decaying phase for a super-Alfvénic streaming flow results from the different drifting velocities of the shock and the X point.

  14. Formation of the Andromeda giant stream: asymmetric structure and disc progenitor

    Science.gov (United States)

    Kirihara, T.; Miki, Y.; Mori, M.; Kawaguchi, T.; Rich, R. M.

    2017-01-01

    We focus on the evidence of a past minor merger discovered in the halo of the Andromeda galaxy (M31). Previous N-body studies have enjoyed moderate success in producing the observed giant stellar stream (GSS) and stellar shells in M31's halo. The observed distribution of stars in the halo of M31 shows an asymmetric surface brightness profile across the GSS; however, the effect of the morphology of the progenitor galaxy on the internal structure of the GSS requires further investigation in theoretical studies. To investigate the physical connection between the characteristic surface brightness in the GSS and the morphology of the progenitor dwarf galaxy, we systematically vary the thickness, rotation velocity and initial inclination of the disc dwarf galaxy in N-body simulations. The formation of the observed structures appears to be dominated by the progenitor's rotation. Besides reproducing the observed GSS and two shells in detail, we predict additional structures for further observations. We predict the detectability of the progenitor's stellar core in the phase-space density distribution, azimuthal metallicity gradient of the western shell-like structure and an additional extended shell in the north-western direction that may constrain the properties of the progenitor galaxy.

  15. The interaction of acoustic and linguistic grouping cues in auditory object formation

    Science.gov (United States)

    Shapley, Kathy; Carrell, Thomas

    2005-09-01

    One of the earliest explanations for good speech intelligibility in poor listening situations was context [Miller et al., J. Exp. Psychol. 41 (1951)]. Context presumably allows listeners to group and predict speech appropriately and is known as a top-down listening strategy. Amplitude comodulation is another mechanism that has been shown to improve sentence intelligibility. Amplitude comodulation provides acoustic grouping information without changing the linguistic content of the desired signal [Carrell and Opie, Percept. Psychophys. 52 (1992); Hu and Wang, Proceedings of ICASSP-02 (2002)] and is considered a bottom-up process. The present experiment investigated how amplitude comodulation and semantic information combine to improve speech intelligibility. Sentences with high- and low-predictability word sequences [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84 (1988)] were constructed in two different formats: time-varying sinusoidal sentences (TVS) and reduced-channel sentences (RC). These stimuli were chosen because they minimally represent the traditionally defined speech cues and therefore emphasize the importance of high-level context effects and low-level acoustic grouping cues. Results indicated that semantic information did not influence intelligibility levels of TVS and RC sentences. In addition, amplitude modulation aided listeners' intelligibility scores in the TVS condition but hindered them in the RC condition.

  16. Numerical Simulation of Bubble Formation and Transport in Cross-Flowing Streams

    Directory of Open Access Journals (Sweden)

    Yanneck Wielhorski

    2014-09-01

    Numerical simulations of confined bubble trains formed by cross-flowing streams are carried out with the numerical code THETIS, which is based on the Volume of Fluid (VOF) method and has been developed for two-phase flow studies, especially for gas-liquid systems. The surface tension force, which needs particular attention in order to determine the shape of the interface accurately, is computed using the Continuum Surface Force (CSF) model. Through the coupling of a VOF-PLIC technique (Piecewise-Linear Interface Calculation) and a smoothing function of adjustable thickness, the Smooth Volume of Fluid (SVOF) technique is intended to capture accurately strong interface distortion, rupture, or reconnection with large density and viscosity contrasts between phases. This approach is extended by using the regular VOF-PLIC technique while applying a smoothing procedure that affects both the averaging of physical characteristics and the surface tension modeling. The front-capturing strategy is extended to gas injection. We begin by introducing the main physical phenomena occurring during bubble formation in microfluidic systems. Then, an experimental study performed in a cylindrical T-junction for different wetting behaviors is presented. For the wetting configuration, Cartesian 2D numerical simulations of gas-liquid bubble production in a T-junction with rectangular, planar cross sections are shown and compared with experimental measurements. Finally, the results obtained for the bubble break-up mechanism, shape, transport, and pressure drop along the channel are presented, discussed, and compared with experimental and numerical results from the literature.

  17. Collisions with meteoroid streams as one possible mechanism for the formation of hyperbolic cometary orbits

    Science.gov (United States)

    Guliyev, Ayyub; Nabiyev, Shaig

    2017-07-01

    This paper presents the results of a statistical analysis of the dynamic parameters of 300 comets that have osculating hyperbolic orbits. It is shown that such comets differ from other comets in their large perihelion distances and a predominance of retrograde motion. The values of i, the inclination of the hyperbolic comets, show a comparative excess over the interval 90-120°. The predominance of large perihelion distances q makes it difficult to attribute the excess hyperbolic velocity of these comets to physical processes taking place in their nuclei. The working hypothesis that the hyperbolic excess of the parameter e might arise after comets pass through meteoroid streams is therefore studied. To evaluate this hypothesis, the distribution of the orbits of hyperbolic comets relative to the planes of motion of 112 established meteoroid streams is analyzed. The number N of orbit nodes of hyperbolic comets with respect to the plane of each stream at various distances is calculated. To determine the degree of redundancy of N, a special computing algorithm was applied that provides the expected value n_av as well as the standard deviation σ of the number of cometary nodes at the plane of each stream. A comparative analysis of the N and n_av values, taking σ into account, suggests an excess in 40 stream cases. This implies that the passage of comets through meteoroid streams can lead to an acceleration of the comets' heliocentric velocity.
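
    The significance comparison described above (an observed node count N versus an expected value n_av with standard deviation σ) can be illustrated with a Monte Carlo estimate. The geometry below is drastically simplified (nodes drawn uniformly on a sphere, a fixed angular band around the stream plane), so it only illustrates the N-versus-n_av comparison, not the authors' actual algorithm:

```python
import math, random

def node_excess(observed_n, n_comets, band_halfwidth_deg, trials=2000, seed=0):
    """Monte Carlo estimate of the expected node count n_av and its standard
    deviation sigma for isotropically oriented orbits, plus the z-score of the
    observed count. Simplified illustrative geometry only."""
    rng = random.Random(seed)
    half = math.radians(band_halfwidth_deg)
    counts = []
    for _ in range(trials):
        c = 0
        for _ in range(n_comets):
            # latitude of a uniformly random point on the unit sphere
            lat = math.asin(rng.uniform(-1.0, 1.0))
            if abs(lat) <= half:          # node falls within the band about the plane
                c += 1
        counts.append(c)
    n_av = sum(counts) / trials
    sigma = (sum((c - n_av) ** 2 for c in counts) / trials) ** 0.5
    z = (observed_n - n_av) / sigma if sigma else float('inf')
    return n_av, sigma, z
```

With isotropic nodes, the expected fraction inside a band of angular half-width h is sin(h), so a large positive z-score flags a stream plane with significantly more cometary nodes than chance would predict.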

  18. Auditory Spatial Layout

    Science.gov (United States)

    Wightman, Frederic L.; Jenison, Rick

    1995-01-01

    All auditory sensory information is packaged in a pair of acoustical pressure waveforms, one at each ear. While there is obvious structure in these waveforms, that structure (temporal and spectral patterns) bears no simple relationship to the structure of the environmental objects that produced them. The properties of auditory objects and their layout in space must be derived completely from higher level processing of the peripheral input. This chapter begins with a discussion of the peculiarities of acoustical stimuli and how they are received by the human auditory system. A distinction is made between the ambient sound field and the effective stimulus to differentiate the perceptual distinctions among various simple classes of sound sources (ambient field) from the known perceptual consequences of the linear transformations of the sound wave from source to receiver (effective stimulus). Next, the definition of an auditory object is dealt with, specifically the question of how the various components of a sound stream become segregated into distinct auditory objects. The remainder of the chapter focuses on issues related to the spatial layout of auditory objects, both stationary and moving.

  19. Inhibition of mTOR by Rapamycin Results in Auditory Hair Cell Damage and Decreased Spiral Ganglion Neuron Outgrowth and Neurite Formation In Vitro

    Directory of Open Access Journals (Sweden)

    Katharina Leitmeyer

    2015-01-01

    Rapamycin is an antifungal agent with immunosuppressive properties. Rapamycin inhibits the mammalian target of rapamycin (mTOR) by blocking the mTOR complex 1 (mTORC1). mTOR is an atypical serine/threonine protein kinase, which controls cell growth, cell proliferation, and cell metabolism. However, less is known about the mTOR pathway in the inner ear. First, we evaluated whether or not the two mTOR complexes (mTORC1 and mTORC2, respectively) are present in the mammalian cochlea. Next, tissue explants of 5-day-old rats were treated with increasing concentrations of rapamycin to explore the effects of rapamycin on auditory hair cells and spiral ganglion neurons. Auditory hair cell survival, spiral ganglion neuron number, length of neurites, and neuronal survival were analyzed in vitro. Our data indicate that both mTOR complexes are expressed in the mammalian cochlea. We observed that inhibition of mTOR by rapamycin results in dose-dependent damage of auditory hair cells. Moreover, spiral ganglion neurite number and length of neurites were significantly decreased, relative to control, at all concentrations used, in a dose-dependent manner. Our data indicate that mTOR may play a role in the survival of hair cells and modulates spiral ganglion neuronal outgrowth and neurite formation.

  20. Ultrashort electromagnetic clusters formation by two-stream superheterodyne free electron lasers

    DEFF Research Database (Denmark)

    Kulish, Viktor V.; Lysenko, Alexander V.; Volk, Iurii I.

    2016-01-01

    A cubic nonlinear self-consistent theory of multiharmonic two-stream superheterodyne free electron lasers (TSFEL) of a klystron type, intended to form powerful ultrashort clusters of an electromagnetic field is constructed. Plural three-wave parametric resonant interactions of wave harmonics have...

  1. Auditory agnosia.

    Science.gov (United States)

    Slevc, L Robert; Shell, Alison R

    2015-01-01

    Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition. © 2015 Elsevier B.V. All rights reserved.

  2. Functional Mapping of the Human Auditory Cortex: fMRI Investigation of a Patient with Auditory Agnosia from Trauma to the Inferior Colliculus.

    Science.gov (United States)

    Poliva, Oren; Bestelmeyer, Patricia E G; Hall, Michelle; Bultitude, Janet H; Koller, Kristin; Rafal, Robert D

    2015-09-01

    To use functional magnetic resonance imaging to map the auditory cortical fields that are activated, or nonreactive, to sounds in patient M.L., who has auditory agnosia caused by trauma to the inferior colliculi. The patient cannot recognize speech or environmental sounds. Her discrimination is greatly facilitated by context and visibility of the speaker's facial movements, and under forced-choice testing. Her auditory temporal resolution is severely compromised. Her discrimination is more impaired for words differing in voice onset time than place of articulation. Words presented to her right ear are extinguished with dichotic presentation; auditory stimuli in the right hemifield are mislocalized to the left. We used functional magnetic resonance imaging to examine cortical activations to different categories of meaningful sounds embedded in a block design. Sounds activated the caudal sub-area of M.L.'s primary auditory cortex (hA1) bilaterally and her right posterior superior temporal gyrus (auditory dorsal stream), but not the rostral sub-area (hR) of her primary auditory cortex or the anterior superior temporal gyrus in either hemisphere (auditory ventral stream). Auditory agnosia reflects dysfunction of the auditory ventral stream. The ventral and dorsal auditory streams are already segregated as early as the primary auditory cortex, with the ventral stream projecting from hR and the dorsal stream from hA1. M.L.'s leftward localization bias, preserved audiovisual integration, and phoneme perception are explained by preserved processing in her right auditory dorsal stream.

  3. Seeing the song: left auditory structures may track auditory-visual dynamic alignment.

    Directory of Open Access Journals (Sweden)

    Julia A Mossbridge

    Auditory and visual signals generated by a single source tend to be temporally correlated, such as the synchronous sounds of footsteps and the limb movements of a walker. Continuous tracking and comparison of the dynamics of auditory-visual streams is thus useful for the perceptual binding of information arising from a common source. Although language-related mechanisms have been implicated in the tracking of speech-related auditory-visual signals (e.g., speech sounds and lip movements), it is not well known what sensory mechanisms generally track ongoing auditory-visual synchrony for non-speech signals in a complex auditory-visual environment. To begin to address this question, we used music and visual displays that varied in the dynamics of multiple features (e.g., auditory loudness and pitch; visual luminance, color, size, motion, and organization) across multiple time scales. Auditory activity (monitored using auditory steady-state responses, ASSR) was selectively reduced in the left hemisphere when the music and dynamic visual displays were temporally misaligned. Importantly, ASSR was not affected when attentional engagement with the music was reduced, or when the visual displays presented dynamics clearly dissimilar to the music. These results suggest that left-lateralized auditory mechanisms are sensitive to auditory-visual temporal alignment, but perhaps only when the dynamics of the auditory and visual streams are similar. These mechanisms may contribute to correct auditory-visual binding in a busy sensory environment.

  4. Live-streaming: Time-lapse video evidence of novel streamer formation mechanism and varying viscosity.

    Science.gov (United States)

    Parvinzadeh Gashti, Mazeyar; Bellavance, Julien; Kroukamp, Otini; Wolfaardt, Gideon; Taghavi, Seyed Mohammad; Greener, Jesse

    2015-07-01

    Time-lapse videos of growing biofilms were analyzed using a background subtraction method, which removed camouflaging effects from the heterogeneous field of view to reveal evidence of streamer formation from optically dense biofilm segments. In addition, quantitative measurements of biofilm velocity and optical density, combined with mathematical modeling, demonstrated that streamer formation occurred from mature, high-viscosity biofilms. We propose a streamer formation mechanism by sudden partial detachment, as opposed to continuous elongation as observed in other microfluidic studies. Additionally, streamer formation occurred in straight microchannels, as opposed to serpentine or pseudo-porous channels, as previously reported.
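    The background-subtraction step described above can be sketched in a few lines: estimate a static background as the per-pixel median of early frames and subtract it, so that newly appearing structures such as streamers stand out against the heterogeneous field of view. This is a minimal illustration on synthetic data, not the authors' actual pipeline; the array shapes and the injected "streamer" feature are assumptions.

```python
import numpy as np

def subtract_background(frames, n_background=10):
    """Highlight moving/growing structures by removing the static scene.

    frames: array of shape (T, H, W) holding grayscale video frames.
    The background is estimated as the per-pixel median of the first
    `n_background` frames (assumed to contain little biofilm motion).
    """
    background = np.median(frames[:n_background], axis=0)
    # Absolute difference flags pixels that changed relative to the static
    # scene, e.g. an optically dense segment detaching as a streamer.
    return np.abs(frames - background)

# Synthetic demo: a static intensity gradient plus a bright feature that
# appears only in the late frames.
T, H, W = 20, 32, 32
frames = np.tile(np.linspace(0.0, 1.0, W), (T, H, 1))
frames[15:, 10:12, 20:25] += 2.0   # "streamer" appears at frame 15
diff = subtract_background(frames)
print(diff[:10].max() < 1e-9)      # early frames match the background
print(diff[15, 11, 22] > 1.0)      # the new feature stands out clearly
```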

  5. The effect of pH on thiosulfate formation in a biotechnological process for the removal of hydrogen sulfide from gas streams

    NARCIS (Netherlands)

    Bosch, van den P.L.F.; Sorokin, D.Y.; Buisman, C.J.N.; Janssen, A.J.H.

    2008-01-01

    In a biotechnological process for hydrogen sulfide (H2S) removal from gas streams, operating at natronophilic conditions, formation of thiosulfate (S2O3^2-) is unfavorable, as it leads to a reduced sulfur production. Thiosulfate formation was studied in gas-lift bioreactors, using natronophilic

  6. Foam formation in a biotechnological process for the removal of hydrogen sulfide from gas streams

    NARCIS (Netherlands)

    Kleinjan, W.E.; Marcelis, C.L.M.; Keizer, de A.; Janssen, A.J.H.; Cohen Stuart, M.A.

    2006-01-01

    Foam formation in aqueous suspensions of biologically produced sulfur is studied in a foam generator at 30°C, with the objective of describing trends and phenomena that govern foam formation in a biotechnological hydrogen sulfide removal process. Air is bubbled through a suspension and the

  7. Music Radio as a Format Remediated for the Stream-Based Music Use

    DEFF Research Database (Denmark)

    Ægidius, Andreas Lenander

    With this, I aim to investigate what a theory of radio as a digital format could add to further studies of the socio-cultural, practical, and material multiplicities of music radio. My aim is also to further develop the theoretical notion of the format within media studies and sound studies. I propose...

  8. Applying a Hydrodynamical Treatment of Stream Flow and Accretion Disk Formation in WASP-12/b Exoplanetary System

    Science.gov (United States)

    Weaver, Ian; Lopez, Aaron; Macias, Phil

    2016-01-01

    WASP-12b is a hot Jupiter orbiting dangerously close to its parent star WASP-12 at a radius 1/44th the distance between the Earth and the Sun, or roughly 16 times closer than Mercury. WASP-12's gravitational influence at this incredibly close proximity generates tidal forces on WASP-12b that distort the planet into an egg-like shape. As a result, the planet's surface overflows its Roche lobe through L1, transferring mass to the host star at a rate of 270 million metric tonnes per second. This mass transferring stream forms an accretion disk that transits the parent star, which aids sensitive instruments, such as the Kepler spacecraft, whose role is to examine the periodic dimming of main sequence stars in order to detect ones with orbiting planets. The quasi-ballistic stream trajectory is approximated by that of a massless point particle released from analogous initial conditions in 2D. The particle dynamics are shown to deviate negligibly across a broad range of initial conditions, indicating applicability of our model to "WASP-like" systems in general. We then apply a comprehensive fluid treatment by way of hydrodynamical code FLASH in order to directly model the behavior of mass transfer in a non-inertial reference frame and subsequent disk formation. We hope to employ this model to generate virtual spectroscopic signatures and compare them against collected light curve data from the Hubble Space Telescope's Cosmic Origins Spectrograph (COS).
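    The quasi-ballistic stream approximation described above amounts to integrating a test particle in the co-rotating frame of the circular restricted three-body problem. A minimal 2D sketch in nondimensional units (the mass ratio and release conditions are illustrative, not the values used in the study):

```python
import numpy as np

MU = 0.001  # assumed mass ratio m2/(m1+m2); primaries at (-MU,0) and (1-MU,0)

def accel(state, mu=MU):
    """Equations of motion in the rotating frame (separation = 1, Omega = 1),
    including centrifugal and Coriolis terms."""
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)          # distance to the star
    r2 = np.hypot(x - 1 + mu, y)      # distance to the planet
    ax = 2*vy + x - (1 - mu)*(x + mu)/r1**3 - mu*(x - 1 + mu)/r2**3
    ay = -2*vx + y - (1 - mu)*y/r1**3 - mu*y/r2**3
    return np.array([vx, vy, ax, ay])

def rk4_step(state, dt):
    k1 = accel(state)
    k2 = accel(state + 0.5*dt*k1)
    k3 = accel(state + 0.5*dt*k2)
    k4 = accel(state + dt*k3)
    return state + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)

def jacobi(state, mu=MU):
    """Jacobi constant; conserved along a trajectory, so it doubles as an
    integration-accuracy check."""
    x, y, vx, vy = state
    r1 = np.hypot(x + mu, y)
    r2 = np.hypot(x - 1 + mu, y)
    return x**2 + y**2 + 2*(1 - mu)/r1 + 2*mu/r2 - vx**2 - vy**2

# Release a test particle near L1 with a small velocity, as in Roche-lobe
# overflow, and integrate for a short time.
x_l1 = 1 - (MU/3)**(1/3)              # approximate L1 location
state = np.array([x_l1 - 1e-3, 0.0, -1e-3, 0.0])
c0 = jacobi(state)
for _ in range(500):
    state = rk4_step(state, 1e-3)
print(abs(jacobi(state) - c0) < 1e-6)  # Jacobi constant is preserved
```

Conservation of the Jacobi constant is the usual sanity check that the integrator, rather than numerical drift, is shaping the trajectory.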

  9. Supersonic gas streams enhance the formation of massive black holes in the early universe.

    Science.gov (United States)

    Hirano, Shingo; Hosokawa, Takashi; Yoshida, Naoki; Kuiper, Rolf

    2017-09-29

    The origin of super-massive black holes in the early universe remains poorly understood. Gravitational collapse of a massive primordial gas cloud is a promising initial process, but theoretical studies have difficulty growing the black hole fast enough. We report numerical simulations of early black hole formation starting from realistic cosmological conditions. Supersonic gas motions left over from the Big Bang prevent early gas cloud formation until rapid gas condensation is triggered in a protogalactic halo. A protostar is formed in the dense, turbulent gas cloud, and it grows by sporadic mass accretion until it acquires 34,000 solar masses. The massive star ends its life with a catastrophic collapse to leave a black hole-a promising seed for the formation of a monstrous black hole. Copyright © 2017 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  11. Highly efficient star formation in NGC 5253 possibly from stream-fed accretion.

    Science.gov (United States)

    Turner, J L; Beck, S C; Benford, D J; Consiglio, S M; Ho, P T P; Kovács, A; Meier, D S; Zhao, J-H

    2015-03-19

    Gas clouds in present-day galaxies are inefficient at forming stars. Low star-formation efficiency is a critical parameter in galaxy evolution: it is why stars are still forming nearly 14 billion years after the Big Bang and why star clusters generally do not survive their births, instead dispersing to form galactic disks or bulges. Yet the existence of ancient massive bound star clusters (globular clusters) in the Milky Way suggests that efficiencies were higher when they formed ten billion years ago. A local dwarf galaxy, NGC 5253, has a young star cluster that provides an example of highly efficient star formation. Here we report the detection of the J = 3→2 rotational transition of CO at the location of the massive cluster. The gas cloud is hot, dense, quiescent and extremely dusty. Its gas-to-dust ratio is lower than the Galactic value, which we attribute to dust enrichment by the embedded star cluster. Its star-formation efficiency exceeds 50 per cent, tenfold that of clouds in the Milky Way. We suggest that high efficiency results from the force-feeding of star formation by a streamer of gas falling into the galaxy.

  12. Neural correlates of auditory scale illusion.

    Science.gov (United States)

    Kuriki, Shinya; Numao, Ryousuke; Nemoto, Iku

    2016-09-01

    The auditory illusory perception "scale illusion" occurs when ascending and descending musical scale tones are delivered in a dichotic manner, such that the higher or lower tone at each instant is presented alternately to the right and left ears. Resulting tone sequences have a zigzag pitch in one ear and the reversed (zagzig) pitch in the other ear. Most listeners hear illusory smooth pitch sequences of up-down and down-up streams in the two ears separated in higher and lower halves of the scale. Although many behavioral studies have been conducted, how and where in the brain the illusory percept is formed have not been elucidated. In this study, we conducted functional magnetic resonance imaging using sequential tones that induced scale illusion (ILL) and those that mimicked the percept of scale illusion (PCP), and we compared the activation responses evoked by those stimuli by region-of-interest analysis. We examined the effects of adaptation, i.e., the attenuation of response that occurs when close-frequency sounds are repeated, which might interfere with the changes in activation by the illusion process. Results of the activation difference of the two stimuli, measured at varied tempi of tone presentation, in the superior temporal auditory cortex were not explained by adaptation. Instead, excess activation of the ILL stimulus from the PCP stimulus at moderate tempi (83 and 126 bpm) was significant in the posterior auditory cortex with rightward superiority, while significant prefrontal activation was dominant at the highest tempo (245 bpm). We suggest that the area of the planum temporale posterior to the primary auditory cortex is mainly involved in the illusion formation, and that the illusion-related process is strongly dependent on the rate of tone presentation. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex.SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory
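    Stimulus-reconstruction analyses of this kind typically fit a linear "backward" model that maps multichannel neural recordings back to a speech feature such as the envelope; reconstruction fidelity for attended versus ignored streams then quantifies their representation. A hedged sketch using ridge regression on synthetic data (not the authors' exact estimator; the channel count and noise level are assumptions):

```python
import numpy as np

def reconstruct(neural, envelope, lam=1.0):
    """Ridge-regression backward model: learn weights mapping the neural
    channels to the stimulus envelope, then return the reconstruction."""
    w = np.linalg.solve(neural.T @ neural + lam*np.eye(neural.shape[1]),
                        neural.T @ envelope)
    return neural @ w

# Synthetic data: 8 "sensors" that each carry a scaled copy of the speech
# envelope plus noise, mimicking a faithfully represented stream.
rng = np.random.default_rng(3)
T, C = 500, 8
envelope = rng.normal(size=T)
neural = np.outer(envelope, rng.normal(size=C)) + 0.1*rng.normal(size=(T, C))
est = reconstruct(neural, envelope)
r = np.corrcoef(est, envelope)[0, 1]
print(r > 0.9)   # high reconstruction fidelity for this synthetic stream
```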

  14. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing in which the auditory scene is rapidly organized into distinct streams, and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  15. Functional imaging of auditory scene analysis.

    Science.gov (United States)

    Gutschalk, Alexander; Dykstra, Andrew R

    2014-01-01

    Our auditory system is constantly faced with the task of decomposing the complex mixture of sound arriving at the ears into perceptually independent streams constituting accurate representations of individual sound sources. This decomposition, termed auditory scene analysis, is critical for both survival and communication, and is thought to underlie both speech and music perception. The neural underpinnings of auditory scene analysis have been studied utilizing invasive experiments with animal models as well as non-invasive (MEG, EEG, and fMRI) and invasive (intracranial EEG) studies conducted with human listeners. The present article reviews human neurophysiological research investigating the neural basis of auditory scene analysis, with emphasis on two classical paradigms termed streaming and informational masking. Other paradigms - such as the continuity illusion, mistuned harmonics, and multi-speaker environments - are briefly addressed thereafter. We conclude by discussing the emerging evidence for the role of auditory cortex in remapping incoming acoustic signals into a perceptual representation of auditory streams, which are then available for selective attention and further conscious processing. This article is part of a Special Issue entitled Human Auditory Neuroimaging. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Post-flare formation of the accretion stream and a dip in pulse profiles of LMC X-4

    Science.gov (United States)

    Beri, Aru; Paul, Biswajit

    2017-10-01

    We report here a pulse profile evolution study of the accreting X-ray pulsar LMC X-4 during and after the large X-ray flares, using data from the two observatories XMM-Newton and RXTE. During the flares, the pulse profiles were found to have a significant phase offset in the range of 0.2-0.5 compared to the pulse profiles immediately before or after the flare. Investigating the pulse profiles for about 10^5 s after the flares, it was found that it takes about 2000-4000 s for the modified accretion column to return to its normal structure and for formation of an accretion stream that causes a dip in the pulse profile of LMC X-4. We have also carried out pulse phase resolved spectroscopy of LMC X-4 in narrow phase bins using data from EPIC-pn and spectroscopically confirmed the pulsating nature of the soft spectral component, having a pulse fraction and phase different from that of the power-law component.
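    Pulse profiles like those analyzed above are obtained by epoch-folding a light curve at the known pulse period. A minimal illustration on synthetic data (the ~13.5 s period is LMC X-4's approximate spin period; the sinusoidal pulse shape and count rates are assumptions):

```python
import numpy as np

def fold_profile(times, rates, period, n_bins=16):
    """Fold a sampled light curve at the pulse period.

    Returns the mean rate in each of `n_bins` phase bins (phase in [0, 1)).
    """
    phases = (times / period) % 1.0
    bins = (phases * n_bins).astype(int)
    return np.array([rates[bins == b].mean() for b in range(n_bins)])

# Synthetic pulsar light curve: ~13.5 s sinusoidal pulsations plus noise.
rng = np.random.default_rng(1)
period = 13.5                          # seconds
t = np.arange(0.0, 5000.0, 0.5)        # 0.5 s time resolution
rate = 100 + 20*np.sin(2*np.pi*t/period) + rng.normal(0, 5, t.size)
prof = fold_profile(t, rate, period)
print(prof.size == 16)
print(prof.max() - prof.min() > 20)    # pulse amplitude clearly recovered
```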

  17. Auditory learning: a developmental method.

    Science.gov (United States)

    Zhang, Yilu; Weng, Juyang; Hwang, Wey-Shiuan

    2005-05-01

    Motivated by the human autonomous development process from infancy to adulthood, we have built a robot that develops its cognitive and behavioral skills through real-time interactions with the environment. We call such a robot a developmental robot. In this paper, we present the theory and the architecture to implement a developmental robot and discuss the related techniques that address an array of challenging technical issues. As an application, experimental results on a real robot, the self-organizing, autonomous, incremental learner (SAIL), are presented with emphasis on its audition perception and audition-related action generation. In particular, the SAIL robot conducts auditory learning from unsegmented and unlabeled speech streams without any prior knowledge about the auditory signals, such as the designated language or the phoneme models. Nor are the actions that the robot is expected to perform available before learning starts. SAIL learns the auditory commands and the desired actions from physical contacts with the environment, including the trainers.

  18. Diagnostics of basal conditions - the formation of extensive zones of surface ribs in ice-sheets and streams

    Science.gov (United States)

    Hindmarsh, Richard C. A.; Sergienko, Olga V.; Creyts, Timothy T.

    2015-04-01

    Most if not all current predictions of the response of ice-streams to changes induced by global change assume static basal conditions. This is a result of current restrictions in the remote sensing of the ice-sheet basal physical environment, which cannot resolve the small-scale phenomena believed to control the basal traction. The search therefore is on for observable structures or features that are the result of the operation of basal processes. Any successful theory of ice-sheet basal processes would need to be able to explain such phenomena associated with or caused by special properties of the basal environment. We present one class of these phenomena, and also present tentative hypotheses as to their formation. Recent high-resolution observations of Antarctic and Greenland ice-sheet topography, the computed driving stress and the inferred basal traction reveal broad-scale organization in 5-20 km band-like patterns in both quantities. The similarity of patterns on the Greenland and Antarctic ice sheets suggests that the flow of ice sheets is controlled by the same fundamental processes operating at their base, which control ice sheet sliding and are highly variable on relatively short spatial and temporal scales. The formation mechanism for these bands contains information about the operation of the sub-glacial system. There are three possible, non-exclusive causes of these ribs, which we examine from a theoretical and evidential point of view: (i) they are the surface response to similar bands in the basal topography, whose regularity would equally require an explanation in terms of basal processes; (ii) they are translating surface waves in the ice, supported by membrane stress gradients rather than by gradients in the basal resistance; (iii) the ribs are due to the development of a band-like structure in the basal shear stress distribution that is the result of a pattern-forming instability in sub-glacial till and water flow, perhaps related to

  19. Distinct effects of perceptual quality on auditory word recognition, memory formation and recall in a neural model of sequential memory

    Directory of Open Access Journals (Sweden)

    Paul Miller

    2010-06-01

    Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing-dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall.

  20. Distinct Effects of Perceptual Quality on Auditory Word Recognition, Memory Formation and Recall in a Neural Model of Sequential Memory

    Science.gov (United States)

    Miller, Paul; Wingfield, Arthur

    2010-01-01

    Adults with sensory impairment, such as reduced hearing acuity, have impaired ability to recall identifiable words, even when their memory is otherwise normal. We hypothesize that poorer stimulus quality causes weaker activity in neurons responsive to the stimulus and more time to elapse between stimulus onset and identification. The weaker activity and increased delay to stimulus identification reduce the necessary strengthening of connections between neurons active before stimulus presentation and neurons active at the time of stimulus identification. We test our hypothesis through a biologically motivated computational model, which performs item recognition, memory formation and memory retrieval. In our simulations, spiking neurons are distributed into pools representing either items or context, in two separate, but connected winner-takes-all (WTA) networks. We include associative, Hebbian learning, by comparing multiple forms of spike-timing-dependent plasticity (STDP), which strengthen synapses between coactive neurons during stimulus identification. Synaptic strengthening by STDP can be sufficient to reactivate neurons during recall if their activity during a prior stimulus rose strongly and rapidly. We find that a single poor quality stimulus impairs recall of neighboring stimuli as well as the weak stimulus itself. We demonstrate that within the WTA paradigm of word recognition, reactivation of separate, connected sets of non-word, context cells permits reverse recall. Also, only with such coactive context cells, does slowing the rate of stimulus presentation increase recall probability. We conclude that significant temporal overlap of neural activity patterns, absent from individual WTA networks, is necessary to match behavioral data for word recall. PMID:20631822
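    The core mechanism above, that weaker and later post-stimulus activity yields less synaptic strengthening, can be illustrated with the standard pair-based exponential STDP window (the parameter values are illustrative and not taken from the model):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for spike-time difference
    delta_t = t_post - t_pre (ms): pre-before-post potentiates,
    post-before-pre depresses, both decaying exponentially with |delta_t|."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

# A strong, quickly identified stimulus makes item cells fire soon after the
# context cells (small positive delta_t); a degraded stimulus delays the post
# spike, shrinking the potentiation available to support later recall.
print(stdp_dw(5.0) > stdp_dw(40.0) > 0)   # shorter delay, more strengthening
print(stdp_dw(-10.0) < 0)                 # reversed order, depression
```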

  1. AN EVALUATION OF AUDITORY LEARNING IN FILIAL IMPRINTING

    NARCIS (Netherlands)

    BOLHUIS, JJ; VANKAMPEN, HS

    The characteristics of auditory learning in filial imprinting in precocial birds are reviewed. Numerous studies have demonstrated that the addition of an auditory stimulus improves following of a visual stimulus. This paper evaluates whether there is genuine auditory imprinting, i.e. the formation

  2. [Auditory fatigue].

    Science.gov (United States)

    Sanjuán Juaristi, Julio; Sanjuán Martínez-Conde, Mar

    2015-01-01

    Given the relevance of possible hearing losses due to sound overloads and the short list of references of objective procedures for their study, we provide a technique that gives precise data about the audiometric profile and recruitment factor. Our objectives were to determine peripheral fatigue, through the cochlear microphonic response to sound pressure overload stimuli, as well as to measure recovery time, establishing parameters for differentiation with regard to current psychoacoustic and clinical studies. We used specific instruments for the study of cochlear microphonic response, plus a function generator that provided us with stimuli of different intensities and harmonic components. In Wistar rats, we first measured the normal microphonic response and then the effect of auditory fatigue on it. Using a 60dB pure tone acoustic stimulation, we obtained a microphonic response at 20dB. We then caused fatigue with 100dB of the same frequency, reaching a loss of approximately 11dB after 15 minutes; after that, the deterioration slowed and did not exceed 15dB. By means of complex random tone maskers or white noise, no fatigue was caused to the sensory receptors, not even at levels of 100dB and over an hour of overstimulation. Deterioration of peripheral perception through intense overstimulation may be due to biochemical changes of desensitisation due to exhaustion. Auditory fatigue in subjective clinical trials presumably affects supracochlear sections. The auditory fatigue tests found are not in line with those obtained subjectively in clinical and psychoacoustic trials. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Otorrinolaringología y Patología Cérvico-Facial. All rights reserved.

  3. LASER PHYSICS: Formation of XeCl excimer molecules as a result of mixing of gas streams excited by a continuous discharge

    Science.gov (United States)

    Mikhkel'soo, V. T.; Treshchalov, A. B.; Peét, V. É.; Yalviste, É. Kh; Belokon', A. A.; Braĭnin, B. I.; Khritov, K. M.

    1987-07-01

    A longitudinal continuous discharge in two independent supersonic gas streams, which were subsequently mixed, was used for nonequilibrium electronic excitation of components undergoing reactions and emitting chemiluminescence. Formation of XeCl excimer molecules as a result of mixing of excited He:Xe = 95:5 and He:HCl(Cl2) = 99:1 streams was deduced from the XeCl* fluorescence spectra (B→X and C→A bands). The steady-state concentration of the XeCl molecules in B and C states determined in the mixing region was ~10^10 cm^-3 when the pump power was 50 W, so that the efficiency of conversion of the input electrical energy into the excimer fluorescence was ~1%.

  4. Auditory Hallucination

    Directory of Open Access Journals (Sweden)

    MohammadReza Rajabi

    2003-09-01

    Auditory hallucination, or paracusia, is a form of hallucination that involves perceiving sounds without an auditory stimulus. A common form is hearing one or more talking voices, which is associated with psychotic disorders such as schizophrenia or mania. Hallucination itself is, most generally, perception of a wrong stimulus or, put more precisely, perception in the absence of a stimulus. Here we will discuss four definitions of hallucinations: 1. perceiving a stimulus without the presence of any subject; 2. hallucination proper, i.e. wrong perceptions that are not falsifications of real perception, although they manifest as a new subject and happen along with, and synchronously with, a real perception; 3. hallucination as an out-of-body perception which has no accordance with a real subject. In a stricter sense, hallucinations are defined as perceptions in a conscious and awake state in the absence of external stimuli which have qualities of real perception, in that they are vivid, substantial, and located in external objective space. We discuss these in detail here.

  5. Measuring Auditory Selective Attention using Frequency Tagging

    Directory of Open Access Journals (Sweden)

    Hari M Bharadwaj

    2014-02-01

    Frequency tagging of sensory inputs (presenting stimuli that fluctuate periodically at rates to which the cortex can phase lock) has been used to study attentional modulation of neural responses to inputs in different sensory modalities. For visual inputs, the visual steady-state response (VSSR) at the frequency modulating an attended object is enhanced, while the VSSR to a distracting object is suppressed. In contrast, the effect of attention on the auditory steady-state response (ASSR) is inconsistent across studies. However, most auditory studies analyzed results at the sensor level or used only a small number of equivalent current dipoles to fit cortical responses. In addition, most studies of auditory spatial attention used dichotic stimuli (independent signals at the ears) rather than more natural, binaural stimuli. Here, we asked whether these methodological choices help explain discrepant results. Listeners attended to one of two competing speech streams, one simulated from the left and one from the right, that were modulated at different frequencies. Using distributed source modeling of magnetoencephalography results, we estimate how spatially directed attention modulates the ASSR in neural regions across the whole brain. Attention enhances the ASSR power at the frequency of the attended stream in the contralateral auditory cortex. The attended-stream modulation frequency also drives phase-locked responses in the left (but not right) precentral sulcus (lPCS), a region implicated in control of eye gaze and visual spatial attention. Importantly, this region shows no phase locking to the distracting stream, suggesting that the lPCS is engaged in an attention-specific manner. Modeling results that take account of the geometry and phases of the cortical sources phase locked to the two streams (including hemispheric asymmetry of lPCS activity) help partly explain why past ASSR studies of auditory spatial attention yield seemingly contradictory
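    Frequency tagging rests on reading out response power at each stream's modulation rate: the stream modulated at f Hz drives a phase-locked response at f, so spectral power at f indexes that stream's representation. A minimal sketch (the 37 Hz and 43 Hz tags and the amplitudes are hypothetical, with attention simulated as a larger response to the attended stream):

```python
import numpy as np

def tagged_power(signal, fs, freq):
    """Power of a response time course at a tagging frequency, via the FFT.

    signal: 1-D recording (e.g. one MEG source estimate), fs: sampling
    rate in Hz, freq: the stream's modulation rate in Hz.
    """
    spectrum = np.abs(np.fft.rfft(signal))**2 / signal.size
    freqs = np.fft.rfftfreq(signal.size, d=1/fs)
    return spectrum[np.argmin(np.abs(freqs - freq))]

# Two competing streams tagged at 37 Hz and 43 Hz; the attended stream's
# steady-state response is simulated with a larger amplitude.
fs = 1000
t = np.arange(0.0, 2.0, 1/fs)
rng = np.random.default_rng(2)
response = (1.0*np.sin(2*np.pi*37*t)     # attended stream: enhanced ASSR
            + 0.4*np.sin(2*np.pi*43*t)   # ignored stream: weaker ASSR
            + rng.normal(0, 0.5, t.size))
print(tagged_power(response, fs, 37) > tagged_power(response, fs, 43))
```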

  6. Influence of ultrasound power on acoustic streaming and micro-bubbles formations in a low frequency sono-reactor: mathematical and 3D computational simulation.

    Science.gov (United States)

    Sajjadi, Baharak; Raman, Abdul Aziz Abdul; Ibrahim, Shaliza

    2015-05-01

    This paper aims at investigating the influence of ultrasound power amplitude on liquid behaviour in a low-frequency (24 kHz) sono-reactor. Three types of analysis were employed: (i) mechanical analysis of micro-bubbles formation and their activities/characteristics using mathematical modelling; (ii) numerical analysis of acoustic streaming, fluid flow pattern, volume fraction of micro-bubbles and turbulence using 3D CFD simulation; (iii) practical analysis of fluid flow pattern and acoustic streaming under ultrasound irradiation using Particle Image Velocimetry (PIV). In mathematical modelling, a lone micro-bubble generated under power ultrasound irradiation was mechanistically analysed. Its characteristics were illustrated as a function of bubble radius, internal temperature and pressure (hot spot conditions) and oscillation (pulsation) velocity. The results showed that ultrasound power significantly affected the conditions of hotspots and bubbles oscillation velocity. From the CFD results, it was observed that the total volume of the micro-bubbles increased by about 4.95% with each 100 W-increase in power amplitude. Furthermore, velocity of acoustic streaming increased from 29 to 119 cm/s as power increased, which was in good agreement with the PIV analysis. Copyright © 2014 Elsevier B.V. All rights reserved.

  7. DWARFS GOBBLING DWARFS: A STELLAR TIDAL STREAM AROUND NGC 4449 AND HIERARCHICAL GALAXY FORMATION ON SMALL SCALES

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Delgado, David; Rix, Hans-Walter; Maccio, Andrea V. [Max-Planck-Institut für Astronomie, Heidelberg (Germany); Romanowsky, Aaron J.; Arnold, Jacob A.; Brodie, Jean P. [UCO/Lick Observatory, University of California, Santa Cruz, CA 95064 (United States); Jay Gabany, R. [Black Bird Observatory, Mayhill, New Mexico (United States); Annibali, Francesca [Osservatorio Astronomico di Bologna, INAF, Via Ranzani 1, I-40127 Bologna (Italy); Fliri, Jürgen [LERMA, CNRS UMR 8112, Observatoire de Paris, 61 Avenue de l'Observatoire, F-75014 Paris (France); Zibetti, Stefano [Dark Cosmology Centre, Niels Bohr Institute-University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Van der Marel, Roeland P.; Aloisi, Alessandra [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Chonis, Taylor S. [Department of Astronomy, University of Texas at Austin, Texas (United States); Carballo-Bello, Julio A. [Instituto de Astrofisica de Canarias, Tenerife (Spain); Gallego-Laborda, J. [Fosca Nit Observatory, Montsec Astronomical Park, Ager (Spain); Merrifield, Michael R. [School of Physics and Astronomy, University of Nottingham, University Park, Nottingham NG7 2RD (United Kingdom)

    2012-04-01

    A candidate diffuse stellar substructure was previously reported in the halo of the nearby dwarf starburst galaxy NGC 4449 by Karachentsev et al. We map and analyze this feature using a unique combination of deep integrated-light images from the BlackBird 0.5 m telescope, and high-resolution wide-field images from the 8 m Subaru Telescope, which resolve the nebulosity into a stream of red giant branch stars, and confirm its physical association with NGC 4449. The properties of the stream imply a massive dwarf spheroidal progenitor, which after complete disruption will deposit an amount of stellar mass that is comparable to the existing stellar halo of the main galaxy. The stellar mass ratio between the two galaxies is ≈1:50, while the indirectly measured dynamical mass ratio, when including dark matter, may be ≈1:10-1:5. This system may thus represent a 'stealth' merger, where an infalling satellite galaxy is nearly undetectable by conventional means, yet has a substantial dynamical influence on its host galaxy. This singular discovery also suggests that satellite accretion can play a significant role in building up the stellar halos of low-mass galaxies, and possibly in triggering their starbursts.

  8. A songbird forebrain area potentially involved in auditory ...

    Indian Academy of Sciences (India)

    PRAKASH KUMAR G

    Pinaud R and Terleph T A 2008 A songbird forebrain area potentially involved in auditory discrimination and memory formation; J. Biosci. 33(1) 145

  9. Objective assessment of stream segregation abilities of CI users as a function of electrode separation

    DEFF Research Database (Denmark)

    Paredes Gallardo, Andreu; Madsen, Sara Miay Kim; Dau, Torsten

    Auditory streaming is a perceptual process by which the human auditory system organizes sounds from different sources into perceptually meaningful elements. Segregation of sound sources is important, among others, for understanding speech in noisy environments, which is especially challenging...... assessed obligatory stream segregation, little attention has been given to voluntary stream segregation, a process where the listener actively tries to segregate the sounds. It is therefore unclear whether CI users are able to experience voluntary stream segregation as a function of electrode separation...

  10. Streams with Strahler Stream Order

    Data.gov (United States)

    Minnesota Department of Natural Resources — Stream segments with Strahler stream order values assigned. As of 01/08/08 the linework is from the DNR24K stream coverages and will not match the updated...
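
    Strahler order, as assigned in this dataset, can be computed recursively over a stream network's tributary tree: headwater segments are order 1, and a segment's order increases by one only when its two highest-order tributaries tie. A small sketch (the toy network below is invented for illustration):

```python
def strahler(children, node):
    """Strahler order of `node` in a stream network given as a dict
    mapping each segment to its upstream tributary segments."""
    kids = children.get(node, [])
    if not kids:
        return 1  # headwater segment
    orders = sorted((strahler(children, k) for k in kids), reverse=True)
    # Order increases only when the two highest tributary orders are equal.
    if len(orders) > 1 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]

# Toy network: two first-order headwaters join (giving order 2), then the
# combined segment meets another first-order tributary (order stays 2).
net = {"outlet": ["confluence", "trib"], "confluence": ["head1", "head2"]}
order = strahler(net, "outlet")
```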

  11. Quantifying attentional modulation of auditory-evoked cortical responses from single-trial electroencephalography

    Directory of Open Access Journals (Sweden)

    Inyong eChoi

    2013-04-01

    Full Text Available Selective auditory attention is essential for human listeners to be able to communicate in multi-source environments. Selective attention is known to modulate the neural representation of the auditory scene, boosting the representation of a target sound relative to the background, but the strength of this modulation, and the mechanisms contributing to it, are not well understood. Here, listeners performed a behavioral experiment demanding sustained, focused spatial auditory attention while we measured cortical responses using electroencephalography (EEG). We presented three concurrent melodic streams; listeners were asked to attend and analyze the melodic contour of one of the streams, randomly selected from trial to trial. In a control task, listeners heard the same sound mixtures, but performed the contour judgment task on a series of visual arrows, ignoring all auditory streams. We found that the cortical responses could be fit as a weighted sum of event-related potentials evoked by the stimulus onsets in the competing streams. The weighting to a given stream was roughly 10 dB higher when it was attended than when another auditory stream was attended; during the visual task, the auditory gains were intermediate. We then used a template-matching classification scheme to classify single-trial EEG results. We found that in all subjects, we could determine which stream the subject was attending significantly better than by chance. By directly quantifying the effect of selective attention on auditory cortical responses, these results reveal that focused auditory attention both suppresses the response to an unattended stream and enhances the response to an attended stream. The single-trial classification results add to the growing body of literature suggesting that auditory attentional modulation is sufficiently robust that it could be used as a control mechanism in brain-computer interfaces.
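
    The fitting step described above, modeling the EEG as a weighted sum of ERPs evoked by each stream's onsets, amounts to linear least squares on convolved onset trains. A toy sketch with synthetic data (the ERP template, onset timing, weights and noise level are all illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
fs, dur = 100, 10.0
n = int(fs * dur)

# Hypothetical ERP template (damped oscillation) evoked by each tone onset.
erp = np.exp(-np.arange(30) / 8.0) * np.sin(np.arange(30) / 3.0)

def onset_train(period, phase):
    x = np.zeros(n)
    x[phase::period] = 1.0
    return x

# Two competing streams with interleaved onsets.
trains = [onset_train(80, 5), onset_train(80, 45)]
X = np.column_stack([np.convolve(tr, erp)[:n] for tr in trains])

true_w = np.array([3.0, 1.0])        # attended stream weighted ~10 dB higher
eeg = X @ true_w + 0.2 * rng.normal(size=n)

# Least-squares estimate of the per-stream weights.
w, *_ = np.linalg.lstsq(X, eeg, rcond=None)
```

    The recovered weights estimate the per-stream gains; the study's classification step then compares such fits against attended-left vs. attended-right templates on single trials.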

  12. An interactive model of auditory-motor speech perception.

    Science.gov (United States)

    Liebenthal, Einat; Möttönen, Riikka

    2017-12-18

    Mounting evidence indicates a role in the perceptual decoding of speech for the dorsal auditory stream, which connects temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast between three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Late Quaternary stream piracy and strath terrace formation along the Belle Fourche and lower Cheyenne Rivers, South Dakota and Wyoming

    Science.gov (United States)

    Stamm, John F.; Hendricks, Robert R.; Sawyer, J. Foster; Mahan, Shannon A.; Zaprowski, Brent J.; Geibel, Nicholas M.; Azzolini, David C.

    2013-09-01

    Stream piracy substantially affected the geomorphic evolution of the Missouri River watershed and drainages within, including the Little Missouri, Cheyenne, Belle Fourche, Bad, and White Rivers. The ancestral Cheyenne River eroded headward in an annular pattern around the eastern and southern Black Hills and pirated the headwaters of the ancestral Bad and White Rivers after ~ 660 ka. The headwaters of the ancestral Little Missouri River were pirated by the ancestral Belle Fourche River, a tributary to the Cheyenne River that currently drains much of the northern Black Hills. Optically stimulated luminescence (OSL) dating techniques were used to estimate the timing of this piracy event at ~ 22-21 ka. The geomorphic evolution of the Cheyenne and Belle Fourche Rivers is also expressed by regionally recognized strath terraces that include (from oldest to youngest) the Sturgis, Bear Butte, and Farmingdale terraces. Radiocarbon and OSL dates from fluvial deposits on these terraces indicate incision to the level of the Bear Butte terrace by ~ 63 ka, incision to the level of the Farmingdale terrace at ~ 40 ka, and incision to the level of the modern channel after ~ 12-9 ka. Similar dates of terrace incision have been reported for the Laramie and Wind River Ranges. Hypothesized causes of incision are the onset of colder climate during the middle Wisconsinan and the transition to the full-glacial climate of the late-Wisconsinan/Pinedale glaciation. Incision during the Holocene of the lower Cheyenne River is as much as ~ 80 m and is 3 to 4 times the magnitude of incision at ~ 63 ka and ~ 40 ka. The magnitude of incision during the Holocene might be due to a combined effect of three geomorphic processes acting in concert: glacial isostatic rebound in lower reaches (~ 40 m), a change from glacial to interglacial climate, and adjustments to increased watershed area resulting from piracy of the ancestral headwaters of the Little Missouri River.

  14. Processing Temporal Modulations in Binaural and Monaural Auditory Stimuli by Neurons in the Inferior Colliculus and Auditory Cortex

    OpenAIRE

    Fitzpatrick, Douglas C.; Roberts, Jason M.; Kuwada, Shigeyuki; Kim, Duck O.; Filipovic, Blagoje

    2009-01-01

    Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating inter...

  15. Neuromechanistic Model of Auditory Bistability.

    Directory of Open Access Journals (Sweden)

    James Rankin

    2015-11-01

    Full Text Available Sequences of higher-frequency A and lower-frequency B tones repeating in an ABA- triplet pattern are widely used to study auditory streaming. One may experience either an integrated percept, a single ABA-ABA- stream, or a segregated percept, separate but simultaneous streams A-A-A-A- and -B---B--. During minutes-long presentations, subjects may report irregular alternations between these interpretations. We combine neuromechanistic modeling and psychoacoustic experiments to study these persistent alternations and to characterize the effects of manipulating stimulus parameters. Unlike many phenomenological models with abstract, percept-specific competition and fixed inputs, our network model comprises neuronal units with sensory-feature-dependent inputs that mimic the pulsatile-like A1 responses to tones in the ABA- triplets. It embodies a neuronal computation for percept competition thought to occur beyond primary auditory cortex (A1). Mutual inhibition, adaptation and noise are implemented. We include slow NMDA recurrent excitation for local temporal memory that enables linkage across sound gaps from one triplet to the next. Percepts in our model are identified in the firing patterns of the neuronal units. We predict with the model that manipulations of the frequency difference between tones A and B should affect the dominance durations of the stronger percept, the one dominant for a larger fraction of the time, more than those of the weaker percept, a property that has been previously established and generalized across several visual bistable paradigms. We confirm the qualitative prediction with our psychoacoustic experiments and use the behavioral data to further constrain and improve the model, achieving quantitative agreement between experimental and modeling results. Our work and model provide a platform that can be extended to consider other stimulus conditions, including the effects of context and volition.
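
    The core ingredients of such a model, two percept units coupled by mutual inhibition, slowed by adaptation and perturbed by noise, can be sketched in a few lines. All parameter values below are illustrative, not those fitted in the paper:

```python
import math
import random

def simulate(bias=0.05, T=200.0, dt=0.01, seed=0):
    """Two percept units (integrated vs. segregated) coupled by mutual
    inhibition, with slow adaptation and additive noise; returns the
    number of dominance switches over the simulated interval."""
    rng = random.Random(seed)
    r = [0.6, 0.4]                      # unit firing rates
    a = [0.0, 0.0]                      # slow adaptation variables
    beta, tau_a, phi, noise = 1.1, 20.0, 1.0, 0.1
    inputs = [1.0 + bias, 1.0 - bias]   # stimulus bias, e.g. from the A-B gap
    f = lambda x: 1.0 / (1.0 + math.exp(-8.0 * (x - 0.5)))  # rate nonlinearity
    dom, switches = 0, 0
    for _ in range(int(T / dt)):
        for i in (0, 1):
            j = 1 - i
            drive = inputs[i] - beta * r[j] - phi * a[i] + noise * rng.gauss(0.0, 1.0)
            r[i] += dt * (-r[i] + f(drive))
            a[i] += dt * (r[i] - a[i]) / tau_a
        new_dom = 0 if r[0] > r[1] else 1
        if new_dom != dom:
            switches += 1
            dom = new_dom
    return switches

switches = simulate()
```

    Dominance durations per percept can be collected from the same loop by timing the intervals between switches; sweeping the input bias (standing in for the A-B frequency difference) then probes the asymmetric effect on the stronger percept described in the abstract.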

  16. LHCb trigger streams optimization

    Science.gov (United States)

    Derkach, D.; Kazeev, N.; Neychev, R.; Panin, A.; Trofimov, I.; Ustyuzhanin, A.; Vesterinen, M.

    2017-10-01

    The LHCb experiment stores around 10^11 collision events per year. A typical physics analysis deals with a final sample of up to 10^7 events. Event preselection algorithms (lines) are used for data reduction. Since the data are stored in a format that requires sequential access, the lines are grouped into several output file streams, in order to increase the efficiency of user analysis jobs that read these data. The scheme efficiency heavily depends on the stream composition. By putting similar lines together and balancing the stream sizes it is possible to reduce the overhead. We present a method for finding an optimal stream composition. The method is applied to a part of the LHCb data (Turbo stream) on the stage where it is prepared for user physics analysis. This results in an expected improvement of 15% in the speed of user analysis jobs, and will be applied on data to be recorded in 2017.
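
    The optimization target sketched here, grouping trigger lines into streams so that sequential-access analysis jobs read less unneeded data, can be illustrated with a toy cost model. The line sizes and job selections below are invented, not LHCb's:

```python
# Toy cost model: each analysis job must read, sequentially, every stream
# that contains at least one trigger line it needs. Sizes are in an
# arbitrary unit (e.g. events selected by each line).
line_size = {"A": 5, "B": 4, "C": 6, "D": 5}
jobs = [{"A", "B"}, {"A", "B"}, {"C", "D"}]   # lines each job uses

def read_cost(streams):
    """Total data volume read by all jobs for a given stream composition."""
    total = 0
    for need in jobs:
        for stream in streams:
            if need & stream:                  # job must read this stream
                total += sum(line_size[ln] for ln in stream)
    return total

grouped = [{"A", "B"}, {"C", "D"}]   # similar lines share a stream
mixed = [{"A", "C"}, {"B", "D"}]     # similar lines split across streams
c_grouped, c_mixed = read_cost(grouped), read_cost(mixed)
```

    Putting lines used by the same jobs into the same stream cuts the read volume roughly in half in this toy case, which is the effect the paper's optimization exploits at scale.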

  17. Water Fountains in the Sky: Streaming Water Jets from Aging Star Provide Clues to Planetary-Nebula Formation

    Science.gov (United States)

    2002-06-01

    Astronomers using the National Science Foundation's Very Long Baseline Array (VLBA) radio telescope have found that an aging star is spewing narrow, rotating streams of water molecules into space, like a jerking garden hose that has escaped its owner's grasp. The discovery may help resolve a longstanding mystery about how the stunningly beautiful objects called planetary nebulae are formed. Artist's conception of W43A, with the aging star surrounded by a disk of material and a precessing, twisted jet of molecules streaming away from it in two directions. Credit: Kirk Woellert/National Science Foundation. The astronomers used the VLBA, operated by the National Radio Astronomy Observatory, to study a star called W43A. W43A is about 8,500 light-years from Earth in the direction of the constellation Aquila, the eagle. This star has come to the end of its normal lifetime and, astronomers believe, is about to start forming a planetary nebula, a shell of brightly glowing gas lit by the hot ember into which the star will collapse. "A prime mystery about planetary nebulae is that many are not spherical even though the star from which they are ejected is a sphere," said Phillip Diamond, director of the MERLIN radio observatory at Jodrell Bank in England, and one of the researchers using the VLBA. "The spinning jets of water molecules we found coming from this star may be one mechanism for producing the structures seen in many planetary nebulae," he added. The research team, led by Hiroshi Imai of Japan's National Astronomical Observatory (now at the Joint Institute for VLBI in Europe, based in the Netherlands), also includes Kumiko Obara of the Mizusawa Astrogeodynamics Observatory and Kagoshima University; Toshihiro Omodaka, also of Kagoshima University; and Tetsuo Sasao of the Japanese National Astronomical Observatory. The scientists reported their findings in the June 20 issue of the scientific journal Nature. As stars similar to our Sun

  18. Stream Crossings

    Data.gov (United States)

    Vermont Center for Geographic Information — Physical measurements and attributes of stream crossing structures and adjacent stream reaches which are used to provide a relative rating of aquatic organism...

  19. Formation and stability of manganese-doped ZnS quantum dot monolayers determined by QCM-D and streaming potential measurements.

    Science.gov (United States)

    Oćwieja, Magdalena; Matras-Postołek, Katarzyna; Maciejewska-Prończuk, Julia; Morga, Maria; Adamczyk, Zbigniew; Sovinska, Svitlana; Żaba, Adam; Gajewska, Marta; Król, Tomasz; Cupiał, Klaudia; Bredol, Michael

    2017-10-01

    Manganese-doped ZnS quantum dots (QDs) stabilized by cysteamine hydrochloride were successfully synthesized. Their thorough physicochemical characteristics were acquired using UV-Vis absorption and photoluminescence spectroscopy, X-ray diffraction, dynamic light scattering (DLS), high-resolution transmission electron microscopy (HR-TEM), energy dispersive spectroscopy (EDS) and Fourier transform infrared (FT-IR) spectroscopy. The average particle size, derived from HR-TEM, was 3.1 nm, which agrees with the hydrodynamic diameter acquired by DLS, equal to 3-4 nm depending on ionic strength. The quantum dots also exhibited a large positive zeta potential, varying between 75 and 36 mV for ionic strengths of 10^-4 and 10^-2 M, respectively (at pH 6.2), and an intense luminescent emission at 590 nm. The quantum yield was 31% and the optical band gap energy was 4.26 eV. The kinetics of QD monolayer formation on silica substrates (silica sensors and oxidized silicon wafers) under convection-controlled transport was quantitatively evaluated by quartz crystal microbalance (QCM) and streaming potential measurements. A high stability of the monolayer for ionic strengths of 10^-4 and 10^-2 M was confirmed in these measurements. The experimental data were adequately reflected by the extended random sequential adsorption model (eRSA). Additionally, thorough electrokinetic characteristics of the QD monolayers and their stability for various ionic strengths and pH were acquired by streaming potential measurements carried out under in situ conditions. These results were quantitatively interpreted in terms of a three-dimensional (3D) electrokinetic model that furnished the bulk zeta potential of the particles at high ionic strengths, which is impractical to obtain by other experimental techniques. It is concluded that these results can be used for the design of biosensors of controlled monolayer structure capable of binding various ligands via covalent as well as electrostatic interactions.
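
    The random sequential adsorption (RSA) picture underlying the eRSA model used in this record can be illustrated with a toy hard-disc simulation on a flat substrate; for discs, the known jamming coverage is ≈0.547, which this finite, bounded sketch approaches from below. Parameters are arbitrary, and the paper's extended model additionally accounts for electrostatic interactions:

```python
import math
import random

def rsa_coverage(radius=0.03, attempts=20000, seed=42):
    """Random sequential adsorption of hard discs on a unit square:
    each trial position is accepted only if the new disc overlaps no
    previously adsorbed disc (no desorption, no surface diffusion)."""
    rng = random.Random(seed)
    placed = []
    min_d2 = (2.0 * radius) ** 2
    for _ in range(attempts):
        x = rng.uniform(radius, 1.0 - radius)
        y = rng.uniform(radius, 1.0 - radius)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_d2 for px, py in placed):
            placed.append((x, y))
    return len(placed) * math.pi * radius ** 2  # covered fraction of the square

theta = rsa_coverage()
```

    Because acceptance becomes rare near jamming and edges exclude some placements, the coverage saturates somewhat below the ideal 0.547 within a finite number of attempts.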

  20. The shadow of a doubt ? Evidence for perceptuo-motor linkage during auditory and audiovisual close shadowing

    Directory of Open Access Journals (Sweden)

    Lucie eScarbel

    2014-06-01

    Full Text Available One classical argument in favor of a functional role of the motor system in speech perception comes from the close shadowing task in which a subject has to identify and to repeat as quickly as possible an auditory speech stimulus. The fact that close shadowing can occur very rapidly and much faster than manual identification of the speech target is taken to suggest that perceptually-induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed motor response in a close-shadowing task. To this aim, both oral and manual responses were evaluated during the perception of auditory and audio-visual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but it also appeared that they were less accurate in noise, which suggests that motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was however observed between modality and response. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision may be available.

  1. Spectral Weighting of Binaural Cues: Effect of Bandwidth and Stream Segregation

    DEFF Research Database (Denmark)

    Ahrens, Axel; Joshi, Suyash Narendra; Epp, Bastian

    Anecdotally, normal hearing listeners can attend to a single sound source in the presence of other sound sources by forming auditory objects. This is commonly referred to as the cocktail party effect. It is known that listeners use, among others, interaural disparities in time and intensity...... (referred to as ITD and ILD, respectively) to localize a sound source. An open question is, however, how ITD and ILD information is integrated over frequency, and how streaming affects auditory object formation using interaural disparities. ITD weighting functions were previously derived using inverted...... sensitivity thresholds of narrowband signals (Stern et al., 1988). This method does not take binaural interference (McFadden and Pasanen, 1976) into account and might not be applicable to more realistic broadband signals....

  2. Modeling auditory evoked brainstem responses to transient stimuli

    DEFF Research Database (Denmark)

    Rønne, Filip Munch; Dau, Torsten; Harte, James

    2012-01-01

    A quantitative model is presented that describes the formation of auditory brainstem responses (ABR) to tone pulses, clicks and rising chirps as a function of stimulation level. The model computes the convolution of the instantaneous discharge rates using the “humanized” nonlinear auditory-nerve ...

  3. Effects of selective attention on the electrophysiological representation of concurrent sounds in the human auditory cortex.

    Science.gov (United States)

    Bidet-Caulet, Aurélie; Fischer, Catherine; Besle, Julien; Aguera, Pierre-Emmanuel; Giard, Marie-Helene; Bertrand, Olivier

    2007-08-29

    In noisy environments, we use auditory selective attention to actively ignore distracting sounds and select relevant information, as during a cocktail party to follow one particular conversation. The present electrophysiological study aims at deciphering the spatiotemporal organization of the effect of selective attention on the representation of concurrent sounds in the human auditory cortex. Sound onset asynchrony was manipulated to induce the segregation of two concurrent auditory streams. Each stream consisted of amplitude modulated tones at different carrier and modulation frequencies. Electrophysiological recordings were performed in epileptic patients with pharmacologically resistant partial epilepsy, implanted with depth electrodes in the temporal cortex. Patients were presented with the stimuli while they either performed an auditory distracting task or actively selected one of the two concurrent streams. Selective attention was found to affect steady-state responses in the primary auditory cortex, and transient and sustained evoked responses in secondary auditory areas. The results provide new insights on the neural mechanisms of auditory selective attention: stream selection during sound rivalry would be facilitated not only by enhancing the neural representation of relevant sounds, but also by reducing the representation of irrelevant information in the auditory cortex. Finally, they suggest a specialization of the left hemisphere in the attentional selection of fine-grained acoustic information.

  4. The Process of Auditory Distraction: Disrupted Attention and Impaired Recall in a Simulated Lecture Environment

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E.

    2013-01-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for…

  5. A Face-on Accretion System in High-mass Star Formation: Possible Dusty Infall Streams within 100 AU

    Science.gov (United States)

    Motogi, Kazuhito; Hirota, Tomoya; Sorai, Kazuo; Yonekura, Yoshinori; Sugiyama, Koichiro; Honma, Mareki; Niinuma, Kotaro; Hachisuka, Kazuya; Fujisawa, Kenta; Walsh, Andrew J.

    2017-11-01

    We report on interferometric observations of a face-on accretion system around the high-mass young stellar object, G353.273+0.641. The innermost accretion system of 100 au radius was resolved in a 45 GHz continuum image taken with the Jansky-Very Large Array. Our spectral energy distribution analysis indicated that the continuum could be explained by optically thick dust emission. The total mass of the dusty system is ∼0.2 M⊙ at minimum and up to a few M⊙ depending on the dust parameters. 6.7 GHz CH3OH masers associated with the same system were also observed with the Australia Telescope Compact Array. The masers showed a spiral-like, non-axisymmetric distribution with a systematic velocity gradient. The line-of-sight velocity field is explained by an infall motion along a parabolic streamline that falls onto the equatorial plane of the face-on system. The streamline is quasi-radial and reaches the equatorial plane at a radius of 16 au. This is clearly smaller than that of typical accretion disks in high-mass star formation, indicating that the initial angular momentum was very small, or the CH3OH masers selectively trace accreting material that has small angular momentum. In the former case, the initial specific angular momentum is estimated to be 8 × 10^20 (M*/10 M⊙)^0.5 cm^2 s^-1, or a significant fraction of the initial angular momentum was removed outside of 100 au. The physical origin of such a streamline is still an open question and will be constrained by the higher-resolution (∼10 mas) thermal continuum and line observations with ALMA long baselines.

  6. BAER - brainstem auditory evoked response

    Science.gov (United States)

    ... auditory potentials; Brainstem auditory evoked potentials; Evoked response audiometry; Auditory brainstem response; ABR; BAEP ... Normal results vary. Results will depend on the person and the instruments used to perform the test.

  7. Auditory Processing Disorder (For Parents)

    Science.gov (United States)

    ... role. Auditory cohesion problems: This is when higher-level listening tasks are difficult. Auditory cohesion skills — drawing inferences from conversations, understanding riddles, or comprehending verbal math problems — require heightened auditory processing and language levels. ...

  8. Influence of Auditory and Haptic Stimulation in Visual Perception

    Directory of Open Access Journals (Sweden)

    Shunichi Kawabata

    2011-10-01

    Full Text Available While many studies have shown that visual information affects perception in the other modalities, little is known about how auditory and haptic information affect visual perception. In this study, we investigated how auditory, haptic, or combined auditory and haptic stimulation affects visual perception. We used a behavioral task based on the subjects observing the phenomenon of two identical visual objects moving toward each other, overlapping and then continuing their original motion. Subjects may perceive the objects as either streaming past each other or bouncing and reversing their direction of motion. With only the visual motion stimulus, subjects usually report the objects as streaming, whereas if a sound or flash is played when the objects touch each other, subjects report the objects as bouncing (the Bounce-Inducing Effect). In this study, “auditory stimulation”, “haptic stimulation” or “haptic and auditory stimulation” was presented at various times relative to the visual overlap of the objects. Our results show that the bouncing rate was highest when haptic and auditory stimulation were presented together. This result suggests that the Bounce-Inducing Effect is enhanced by simultaneous presentation of multiple modalities alongside visual motion. In the future, a neuroscience approach (e.g., TMS, fMRI) may be required to elucidate the brain mechanisms underlying these effects.

  9. Resizing Auditory Communities

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2012-01-01

    Heard through the ears of the Canadian composer and music teacher R. Murray Schafer the ideal auditory community had the shape of a village. Schafer’s work with the World Soundscape Project in the 70s represent an attempt to interpret contemporary environments through musical and auditory...

  10. Anatomical Pathways for Auditory Memory in Primates

    Directory of Open Access Journals (Sweden)

    Monica Munoz-Lopez

    2010-10-01

    Full Text Available Episodic memory, or the ability to store context-rich information about everyday events, depends on the hippocampal formation (entorhinal cortex, subiculum, presubiculum, parasubiculum, hippocampus proper, and dentate gyrus). A substantial amount of behavioral-lesion and anatomical studies have contributed to our understanding of how visual stimuli are retained in episodic memory. However, whether auditory memory is organized similarly is still unclear. One hypothesis is that, like the ‘visual ventral stream’ for which the connections of the inferior temporal gyrus with the perirhinal cortex are necessary for visual recognition in monkeys, direct connections between the auditory association areas of the superior temporal gyrus and the hippocampal formation and with the parahippocampal region (temporal pole, perirhinal, and posterior parahippocampal cortices) might also underlie recognition memory for sounds. Alternatively, the anatomical organization of memory could be different in audition. This alternative ‘indirect stream’ hypothesis posits that, unlike in the visual association cortex, most projections from auditory association cortex make one or more synapses in intermediate, polymodal areas, where they may integrate information from other sensory modalities, before reaching the medial temporal memory system. This review considers anatomical studies that can support either or both hypotheses, focusing on studies of the primate brain that have reported not only direct connections between auditory association areas and medial temporal areas but, importantly, also possible indirect pathways by which auditory information may reach the medial temporal lobe memory system.

  11. Auditory-motor learning influences auditory memory for music.

    Science.gov (United States)

    Brown, Rachel M; Palmer, Caroline

    2012-05-01

    In two experiments, we investigated how auditory-motor learning influences performers' memory for music. Skilled pianists learned novel melodies in four conditions: auditory only (listening), motor only (performing without sound), strongly coupled auditory-motor (normal performance), and weakly coupled auditory-motor (performing along with auditory recordings). Pianists' recognition of the learned melodies was better following auditory-only or auditory-motor (weakly coupled and strongly coupled) learning than following motor-only learning, and better following strongly coupled auditory-motor learning than following auditory-only learning. Auditory and motor imagery abilities modulated the learning effects: Pianists with high auditory imagery scores had better recognition following motor-only learning, suggesting that auditory imagery compensated for missing auditory feedback at the learning stage. Experiment 2 replicated the findings of Experiment 1 with melodies that contained greater variation in acoustic features. Melodies that were slower and less variable in tempo and intensity were remembered better following weakly coupled auditory-motor learning. These findings suggest that motor learning can aid performers' auditory recognition of music beyond auditory learning alone, and that motor learning is influenced by individual abilities in mental imagery and by variation in acoustic features.

  12. Music and the auditory brain: where is the connection?

    Directory of Open Access Journals (Sweden)

    Israel eNelken

    2011-09-01

    Full Text Available Sound processing by the auditory system is understood in unprecedented detail, even compared with sensory coding in the visual system. Nevertheless, we do not yet understand how some of the simplest perceptual properties of sounds are coded in neuronal activity. This poses serious difficulties for linking neuronal responses in the auditory system to music processing, since music operates on abstract representations of sounds. Paradoxically, although perceptual representations of sounds most probably occur high in the auditory system or even beyond it, neuronal responses are strongly affected by the temporal organization of sound streams even in subcortical stations. Thus, to the extent that music is organized sound, it is the organization, rather than the sound, that is represented first in the auditory brain.

  13. Sensory Intelligence for Extraction of an Abstract Auditory Rule: A Cross-Linguistic Study.

    Science.gov (United States)

    Guo, Xiao-Tao; Wang, Xiao-Dong; Liang, Xiu-Yuan; Wang, Ming; Chen, Lin

    2018-02-21

    In a complex linguistic environment, while speech sounds can greatly vary, some shared features are often invariant. These invariant features constitute so-called abstract auditory rules. Our previous study has shown that, with auditory sensory intelligence, the human brain can automatically extract abstract auditory rules from the speech sound stream, presumably serving as the neural basis for speech comprehension. However, whether the sensory intelligence for extraction of abstract auditory rules in speech is inherent or experience-dependent remains unclear. To address this issue, we constructed a complex speech sound stream using auditory materials in Mandarin Chinese, in which syllables had a flat lexical tone but differed in other acoustic features to form an abstract auditory rule. This rule was occasionally and randomly violated by syllables with a rising, dipping, or falling tone. We found that both Chinese and foreign speakers detected the violations of the abstract auditory rule in the speech sound stream at a pre-attentive stage, as revealed by whole-head recordings of mismatch negativity (MMN) in a passive paradigm. However, MMNs peaked earlier in Chinese speakers than in foreign speakers. Furthermore, Chinese speakers showed different MMN peak latencies for the three deviant types, which paralleled recognition points. These findings indicate that the sensory intelligence for extraction of abstract auditory rules in speech sounds is innate but shaped by language experience.

  14. Stream systems.

    Science.gov (United States)

    Jack E. Williams; Gordon H. Reeves

    2006-01-01

    Restored, high-quality streams provide innumerable benefits to society. In the Pacific Northwest, high-quality stream habitat often is associated with an abundance of salmonid fishes such as chinook salmon (Oncorhynchus tshawytscha), coho salmon (O. kisutch), and steelhead (O. mykiss). Many other native...

  15. Formats

    Directory of Open Access Journals (Sweden)

    Gehmann, Ulrich

    2012-03-01

    Full Text Available In the following, a new conceptual framework for investigating today's "technical" phenomena shall be introduced: that of formats. The thesis is that processes of formatting account for our present conditions of life and will continue to do so in the near future. These are processes whose foundations were laid in modernity and which will unfold further for the time being. These processes are embedded in the format of the value chain, a circumstance that makes them resilient to change. In addition, they are resilient in themselves, since they form interconnected systems of reciprocal causal circuits. This leads to an overall situation in which our entire "Lebenswelt" has become formatted to an extent we do not fully realize, one that even influences our very perception of it.

  16. Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Jafari

    2002-07-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  17. Review: Auditory Integration Training

    Directory of Open Access Journals (Sweden)

    Zahra Ja'fari

    2003-01-01

    Full Text Available Auditory integration training (AIT) is a hearing enhancement training process for sensory input anomalies found in individuals with autism, attention deficit hyperactivity disorder, dyslexia, hyperactivity, learning disability, language impairments, pervasive developmental disorder, central auditory processing disorder, attention deficit disorder, depression, and hyperacute hearing. AIT, recently introduced in the United States, has received much notice of late following the release of The Sound of a Miracle, by Annabel Stehli. In her book, Mrs. Stehli describes before-and-after auditory integration training experiences with her daughter, who was diagnosed at age four as having autism.

  18. Auditory Grouping Mechanisms Reflect a Sound’s Relative Position in a Sequence

    Directory of Open Access Journals (Sweden)

    Kevin Thomas Hill

    2012-06-01

    Full Text Available The human brain uses acoustic cues to decompose complex auditory scenes into their components. For instance, to improve communication, a listener can select an individual stream, such as a talker in a crowded room, based on cues such as pitch or location. Despite numerous investigations into auditory streaming, few have demonstrated clear correlates of perception; instead, in many studies perception covaries with changes in physical stimulus properties (e.g., frequency separation). In the current report, we employ a classic ABA streaming paradigm and human electroencephalography (EEG) to disentangle the individual contributions of stimulus properties from changes in auditory perception. We find that changes in perceptual state (that is, the perception of one versus two auditory streams with physically identical stimuli) and changes in physical stimulus properties are reflected independently in the event-related potential (ERP) during overlapping time windows. These findings emphasize the necessity of controlling for stimulus properties when studying perceptual effects of streaming. Furthermore, the independence of the perceptual effect from stimulus properties suggests that the neural correlates of streaming reflect a tone's relative position within a larger sequence (1st, 2nd, 3rd) rather than its acoustics. By clarifying the role of stimulus attributes along with perceptual changes, this study helps explain precisely how the brain is able to distinguish a sound source of interest in an auditory scene.

  19. Aktiverende Undervisning i auditorier

    DEFF Research Database (Denmark)

    Parus, Judith

    Workshop on experiences with and the use of activating teaching methods in lecture halls and with large classes. Which methods have worked well, and which poorly? What considerations should one make?

  20. Stream Evaluation

    Data.gov (United States)

    Kansas Data Access and Support Center — Digital representation of the map accompanying the "Kansas stream and river fishery resource evaluation" (R.E. Moss and K. Brunson, 1981.U.S. Fish and Wildlife...

  1. ADAPTIVE STREAMING OVER HTTP (DASH) FOR VIDEO STREAMING APPLICATIONS

    Directory of Open Access Journals (Sweden)

    I Made Oka Widyantara

    2015-12-01

    Full Text Available This paper analyzes an Internet-based streaming video service over communication media with variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH), which runs over the Internet and adapts to the Hypertext Transfer Protocol (HTTP). DASH technology allows a video to be segmented into several packages to be streamed. The initial DASH stage compresses the source video to a lower bit rate using an H.26x video codec. The compressed video is then segmented using MP4Box, which generates streaming packets of a specified duration. These packets are assembled into a streaming media manifest, the Media Presentation Description (MPD), in the format known as MPEG-DASH. The MPEG-DASH video streams run on a platform with the integrated bitdash player. With this scheme, the video has several bit-rate variants, which gives rise to the concept of scalability for streaming video services on the client side. The main target of the mechanism is smooth display of the MPEG-DASH streaming video on the client. The simulation results show that the scalable MPEG-DASH video streaming scheme improves the quality of the image displayed on the client side, where video buffering can be kept constant and smooth for the duration of video playback.
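The MPD manifest mentioned in the abstract above is an XML document that lists the available bit-rate variants (representations) so the client can switch between them adaptively. As a minimal sketch of that structure, the following builds a toy MPD with Python's standard library; the representation IDs, bandwidths, resolutions, codec string, and segment file names are illustrative assumptions, not values from the paper:

```python
# Build a minimal MPEG-DASH Media Presentation Description (MPD).
# All identifiers, bandwidths, and file names below are illustrative.
import xml.etree.ElementTree as ET


def build_mpd(representations):
    """representations: list of (rep_id, bandwidth_bps, width, height)."""
    mpd = ET.Element("MPD", {
        "xmlns": "urn:mpeg:dash:schema:mpd:2011",
        "type": "static",
        "mediaPresentationDuration": "PT60S",
        "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
    })
    period = ET.SubElement(mpd, "Period")
    adaptation_set = ET.SubElement(period, "AdaptationSet", mimeType="video/mp4")
    for rep_id, bandwidth, width, height in representations:
        rep = ET.SubElement(adaptation_set, "Representation", {
            "id": rep_id,
            "bandwidth": str(bandwidth),
            "width": str(width),
            "height": str(height),
            "codecs": "avc1.42c01e",
        })
        # Each representation points at its own encoded media file.
        ET.SubElement(rep, "BaseURL").text = f"video_{rep_id}.mp4"
    return ET.tostring(mpd, encoding="unicode")


# Three bit-rate variants, from which the client picks adaptively.
mpd_xml = build_mpd([("low", 500_000, 640, 360),
                     ("mid", 1_500_000, 1280, 720),
                     ("high", 4_000_000, 1920, 1080)])
print(mpd_xml)
```

A real deployment would generate this manifest (plus the segmented media) with a packager such as MP4Box rather than by hand; the sketch only shows why multiple bit-rate variants make client-side scalability possible.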

  2. Wernicke’s Area Revisited: Parallel Streams and Word Processing

    OpenAIRE

    DeWitt, Iain; Rauschecker, Josef P.

    2013-01-01

    Auditory word-form recognition was originally proposed by Wernicke to occur within left superior temporal gyrus (STG), later further specified to be in posterior STG. To account for clinical observations (specifically paraphasia), Wernicke proposed his sensory speech center was also essential for correcting output from frontal speech-motor regions. Recent work, in contrast, has established a role for anterior STG, part of the auditory ventral stream, in the recognition of species-specific voc...

  3. Auditory hallucinations induced by trazodone

    Science.gov (United States)

    Shiotsuki, Ippei; Terao, Takeshi; Ishii, Nobuyoshi; Hatano, Koji

    2014-01-01

    A 26-year-old female outpatient presenting with a depressive state suffered from auditory hallucinations at night. Her auditory hallucinations did not respond to blonanserin or paliperidone, but partially responded to risperidone. In view of the possibility that her auditory hallucinations began after starting trazodone, trazodone was discontinued, leading to a complete resolution of her auditory hallucinations. Furthermore, even after risperidone was decreased and discontinued, her auditory hallucinations did not recur. These findings suggest that trazodone may induce auditory hallucinations in some susceptible patients. PMID:24700048

  4. Tuning in to the voices: a multisite FMRI study of auditory hallucinations.

    Science.gov (United States)

    Ford, Judith M; Roach, Brian J; Jorgensen, Kasper W; Turner, Jessica A; Brown, Gregory G; Notestine, Randy; Bischoff-Grethe, Amanda; Greve, Douglas; Wible, Cynthia; Lauriello, John; Belger, Aysenil; Mueller, Bryon A; Calhoun, Vincent; Preda, Adrian; Keator, David; O'Leary, Daniel S; Lim, Kelvin O; Glover, Gary; Potkin, Steven G; Mathalon, Daniel H

    2009-01-01

    Auditory hallucinations or voices are experienced by 75% of people diagnosed with schizophrenia. We presumed that auditory cortex of schizophrenia patients who experience hallucinations is tonically "tuned" to internal auditory channels, at the cost of processing external sounds, both speech and nonspeech. Accordingly, we predicted that patients who hallucinate would show less auditory cortical activation to external acoustic stimuli than patients who did not. At 9 Functional Imaging Biomedical Informatics Research Network (FBIRN) sites, whole-brain images from 106 patients and 111 healthy comparison subjects were collected while subjects performed an auditory target detection task. Data were processed with the FBIRN processing stream. A region of interest analysis extracted activation values from primary (BA41) and secondary auditory cortex (BA42), auditory association cortex (BA22), and middle temporal gyrus (BA21). Patients were sorted into hallucinators (n = 66) and nonhallucinators (n = 40) based on symptom ratings done during the previous week. Hallucinators had less activation to probe tones in left primary auditory cortex (BA41) than nonhallucinators. This effect was not seen on the right. Although "voices" are the anticipated sensory experience, it appears that even primary auditory cortex is "turned on" and "tuned in" to process internal acoustic information at the cost of processing external sounds. Although this study was not designed to probe cortical competition for auditory resources, we were able to take advantage of the data and find significant effects, perhaps because of the power afforded by such a large sample.

  5. Integration of auditory and visual speech information

    NARCIS (Netherlands)

    Hall, M.; Smeele, P.M.T.; Kuhl, P.K.

    1998-01-01

    The integration of auditory and visual speech is observed when modes specify different places of articulation. Influences of auditory variation on integration were examined using consonant identification, plus quality and similarity ratings. Auditory identification predicted auditory-visual

  6. Understanding Agenda Setting in State Educational Policy: An Application of Kingdon's Multiple Streams Model to the Formation of State Reading Policy

    Science.gov (United States)

    Young, Tamara V.; Shepley, Thomas V.; Song, Mengli

    2010-01-01

    Drawing on interview data from reading policy actors in California, Michigan, and Texas, this study applied Kingdon's (1984, 1995) multiple streams model to explain how the issue of reading became prominent on the agenda of state governments during the latter half of the 1990s. A combination of factors influenced the status of a state's reading…

  7. Geochemical insights to the formation of "sedimentary buffers": Considering the role of tributary-trunk stream interactions on catchment-scale sediment flux and drainage network dynamics

    Science.gov (United States)

    Fryirs, Kirstie; Gore, Damian B.

    2014-08-01

    The concept of disconnectivity (or decoupling) of sediment movement in river systems is an important concept in analyses of sediment flux in catchments. At the catchment scale, various blockages, termed buffers, barriers, and blankets, form along the sediment cascade, interrupting the conveyance of sediments downstream. Long-lived buffers can control aspects of catchment sediment flux for an extended period. The upper Hunter catchment has a highly disconnected sediment cascade. The most highly disconnected subcatchment (Dart Brook) contains a distinct type of buffer, a trapped tributary fill, in its downstream reaches, reducing the effective catchment area of the upper Hunter catchment by ~ 18%. We test the use of elemental analyses provided by X-ray fluorescence (XRF) spectrometry of homogeneous sediment profiles taken from floodplain bank exposures to determine that the geochemical composition of the sediments that make up this trapped-tributary fill system was derived from two distinct source areas (the tributary system and the trunk stream). Over at least the Holocene, sedimentation along the axis of the Hunter River valley (the trunk stream) has formed an impediment to sediment conveyance along the lower tributary catchment, essentially "trapping" the tributary. We present an evolutionary model of how this type of "blockage" has formed and discuss implications of tributary-trunk stream (dis)connectivity in analysis of catchment-scale sediment flux and drainage network dynamics. In this case, a relatively large tributary network is having a "geomorphically insignificant" impact on trunk stream dynamics.

  8. Octave effect in auditory attention

    National Research Council Canada - National Science Library

    Tobias Borra; Huib Versnel; Chantal Kemner; A. John van Opstal; Raymond van Ee

    2013-01-01

    ... tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone...

  9. Automatic cortical representation of auditory pitch changes in Rett syndrome.

    Science.gov (United States)

    Foxe, John J; Burke, Kelly M; Andrade, Gizely N; Djukic, Aleksandra; Frey, Hans-Peter; Molholm, Sophie

    2016-01-01

    Over the typical course of Rett syndrome, initial language and communication abilities deteriorate dramatically between the ages of 1 and 4 years, and a majority of these children go on to lose all oral communication abilities. It becomes extremely difficult for clinicians and caretakers to accurately assess the level of preserved auditory functioning in these children, an issue of obvious clinical import. Non-invasive electrophysiological techniques allow for the interrogation of auditory cortical processing without the need for overt behavioral responses. In particular, the mismatch negativity (MMN) component of the auditory evoked potential (AEP) provides an excellent and robust dependent measure of change detection and auditory sensory memory. Here, we asked whether females with Rett syndrome would produce the MMN to occasional changes in pitch in a regularly occurring stream of auditory tones. Fourteen girls with genetically confirmed Rett syndrome and 22 age-matched neurotypical controls participated (ages 3.9-21.1 years). High-density electrophysiological recordings from 64 scalp electrodes were made while participants passively listened to a regularly occurring stream of 503-Hz auditory tone pips that was occasionally (15 % of presentations) interrupted by a higher-pitched deviant tone of 996 Hz. The MMN was derived by subtracting the AEP to these deviants from the AEP produced to the standard. Despite clearly anomalous morphology and latency of the AEP to simple pure-tone inputs in Rett syndrome, the MMN response was evident in both neurotypicals and Rett patients. However, we found that the pitch-evoked MMN was both delayed and protracted in duration in Rett, pointing to slowing of auditory responsiveness. The presence of the MMN in Rett patients suggests preserved abilities to process pitch changes in auditory sensory memory. This work represents a beginning step in an effort to comprehensively map the extent of auditory cortical functioning in Rett
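The MMN derivation described in the abstract above, subtracting the averaged response to standards from the averaged response to deviants, can be sketched with NumPy. The waveforms here are synthetic single-channel averages invented for illustration, not the study's data:

```python
# Difference-wave computation for the mismatch negativity (MMN):
# MMN = averaged deviant AEP minus averaged standard AEP.
# The two AEPs below are synthetic Gaussian-shaped waveforms.
import numpy as np

fs = 500                             # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.4, 1 / fs)     # epoch from -100 ms to 400 ms

# Synthetic averaged responses: the deviant adds a negative
# deflection around 150 ms, mimicking an MMN.
standard_aep = 2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
deviant_aep = standard_aep - 1.5 * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))

mmn = deviant_aep - standard_aep     # the difference wave
peak_idx = np.argmin(mmn)            # MMN is a negativity: find the minimum
peak_latency_ms = t[peak_idx] * 1000

print(f"MMN peak {mmn[peak_idx]:.2f} (a.u.) at {peak_latency_ms:.0f} ms")
```

In practice the subtraction is done on baseline-corrected epochs averaged over many trials per condition; delayed or protracted MMN responses, as reported in the Rett group, would show up as a later peak latency and a wider negative deflection in this difference wave.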

  10. Visual speech gestures modulate efferent auditory system.

    Science.gov (United States)

    Namasivayam, Aravind Kumar; Wong, Wing Yiu Stephanie; Sharma, Dinaay; van Lieshout, Pascal

    2015-03-01

    Visual and auditory systems interact at both cortical and subcortical levels. Studies suggest a highly context-specific cross-modal modulation of the auditory system by the visual system. The present study builds on this work by sampling data from 17 young healthy adults to test whether visual speech stimuli evoke different responses in the auditory efferent system compared to visual non-speech stimuli. The descending cortical influences on medial olivocochlear (MOC) activity were indirectly assessed by examining the effects of contralateral suppression of transient-evoked otoacoustic emissions (TEOAEs) at 1, 2, 3 and 4 kHz under three conditions: (a) in the absence of any contralateral noise (Baseline), (b) contralateral noise + observing facial speech gestures related to productions of vowels /a/ and /u/ and (c) contralateral noise + observing facial non-speech gestures related to smiling and frowning. The results are based on 7 individuals whose data met strict recording criteria and indicated a significant difference in TEOAE suppression between observing speech gestures relative to the non-speech gestures, but only at the 1 kHz frequency. These results suggest that observing a speech gesture compared to a non-speech gesture may trigger a difference in MOC activity, possibly to enhance peripheral neural encoding. If such findings can be reproduced in future research, sensory perception models and theories positing the downstream convergence of unisensory streams of information in the cortex may need to be revised.
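Contralateral suppression as used in the study above is typically quantified as the drop in TEOAE amplitude from the no-noise baseline to the contralateral-noise condition, per frequency band. A toy sketch of that computation follows; all amplitude values are invented placeholders, not data from the study:

```python
# Contralateral suppression of transient-evoked otoacoustic emissions
# (TEOAEs): baseline amplitude minus amplitude with contralateral noise,
# computed for each frequency band. Values are invented placeholders.
baseline_db = {1000: 12.0, 2000: 10.5, 3000: 9.0, 4000: 8.2}    # no noise
with_noise_db = {1000: 10.1, 2000: 9.8, 3000: 8.6, 4000: 8.0}   # noise on

# Positive suppression means the emission was reduced by the noise,
# consistent with medial olivocochlear (MOC) activity.
suppression_db = {f: baseline_db[f] - with_noise_db[f] for f in baseline_db}

for freq in sorted(suppression_db):
    print(f"{freq} Hz: {suppression_db[freq]:.1f} dB suppression")
```

The study's speech-versus-non-speech contrast then amounts to comparing these per-band suppression values across viewing conditions, with the reported effect confined to the 1 kHz band.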

  11. Auditory and Visual Sensations

    CERN Document Server

    Ando, Yoichi

    2010-01-01

    Professor Yoichi Ando, acoustic architectural designer of the Kirishima International Concert Hall in Japan, presents a comprehensive rational-scientific approach to designing performance spaces. His theory is based on systematic psychoacoustical observations of spatial hearing and listener preferences, whose neuronal correlates are observed in the neurophysiology of the human brain. A correlation-based model of neuronal signal processing in the central auditory system is proposed in which temporal sensations (pitch, timbre, loudness, duration) are represented by an internal autocorrelation representation, and spatial sensations (sound location, size, diffuseness related to envelopment) are represented by an internal interaural crosscorrelation function. Together these two internal central auditory representations account for the basic auditory qualities that are relevant for listening to music and speech in indoor performance spaces. Observed psychological and neurophysiological commonalities between auditor...

  12. Role of hydrous iron oxide formation in attenuation and diel cycling of dissolved trace metals in a stream affected by acid rock drainage

    Science.gov (United States)

    Parker, S.R.; Gammons, C.H.; Jones, Clain A.; Nimick, D.A.

    2007-01-01

    Mining-impacted streams have been shown to undergo diel (24-h) fluctuations in concentrations of major and trace elements. Fisher Creek in south-central Montana, USA receives acid rock drainage (ARD) from natural and mining-related sources. A previous diel field study found substantial changes in dissolved metal concentrations at three sites with differing pH regimes during a 24-h period in August 2002. The current work discusses follow-up field sampling of Fisher Creek as well as field and laboratory experiments that examine in greater detail the underlying processes involved in the observed diel concentration changes. The field experiments employed in-stream chambers that were either transparent or opaque to light, filled with stream water and sediment (cobbles coated with hydrous Fe and Al oxides), and placed in the stream to maintain the same temperature. Three sets of laboratory experiments were performed: (1) equilibration of a Cu(II) and Zn(II) containing solution with Fisher Creek stream sediment at pH 6.9 and different temperatures; (2) titration of Fisher Creek water from pH 3.1 to 7 under four different isothermal conditions; and (3) analysis of the effects of temperature on the interaction of an Fe(II) containing solution with Fisher Creek stream sediment under non-oxidizing conditions. Results of these studies are consistent with a model in which Cu, Fe(II), and to a lesser extent Zn, are adsorbed or co-precipitated with hydrous Fe and Al oxides as the pH of Fisher Creek increases from 5.3 to 7.0. The extent of metal attenuation is strongly temperature-dependent, being more pronounced in warm vs. cold water. Furthermore, the sorption/co-precipitation process is shown to be irreversible; once the Cu, Zn, and Fe(II) are removed from solution in warm water, a decrease in temperature does not release the metals back to the water column.

  13. Incidental auditory category learning.

    Science.gov (United States)

    Gabay, Yafit; Dick, Frederic K; Zevin, Jason D; Holt, Lori L

    2015-08-01

    Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning.

  14. Modelling auditory attention.

    Science.gov (United States)

    Kaya, Emine Merve; Elhilali, Mounya

    2017-02-19

    Sounds in everyday life seldom appear in isolation. Both humans and machines are constantly flooded with a cacophony of sounds that need to be sorted through and scoured for relevant information, a phenomenon referred to as the 'cocktail party problem'. A key component in parsing acoustic scenes is the role of attention, which mediates perception and behaviour by focusing both sensory and cognitive resources on pertinent information in the stimulus space. The current article provides a review of modelling studies of auditory attention. The review highlights how the term attention refers to a multitude of behavioural and cognitive processes that can shape sensory processing. Attention can be modulated by 'bottom-up' sensory-driven factors, as well as 'top-down' task-specific goals, expectations and learned schemas. Essentially, it acts as a selection process or processes that focus both sensory and cognitive resources on the most relevant events in the soundscape, with relevance dictated by the stimulus itself (e.g. a loud explosion) or by a task at hand (e.g. listening to announcements in a busy airport). Recent computational models of auditory attention provide key insights into its role in facilitating perception in cluttered auditory scenes. This article is part of the themed issue 'Auditory and visual scene analysis'.

  15. Auditory Channel Problems.

    Science.gov (United States)

    Mann, Philip H.; Suiter, Patricia A.

    This teacher's guide contains a list of general auditory problem areas where students have the following problems: (a) inability to find or identify source of sound; (b) difficulty in discriminating sounds of words and letters; (c) difficulty with reproducing pitch, rhythm, and melody; (d) difficulty in selecting important from unimportant sounds;…

  16. Pupillometry shows the effort of auditory attention switching.

    Science.gov (United States)

    McCloy, Daniel R; Lau, Bonnie K; Larson, Eric; Pratt, Katherine A I; Lee, Adrian K C

    2017-04-01

    Successful speech communication often requires selective attention to a target stream amidst competing sounds, as well as the ability to switch attention among multiple interlocutors. However, auditory attention switching negatively affects both target detection accuracy and reaction time, suggesting that attention switches carry a cognitive cost. Pupillometry is one method of assessing mental effort or cognitive load. Two experiments were conducted to determine whether the effort associated with attention switches is detectable in the pupillary response. In both experiments, pupil dilation, target detection sensitivity, and reaction time were measured; the task required listeners to either maintain or switch attention between two concurrent speech streams. Secondary manipulations explored whether switch-related effort would increase when auditory streaming was harder. In experiment 1, spatially distinct stimuli were degraded by simulating reverberation (compromising across-time streaming cues), and target-masker talker gender match was also varied. In experiment 2, diotic streams separable by talker voice quality and pitch were degraded by noise vocoding, and the time allotted for mid-trial attention switching was varied. All trial manipulations had some effect on target detection sensitivity and/or reaction time; however, only the attention-switching manipulation affected the pupillary response: greater dilation was observed in trials requiring switching attention between talkers.

  17. Auditory object cognition in dementia

    Science.gov (United States)

    Goll, Johanna C.; Kim, Lois G.; Hailstone, Julia C.; Lehmann, Manja; Buckley, Aisling; Crutch, Sebastian J.; Warren, Jason D.

    2011-01-01

    The cognition of nonverbal sounds in dementia has been relatively little explored. Here we undertook a systematic study of nonverbal sound processing in patient groups with canonical dementia syndromes comprising clinically diagnosed typical amnestic Alzheimer's disease (AD; n = 21), progressive nonfluent aphasia (PNFA; n = 5), logopenic progressive aphasia (LPA; n = 7) and aphasia in association with a progranulin gene mutation (GAA; n = 1), and in healthy age-matched controls (n = 20). Based on a cognitive framework treating complex sounds as ‘auditory objects’, we designed a novel neuropsychological battery to probe auditory object cognition at early perceptual (sub-object), object representational (apperceptive) and semantic levels. All patients had assessments of peripheral hearing and general neuropsychological functions in addition to the experimental auditory battery. While a number of aspects of auditory object analysis were impaired across patient groups and were influenced by general executive (working memory) capacity, certain auditory deficits had some specificity for particular dementia syndromes. Patients with AD had a disproportionate deficit of auditory apperception but preserved timbre processing. Patients with PNFA had salient deficits of timbre and auditory semantic processing, but intact auditory size and apperceptive processing. Patients with LPA had a generalised auditory deficit that was influenced by working memory function. In contrast, the patient with GAA showed substantial preservation of auditory function, but a mild deficit of pitch direction processing and a more severe deficit of auditory apperception. The findings provide evidence for separable stages of auditory object analysis and separable profiles of impaired auditory object cognition in different dementia syndromes. PMID:21689671

  18. Auditory Reserve and the Legacy of Auditory Experience

    OpenAIRE

    Skoe, Erika; Kraus, Nina

    2014-01-01

    Musical training during childhood has been linked to more robust encoding of sound later in life. We take this as evidence for an auditory reserve: a mechanism by which individuals capitalize on earlier life experiences to promote auditory processing. We assert that early auditory experiences guide how the reserve develops and is maintained over the lifetime. Experiences that occur after childhood, or which are limited in nature, are theorized to affect the reserve, although their influence o...

  19. Early hominin auditory capacities.

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G; Thackeray, J Francis; Arsuaga, Juan Luis

    2015-09-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats.

  20. Early hominin auditory capacities

    Science.gov (United States)

    Quam, Rolf; Martínez, Ignacio; Rosa, Manuel; Bonmatí, Alejandro; Lorenzo, Carlos; de Ruiter, Darryl J.; Moggi-Cecchi, Jacopo; Conde Valverde, Mercedes; Jarabo, Pilar; Menter, Colin G.; Thackeray, J. Francis; Arsuaga, Juan Luis

    2015-01-01

    Studies of sensory capacities in past life forms have offered new insights into their adaptations and lifeways. Audition is particularly amenable to study in fossils because it is strongly related to physical properties that can be approached through their skeletal structures. We have studied the anatomy of the outer and middle ear in the early hominin taxa Australopithecus africanus and Paranthropus robustus and estimated their auditory capacities. Compared with chimpanzees, the early hominin taxa are derived toward modern humans in their slightly shorter and wider external auditory canal, smaller tympanic membrane, and lower malleus/incus lever ratio, but they remain primitive in the small size of their stapes footplate. Compared with chimpanzees, both early hominin taxa show a heightened sensitivity to frequencies between 1.5 and 3.5 kHz and an occupied band of maximum sensitivity that is shifted toward slightly higher frequencies. The results have implications for sensory ecology and communication, and suggest that the early hominin auditory pattern may have facilitated an increased emphasis on short-range vocal communication in open habitats. PMID:26601261

  1. Auditory Perceptual Abilities Are Associated with Specific Auditory Experience

    Directory of Open Access Journals (Sweden)

    Yael Zaltz

    2017-11-01

    Full Text Available The extent to which auditory experience can shape general auditory perceptual abilities is still under debate. Some studies show that specific auditory expertise may have a general effect on auditory perceptual abilities, while others show a more limited influence, exhibited only in a relatively narrow range associated with the area of expertise. The current study addresses this issue by examining experience-dependent enhancement in perceptual abilities in the auditory domain. Three experiments were performed. In the first experiment, 12 pop and rock musicians and 15 non-musicians were tested in frequency discrimination (DLF), intensity discrimination, spectrum discrimination (DLS), and time discrimination (DLT). Results showed significant superiority of the musician group only for the DLF and DLT tasks, illuminating enhanced perceptual skills in the key features of pop music, in which minuscule changes in amplitude and spectrum are not critical to performance. The next two experiments attempted to differentiate between generalization and specificity in the influence of auditory experience by comparing subgroups of specialists. First, seven guitar players and eight percussionists were tested in the DLF and DLT tasks in which musicians had been found superior. Results showed superior abilities on the DLF task for guitar players, though no difference between the groups in DLT, demonstrating some dependency of auditory learning on the specific area of expertise. Subsequently, a third experiment was conducted, testing a possible influence of vowel density in native language on auditory perceptual abilities. Ten native speakers of German (a language characterized by a dense vowel system of 14 vowels) and 10 native speakers of Hebrew (characterized by a sparse vowel system of five vowels) were tested in a formant discrimination task, the linguistic equivalent of a DLS task. Results showed that German speakers had superior formant

  2. An auditory display tool for DNA sequence analysis.

    Science.gov (United States)

    Temple, Mark D

    2017-04-24

    DNA Sonification refers to the use of an auditory display to convey the information content of DNA sequence data. Six sonification algorithms are presented that each produce an auditory display. These algorithms are logically designed from the simple through to the more complex. Three of these parse individual nucleotides, nucleotide pairs or codons into musical notes to give rise to 4, 16 or 64 notes, respectively. Codons may also be parsed degenerately into 20 notes with respect to the genetic code. Lastly, nucleotide pairs can be parsed as two separate frames or codons can be parsed as three reading frames, giving rise to multiple streams of audio. The most informative sonification algorithm reads the DNA sequence as codons in three reading frames to produce three concurrent streams of audio in an auditory display. This approach is advantageous since start and stop codons in either frame have a direct effect to start or stop the audio in that frame, leaving the other frames unaffected. Using these methods, DNA sequences such as open reading frames or repetitive DNA sequences can be distinguished from one another. These sonification tools are available through a webpage interface in which an input DNA sequence can be processed in real time to produce an auditory display playable directly within the browser. The potential of this approach as an analytical tool is discussed with reference to auditory displays derived from test sequences including simple nucleotide sequences, repetitive DNA sequences and coding or non-coding genes. This study presents a proof-of-concept that some properties of a DNA sequence can be identified through sonification alone and argues for their inclusion within the toolkit of DNA sequence browsers as an adjunct to existing visual and analytical tools.
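
The three-reading-frame parsing this abstract describes can be sketched in a few lines of Python. The codon-to-note mapping and the start/stop gating details below are illustrative assumptions, not the published tool's actual implementation.

```python
# Sketch of three-reading-frame codon sonification. The codon-to-note
# mapping and the gating logic are illustrative, not the published tool's.

START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

def codon_to_note(codon):
    """Map a codon to a MIDI-style note number (hypothetical mapping)."""
    bases = "ACGT"
    index = sum(bases.index(b) * 4 ** i for i, b in enumerate(codon))
    return 36 + index % 64  # one of 64 possible notes

def sonify_frames(seq):
    """Parse codons in all three reading frames. A start or stop codon in
    a frame toggles that frame's audio stream, leaving the others unaffected."""
    streams = []
    for frame in range(3):
        playing, notes = False, []
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == START:
                playing = True
            elif codon in STOPS:
                playing = False
            notes.append(codon_to_note(codon) if playing else None)  # None = rest
        streams.append(notes)
    return streams

frames = sonify_frames("ATGGCATAAATG")  # frame 0 reads: ATG GCA TAA ATG
```

In frame 0 the stop codon TAA silences the stream until the next ATG, while frames 1 and 2 (which contain no start codon here) stay silent throughout.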

  3. Auditory Discrimination and Auditory Sensory Behaviours in Autism Spectrum Disorders

    Science.gov (United States)

    Jones, Catherine R. G.; Happe, Francesca; Baird, Gillian; Simonoff, Emily; Marsden, Anita J. S.; Tregay, Jenifer; Phillips, Rebecca J.; Goswami, Usha; Thomson, Jennifer M.; Charman, Tony

    2009-01-01

    It has been hypothesised that auditory processing may be enhanced in autism spectrum disorders (ASD). We tested auditory discrimination ability in 72 adolescents with ASD (39 childhood autism; 33 other ASD) and 57 IQ and age-matched controls, assessing their capacity for successful discrimination of the frequency, intensity and duration…

  4. Auditory and non-auditory effects of noise on health

    NARCIS (Netherlands)

    Basner, M.; Babisch, W.; Davis, A.; Brink, M.; Clark, C.; Janssen, S.A.; Stansfeld, S.

    2013-01-01

    Noise is pervasive in everyday life and can cause both auditory and non-auditory health eff ects. Noise-induced hearing loss remains highly prevalent in occupational settings, and is increasingly caused by social noise exposure (eg, through personal music players). Our understanding of molecular

  5. The Central Auditory Processing Kit[TM]. Book 1: Auditory Memory [and] Book 2: Auditory Discrimination, Auditory Closure, and Auditory Synthesis [and] Book 3: Auditory Figure-Ground, Auditory Cohesion, Auditory Binaural Integration, and Compensatory Strategies.

    Science.gov (United States)

    Mokhemar, Mary Ann

    This kit for assessing central auditory processing disorders (CAPD) in children in grades 1 through 8 includes 3 books, 14 full-color cards with picture scenes, and a card depicting a phone key pad, all contained in a sturdy carrying case. The units in each of the three books correspond with auditory skill areas most commonly addressed in…

  6. Happiness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia Pilar; Parmentier, Fabrice B R

    2016-08-01

    Rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards) capture attention and impair behavioural performance in an ongoing visual task. Recent evidence indicates that this effect is increased by sadness in a task involving neutral stimuli. We tested the hypothesis that such an effect may not be limited to negative emotions but may reflect a general depletion of attentional resources, by examining whether a positive emotion (happiness) would increase deviance distraction too. Prior to performing an auditory-visual oddball task, happiness or a neutral mood was induced in participants by means of exposure to music and the recollection of an autobiographical event. Results from the oddball task showed significantly larger deviance distraction following the induction of happiness. Interestingly, the small amount of distraction typically observed on the standard trial following a deviant trial (post-deviance distraction) was not increased by happiness. We speculate that happiness might interfere with the disengagement of attention from the deviant sound back towards the target stimulus (through the depletion of cognitive resources and/or mind wandering) but help subsequent cognitive control to recover from distraction. © 2015 The British Psychological Society.

  7. A Dual-Stream Neuroanatomy of Singing

    Science.gov (United States)

    Loui, Psyche

    2015-01-01

    Singing requires effortless and efficient use of auditory and motor systems that center around the perception and production of the human voice. Although perception and production are usually tightly coupled functions, occasional mismatches between the two systems inform us of dissociable pathways in the brain systems that enable singing. Here I review the literature on perception and production in the auditory modality, and propose a dual-stream neuroanatomical model that subserves singing. I will discuss studies surrounding the neural functions of feedforward, feedback, and efference systems that control vocal monitoring, as well as the white matter pathways that connect frontal and temporal regions that are involved in perception and production. I will also consider disruptions of the perception-production network that are evident in tone-deaf individuals and poor pitch singers. Finally, by comparing expert singers against other musicians and nonmusicians, I will evaluate the possibility that singing training might offer rehabilitation from these disruptions through neuroplasticity of the perception-production network. Taken together, the best available evidence supports a model of dorsal and ventral pathways in auditory-motor integration that enables singing and is shared with language, music, speech, and human interactions in the auditory environment. PMID:26120242

  8. Partial Epilepsy with Auditory Features

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2004-07-01

    Full Text Available The clinical characteristics of 53 sporadic (S) cases of idiopathic partial epilepsy with auditory features (IPEAF) were analyzed and compared to previously reported familial (F) cases of autosomal dominant partial epilepsy with auditory features (ADPEAF) in a study at the University of Bologna, Italy.

  9. The Perception of Auditory Motion

    Science.gov (United States)

    Leung, Johahn

    2016-01-01

    The growing availability of efficient and relatively inexpensive virtual auditory display technology has provided new research platforms to explore the perception of auditory motion. At the same time, deployment of these technologies in command and control as well as in entertainment roles is generating an increasing need to better understand the complex processes underlying auditory motion perception. This is a particularly challenging processing feat because it involves the rapid deconvolution of the relative change in the locations of sound sources produced by rotational and translations of the head in space (self-motion) to enable the perception of actual source motion. The fact that we perceive our auditory world to be stable despite almost continual movement of the head demonstrates the efficiency and effectiveness of this process. This review examines the acoustical basis of auditory motion perception and a wide range of psychophysical, electrophysiological, and cortical imaging studies that have probed the limits and possible mechanisms underlying this perception. PMID:27094029

  10. Peripheral Auditory Mechanisms

    CERN Document Server

    Hall, J; Hubbard, A; Neely, S; Tubis, A

    1986-01-01

    How well can we model experimental observations of the peripheral auditory system? What theoretical predictions can we make that might be tested? It was with these questions in mind that we organized the 1985 Mechanics of Hearing Workshop, to bring together auditory researchers to compare models with experimental observations. The workshop forum was inspired by the very successful 1983 Mechanics of Hearing Workshop in Delft [1]. Boston University was chosen as the site of our meeting because of the Boston area's role as a center for hearing research in this country. We made a special effort at this meeting to attract students from around the world, because without students this field will not progress. Financial support for the workshop was provided in part by grant BNS-8412878 from the National Science Foundation. Modeling is a traditional strategy in science and plays an important role in the scientific method. Models are the bridge between theory and experiment. They test the assumptions made in experim...

  11. Probing the time course of head-motion cues integration during auditory scene analysis.

    Science.gov (United States)

    Kondo, Hirohito M; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  12. Probing the time course of head-motion cues integration during auditory scene analysis

    Directory of Open Access Journals (Sweden)

    Hirohito M. Kondo

    2014-06-01

    Full Text Available The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and report their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  13. The Central Role of Recognition in Auditory Perception: A Neurobiological Model

    Science.gov (United States)

    McLachlan, Neil; Wilson, Sarah

    2010-01-01

    The model presents neurobiologically plausible accounts of sound recognition (including absolute pitch), neural plasticity involved in pitch, loudness and location information integration, and streaming and auditory recall. It is proposed that a cortical mechanism for sound identification modulates the spectrotemporal response fields of inferior…

  14. Infant auditory short-term memory for non-linguistic sounds.

    Science.gov (United States)

    Ross-Sheehy, Shannon; Newman, Rochelle S

    2015-04-01

    This research explores auditory short-term memory (STM) capacity for non-linguistic sounds in 10-month-old infants. Infants were presented with auditory streams composed of repeating sequences of either 2 or 4 unique instruments (e.g., flute, piano, cello; 350 or 700 ms in duration) followed by a 500-ms retention interval. These instrument sequences either stayed the same for every repetition (Constant) or changed by 1 instrument per sequence (Varying). Using the head-turn preference procedure, infant listening durations were recorded for each stream type (2- or 4-instrument sequences composed of 350- or 700-ms notes). Preference for the Varying stream was taken as evidence of auditory STM because detection of the novel instrument required memory for all of the instruments in a given sequence. Results demonstrate that infants listened longer to Varying streams for 2-instrument sequences, but not 4-instrument sequences, composed of 350-ms notes (Experiment 1), although this effect did not hold when note durations were increased to 700 ms (Experiment 2). Experiment 3 replicates and extends results from Experiments 1 and 2 and provides support for a duration account of capacity limits in infant auditory STM. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Restoration of Lowland Streams

    DEFF Research Database (Denmark)

    Osborne, L. L.; Bayley, P. B.; Higler, L. W. G.

    1993-01-01

    Compilation of results from the symposium: Lowland Streams Restoration Workshop, Lund, Sweden, August 1991.

  16. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex.

    Science.gov (United States)

    Fishman, Yonatan I; Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate "auditory objects" with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas.
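
The harmonic structure the abstract relies on, components at integer multiples of F0 weighted by a formant envelope, can be sketched as follows. The formant frequencies are textbook approximations for /a/ and /i/, and the Gaussian envelope is an illustrative assumption, not the stimuli used in the study.

```python
import math

def harmonic_amplitudes(f0, formants, bandwidth=120.0, n_harmonics=20):
    """Amplitudes of the first n harmonics of a vowel-like complex:
    components at integer multiples of f0, weighted by a crude
    Gaussian formant envelope (illustrative, not the study's stimuli)."""
    amps = []
    for k in range(1, n_harmonics + 1):
        freq = k * f0
        # Sum of Gaussian bumps centered on each formant frequency.
        amp = sum(math.exp(-((freq - f) / bandwidth) ** 2) for f in formants)
        amps.append((freq, amp))
    return amps

# Two concurrent vowels differing in F0 (the pitch cue for segregation);
# first two formants are approximate textbook values.
vowel_a = harmonic_amplitudes(100.0, formants=[730.0, 1090.0])
vowel_i = harmonic_amplitudes(112.0, formants=[270.0, 2290.0])
```

Harmonics near a formant carry most of the energy (e.g. the 7th harmonic of the 100 Hz vowel, at 700 Hz, sits close to the 730 Hz formant), which is the spectral-envelope information the A1 rate-place code would need to resolve.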

  17. The Dawning of the Stream of Aquarius in RAVE

    Science.gov (United States)

    Williams, M. E. K.; Steinmetz, M.; Sharma, S.; Bland-Hawthorn, J.; de Jong, R. S.; Seabroke, G. M.; Helmi, A.; Freeman, K. C.; Binney, J.; Minchev, I.; Bienaymé, O.; Campbell, R.; Fulbright, J. P.; Gibson, B. K.; Gilmore, G. F.; Grebel, E. K.; Munari, U.; Navarro, J. F.; Parker, Q. A.; Reid, W.; Siebert, A.; Siviero, A.; Watson, F. G.; Wyse, R. F. G.; Zwitter, T.

    2011-02-01

    We identify a new, nearby (0.5 kpc […]) stream […] in the constellation of Aquarius; we name it the Aquarius Stream. We identify 15 members of the stream lying between 30° […]. The Aquarius stream is thus a specimen of ongoing hierarchical Galaxy formation, rare for being right in the solar suburb.

  18. Auditory hallucinations treated by radio headphones.

    Science.gov (United States)

    Feder, R

    1982-09-01

    A young man with chronic auditory hallucinations was treated according to the principle that increasing external auditory stimulation decreases the likelihood of auditory hallucinations. Listening to a radio through stereo headphones in conditions of low auditory stimulation eliminated the patient's hallucinations.

  19. Auditory short-term memory in the primate auditory cortex.

    Science.gov (United States)

    Scott, Brian H; Mishkin, Mortimer

    2016-06-01

    Sounds are fleeting, and assembling the sequence of inputs at the ear into a coherent percept requires auditory memory across various time scales. Auditory short-term memory comprises at least two components: an active 'working memory' bolstered by rehearsal, and a sensory trace that may be passively retained. Working memory relies on representations recalled from long-term memory, and their rehearsal may require phonological mechanisms unique to humans. The sensory component, passive short-term memory (pSTM), is tractable to study in nonhuman primates, whose brain architecture and behavioral repertoire are comparable to our own. This review discusses recent advances in the behavioral and neurophysiological study of auditory memory with a focus on single-unit recordings from macaque monkeys performing delayed-match-to-sample (DMS) tasks. Monkeys appear to employ pSTM to solve these tasks, as evidenced by the impact of interfering stimuli on memory performance. In several regards, pSTM in monkeys resembles pitch memory in humans, and may engage similar neural mechanisms. Neural correlates of DMS performance have been observed throughout the auditory and prefrontal cortex, defining a network of areas supporting auditory STM with parallels to that supporting visual STM. These correlates include persistent neural firing, or a suppression of firing, during the delay period of the memory task, as well as suppression or (less commonly) enhancement of sensory responses when a sound is repeated as a 'match' stimulus. Auditory STM is supported by a distributed temporo-frontal network in which sensitivity to stimulus history is an intrinsic feature of auditory processing. This article is part of a Special Issue entitled SI: Auditory working memory. Published by Elsevier B.V.

  20. Auditory-olfactory synesthesia coexisting with auditory-visual synesthesia.

    Science.gov (United States)

    Jackson, Thomas E; Sandramouli, Soupramanien

    2012-09-01

    Synesthesia is an unusual condition in which stimulation of one sensory modality causes an experience in another sensory modality or when a sensation in one sensory modality causes another sensation within the same modality. We describe a previously unreported association of auditory-olfactory synesthesia coexisting with auditory-visual synesthesia. Given that many types of synesthesias involve vision, it is important that the clinician provide these patients with the necessary information and support that is available.

  1. Musicians’ Online Performance during Auditory and Visual Statistical Learning Tasks

    Science.gov (United States)

    Mandikal Vasuki, Pragati R.; Sharma, Mridula; Ibrahim, Ronny K.; Arciuli, Joanne

    2017-01-01

    Musicians’ brains are considered to be a functional model of neuroplasticity due to the structural and functional changes associated with long-term musical training. In this study, we examined implicit extraction of statistical regularities from a continuous stream of stimuli—statistical learning (SL). We investigated whether long-term musical training is associated with better extraction of statistical cues in an auditory SL (aSL) task and a visual SL (vSL) task—both using the embedded triplet paradigm. Online measures, characterized by event related potentials (ERPs), were recorded during a familiarization phase while participants were exposed to a continuous stream of individually presented pure tones in the aSL task or individually presented cartoon figures in the vSL task. Unbeknown to participants, the stream was composed of triplets. Musicians showed advantages when compared to non-musicians in the online measure (early N1 and N400 triplet onset effects) during the aSL task. However, there were no differences between musicians and non-musicians for the vSL task. Results from the current study show that musical training is associated with enhancements in extraction of statistical cues only in the auditory domain. PMID:28352223

  2. Auditory Processing Training in Learning Disability

    OpenAIRE

    Nívea Franklin Chaves Martins; Hipólito Virgílio Magalhães Jr

    2006-01-01

    The aim of this case report was to promote reflection on the importance of speech therapy in stimulating a person with a learning disability associated with language and auditory processing disorders. Data analysis compared the auditory ability deficits identified in the first auditory processing test, held on April 30, 2002, with a new auditory processing test done on May 13, 2003, after one year of therapy directed at acoustic stimulation of the disordered auditory abilities, in acco...

  3. Studi Kualitas Video Streaming Dengan Menggunakan Flexipacket Radio

    OpenAIRE

    Fadly, Auliya

    2014-01-01

    Communication systems using video streaming are now often an alternative means of communicating. One software option for video streaming services is Windows Media Encoder, which can be used as the server software, while VLC (VideoLAN Client) can be used as the client software for video streaming. Several video streaming formats are available on internet services, such as FLV, MP4, AVI and others. In this final project, an...

  4. Interactions between "what" and "when" in the auditory system: temporal predictability enhances repetition suppression.

    Science.gov (United States)

    Costa-Faidella, Jordi; Baldeweg, Torsten; Grimm, Sabine; Escera, Carles

    2011-12-14

    Neural activity in the auditory system decreases with repeated stimulation, matching stimulus probability on multiple timescales. This phenomenon, known as stimulus-specific adaptation, is interpreted as a neural mechanism of regularity encoding aiding auditory object formation. However, despite the overwhelming literature covering recordings from single-cell to scalp auditory-evoked potential (AEP), stimulation timing has received little interest. Here we investigated whether timing predictability enhances the experience-dependent modulation of neural activity associated with stimulus probability encoding. We used human electrophysiological recordings in healthy participants who were exposed to passive listening of sound sequences. Pure tones of different frequencies were delivered in successive trains of a variable number of repetitions, enabling the study of sequential repetition effects in the AEP. In the predictable timing condition, tones were delivered with isochronous interstimulus intervals; in the unpredictable timing condition, interstimulus intervals varied randomly. Our results show that unpredictable stimulus timing abolishes the early part of the repetition positivity, an AEP indexing auditory sensory memory trace formation, while leaving the later part (beyond ~200 ms) unaffected. This suggests that timing predictability aids the propagation of repetition effects upstream along the auditory pathway, most likely from association auditory cortex (including the planum temporale) toward primary auditory cortex (Heschl's gyrus) and beyond, as judged by the timing of AEP latencies. This outcome calls for attention to stimulation timing in future experiments regarding sensory memory trace formation in AEP measures and stimulus probability encoding in animal models.
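
The two timing conditions can be sketched as follows; the ISI values are illustrative placeholders, not the study's parameters.

```python
import random

# Sketch of the two stimulation-timing conditions described above:
# tone trains with either isochronous or randomly varying
# inter-stimulus intervals (ISI values are illustrative).
def train_onsets(n_tones, isi=0.5, jitter=None, seed=0):
    """Onset times (s) for a train of n_tones.
    jitter=None      -> predictable condition: isochronous ISIs;
    jitter=(lo, hi)  -> unpredictable condition: ISIs drawn uniformly
                        from [lo, hi]."""
    rng = random.Random(seed)
    onsets, t = [], 0.0
    for _ in range(n_tones):
        onsets.append(t)
        t += isi if jitter is None else rng.uniform(*jitter)
    return onsets

predictable = train_onsets(5)                      # 0.0, 0.5, 1.0, 1.5, 2.0
unpredictable = train_onsets(5, jitter=(0.3, 0.7))
```

Keeping the mean jittered ISI equal to the fixed ISI (here 0.5 s) matches overall stimulation rate across conditions, so only timing predictability differs.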

  5. Evaluate the viability of auditory steady state response testing for pseudohypacusic workers in the South African mining industry

    CSIR Research Space (South Africa)

    De Koker, E

    2003-07-01

    Full Text Available Safety in Mines Research Advisory Committee Project Summary: SIM 02-07-01 Part 1. Project Title: Evaluate the viability of auditory steady state response testing for pseudohypacusic workers in the South African mining industry. (79 pages) Author(s): Elize...

  6. Arcturus stream : A case study

    Science.gov (United States)

    Ramya, P.; Reddy, Bacham Eswar

    Stellar streams are groups of gravitationally unbound stars that share the same kinematic properties and hence form coherent structures in velocity space. Their origin is not clear. The concept of stellar streams, or moving groups, was introduced early on (Eggen 1958); they were thought to be dispersed cluster remnants retaining their original kinematics. Subsequent studies suggested that they are debris of accreted satellite galaxies in the Milky Way and belong to an old stellar population in the solar neighborhood. Kinematic studies reveal that the stream member stars are old and belong to the thick disk of the Galaxy. The satellite accretion scenario is one front-runner proposal for thick disk formation in the Galactic disk. In this study, we have explored one of these streams, known as the Arcturus stream, through high-resolution spectroscopy. Preliminary abundance results for a sample of Arcturus stream stars are obtained and compared with groups of stars belonging to the thick disk and to dwarf spheroidals. Alpha elements, which are known to be produced mainly in massive but short-lived SN II, seem to be enhanced relative to Fe, a dominant product of long-lived SN Ia. This suggests that the Arcturus stream stars are old and were mostly formed in an era when SN II were predominant. The abundance results are very similar to those of the Galactic thick disk, which is a distinct component of the disk, both kinematically and chemically. It seems Arcturus is a subgroup within the thick disk, but to establish whether the group is distinct from the thick disk, we have to determine differential age estimates for samples of thick disk and Arcturus stars at overlapping [Fe/H].

  7. Analyzing the auditory scene: neurophysiologic evidence of a dissociation between detection of regularity and detection of change.

    Science.gov (United States)

    Pannese, Alessia; Herrmann, Christoph S; Sussman, Elyse

    2015-05-01

    Detecting regularity and change in the environment is crucial for survival, as it enables making predictions about the world and informing goal-directed behavior. In the auditory modality, the detection of regularity involves segregating incoming sounds into distinct perceptual objects (stream segregation). The detection of change from this within-stream regularity is associated with the mismatch negativity, a component of auditory event-related brain potentials (ERPs). A central unanswered question is how the detection of regularity and the detection of change are interrelated, and whether attention affects the former, the latter, or both. Here we show that the detection of regularity and the detection of change can be empirically dissociated, and that attention modulates the detection of change without precluding the detection of regularity, and the perceptual organization of the auditory background into distinct streams. By applying frequency spectra analysis on the EEG of subjects engaged in a selective listening task, we found distinct peaks of ERP synchronization, corresponding to the rhythm of the frequency streams, independently of whether the stream was attended or ignored. Our results provide direct neurophysiological evidence of regularity detection in the auditory background, and show that it can occur independently of change detection and in the absence of attention.
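
    The frequency-tagging logic described in this record — looking for spectral peaks in the EEG at the presentation rate of each sound stream — can be sketched as follows. This is a minimal illustration of the idea, not the authors' pipeline; the sampling rate, tolerance, and peak-detection threshold are assumed values.

```python
import numpy as np

def rhythm_peaks(eeg, fs, rates, tol=0.1):
    """For each candidate stream rate, report whether the EEG amplitude
    spectrum shows a clear peak there: the maximum within +/- tol Hz must
    exceed 5x the median of the surrounding +/- 1 Hz neighborhood."""
    n = len(eeg)
    spec = np.abs(np.fft.rfft(eeg * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    found = {}
    for r in rates:
        band = (freqs > r - tol) & (freqs < r + tol)
        hood = (freqs > r - 1.0) & (freqs < r + 1.0)
        found[r] = bool(spec[band].max() > 5 * np.median(spec[hood]))
    return found

# Synthetic check: 40 s of "EEG" containing a 1.25 Hz rhythm plus noise.
fs = 250.0
t = np.arange(int(40 * fs)) / fs
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 1.25 * t) + 0.1 * rng.standard_normal(t.size)
tagged = rhythm_peaks(eeg, fs, [1.25, 3.0])
```

    On the synthetic signal, the tagged 1.25 Hz rhythm is detected while the absent 3.0 Hz rate is not, mirroring how a peak at an ignored stream's rhythm would still be identifiable in the spectrum.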

  8. Attentional demands influence vocal compensations to pitch errors heard in auditory feedback.

    Science.gov (United States)

    Tumber, Anupreet K; Scheerer, Nichole E; Jones, Jeffery A

    2014-01-01

    Auditory feedback is required to maintain fluent speech. At present, it is unclear how attention modulates auditory feedback processing during ongoing speech. In this event-related potential (ERP) study, participants vocalized /a/ while they heard their vocal pitch suddenly shifted downward by half a semitone in both single- and dual-task conditions. During the single-task condition participants passively viewed a visual stream for cues to start and stop vocalizing. In the dual-task condition, participants vocalized while they identified target stimuli in a visual stream of letters. The presentation rate of the visual stimuli was manipulated in the dual-task condition in order to produce a low, intermediate, and high attentional load. Visual target identification accuracy was lowest in the high attentional load condition, indicating that attentional load was successfully manipulated. Results further showed that participants who were exposed to the single-task condition prior to the dual-task condition produced larger vocal compensations during the single-task condition. Thus, when participants' attention was divided, less attention was available for the monitoring of their auditory feedback, resulting in smaller compensatory vocal responses. However, P1-N1-P2 ERP responses were not affected by divided attention, suggesting that attentional load did not act on the auditory processing of pitch-altered feedback, but instead interfered with the integration of auditory and motor information, or with motor control itself.

  9. Sadness increases distraction by auditory deviant stimuli.

    Science.gov (United States)

    Pacheco-Unguetti, Antonia P; Parmentier, Fabrice B R

    2014-02-01

    Research shows that attention is ineluctably captured away from a focal visual task by rare and unexpected changes (deviants) in an otherwise repeated stream of task-irrelevant auditory distractors (standards). The fundamental cognitive mechanisms underlying this effect have been the object of an increasing number of studies but their sensitivity to mood and emotions remains relatively unexplored despite suggestion of greater distractibility in negative emotional contexts. In this study, we examined the effect of sadness, a widespread form of emotional distress and a symptom of many disorders, on distraction by deviant sounds. Participants received either a sadness induction or a neutral mood induction by means of a mixed procedure based on music and autobiographical recall prior to taking part in an auditory-visual oddball task in which they categorized visual digits while ignoring task-irrelevant sounds. The results showed that although all participants exhibited significantly longer response times in the visual categorization task following the presentation of rare and unexpected deviant sounds relative to that of the standard sound, this distraction effect was significantly greater in participants who had received the sadness induction (a twofold increase). The residual distraction on the subsequent trial (postdeviance distraction) was equivalent in both groups, suggesting that sadness interfered with the disengagement of attention from the deviant sound and back toward the target stimulus. We propose that this disengagement impairment reflected the monopolization of cognitive resources by sadness and/or associated ruminations. Our findings suggest that sadness can increase distraction even when distractors are emotionally neutral. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  10. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm.

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W; Thornton, David; Bowers, Andrew L; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  11. [Auditory threshold for white noise].

    Science.gov (United States)

    Carrat, R; Thillier, J L; Durivault, J

    1975-01-01

    The liminal auditory threshold for white noise and for coloured noise was determined from a statistical survey of a group of 21 young people with normal hearing. The normal auditory threshold for white noise with a spectrum covering the whole of the auditory field is −0.57 dB ± 8.78. The normal auditory threshold for bands of filtered white noise (coloured noise with a central frequency corresponding to the pure frequencies usually employed in tonal audiometry) describes a typical curve which, instead of being homothetic to the usual tonal curves, sinks at low frequencies and then rises. The peak of this curve is replaced by a broad plateau ranging from 750 to 6000 Hz and contained in the concavity of the liminal tonal curves. The ear is therefore less sensitive but, at limited acoustic pressure, white noise first impinges with the same discrimination upon the whole of the conversational zone of the auditory field. Determination of the audiometric threshold for white noise constitutes a synthetic method of measuring acuteness of hearing which considerably reduces the amount of manipulation required.

  12. StreamCat

    Data.gov (United States)

    U.S. Environmental Protection Agency — The StreamCat Dataset provides summaries of natural and anthropogenic landscape features for ~2.65 million streams, and their associated catchments, within the...

  13. Prioritized Contact Transport Stream

    Science.gov (United States)

    Hunt, Walter Lee, Jr. (Inventor)

    2015-01-01

    A detection process, contact recognition process, classification process, and identification process are applied to raw sensor data to produce an identified contact record set containing one or more identified contact records. A prioritization process is applied to the identified contact record set to assign a contact priority to each contact record in the identified contact record set. Data are removed from the contact records in the identified contact record set based on the contact priorities assigned to those contact records. A first contact stream is produced from the resulting contact records. The first contact stream is streamed in a contact transport stream. The contact transport stream may include and stream additional contact streams. The contact transport stream may be varied dynamically over time based on parameters such as available bandwidth, contact priority, presence/absence of contacts, system state, and configuration parameters.

  14. Transforming the vestibular system one molecule at a time: the molecular and developmental basis of vertebrate auditory evolution.

    Science.gov (United States)

    Duncan, Jeremy S; Fritzsch, Bernd

    2012-01-01

    We review the molecular basis of auditory development and evolution. We propose that the auditory periphery (basilar papilla, organ of Corti) evolved by transforming a newly created and redundant vestibular (gravistatic) endorgan into a sensory epithelium that could respond to sound instead of gravity. Evolution altered this new epithelium's mechanoreceptive properties through changes of hair cells, positioned the epithelium in a unique position near perilymphatic space to extract sound moving between the round and the oval window, and transformed its otolith covering into a tympanic membrane. Another important step in the evolution of an auditory system was the evolution of a unique set of "auditory neurons" that apparently evolved from vestibular neurons. Evolution of mammalian auditory (spiral ganglion) neurons coincides with GATA3 being a transcription factor found selectively in the auditory afferents. For the auditory information to be processed, the CNS required a dedicated center for auditory processing, the auditory nuclei. It is not known whether the auditory nucleus is ontogenetically related to the vestibular or electroreceptive nuclei, two sensory systems found in aquatic but not in amniotic vertebrates, or is a de novo formation of the rhombic lip in line with other novel hindbrain structures such as the pontine nuclei. Like other novel hindbrain structures, the auditory nuclei express exclusively the bHLH gene Atoh1, and loss of Atoh1 results in loss of most of this nucleus in mice. Only after the basilar papilla/organ of Corti evolved could efferent neurons begin to modulate their activity. These auditory efferents most likely evolved from vestibular efferent neurons already present. The simplest interpretation of the available data suggests that the ear, sensory neurons, auditory nucleus, and efferent neurons have been transformed by altering the developmental genetic modules necessary for their development into a novel direction conducive for sound

  15. Auditory scene analysis in school-aged children with developmental language disorders.

    Science.gov (United States)

    Sussman, E; Steinschneider, M; Lee, W; Lawson, K

    2015-02-01

    Natural sound environments are dynamic, with overlapping acoustic input originating from simultaneously active sources. A key function of the auditory system is to integrate sensory inputs that belong together and segregate those that come from different sources. We hypothesized that this skill is impaired in individuals with phonological processing difficulties. There is considerable disagreement about whether phonological impairments observed in children with developmental language disorders can be attributed to specific linguistic deficits or to more general acoustic processing deficits. However, most tests of general auditory abilities have been conducted with a single set of sounds. We assessed the ability of school-aged children (7-15 years) to parse complex auditory non-speech input, and determined whether the presence of phonological processing impairments was associated with stream perception performance. A key finding was that children with language impairments did not show the same developmental trajectory for stream perception as typically developing children. In addition, children with language impairments required larger frequency separations between sounds to hear distinct streams compared to age-matched peers. Furthermore, phonological processing ability was a significant predictor of stream perception measures, but only in the older age groups. No such association was found in the youngest children. These results indicate that children with language impairments have difficulty parsing speech streams, or identifying individual sound events when there are competing sound sources. We conclude that language group differences may in part reflect fundamental maturational disparities in the analysis of complex auditory scenes. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Auditory Cortex Tracks Both Auditory and Visual Stimulus Dynamics Using Low-Frequency Neuronal Phase Modulation

    Science.gov (United States)

    Luo, Huan; Liu, Zuxiang; Poeppel, David

    2010-01-01

    Integrating information across sensory domains to construct a unified representation of multi-sensory signals is a fundamental characteristic of perception in ecological contexts. One provocative hypothesis deriving from neurophysiology suggests that there exists early and direct cross-modal phase modulation. We provide evidence, based on magnetoencephalography (MEG) recordings from participants viewing audiovisual movies, that low-frequency neuronal information lies at the basis of the synergistic coordination of information across auditory and visual streams. In particular, the phase of the 2–7 Hz delta and theta band responses carries robust (in single trials) and usable information (for parsing the temporal structure) about stimulus dynamics in both sensory modalities concurrently. These experiments are the first to show in humans that a particular cortical mechanism, delta-theta phase modulation across early sensory areas, plays an important “active” role in continuously tracking naturalistic audio-visual streams, carrying dynamic multi-sensory information, and reflecting cross-sensory interaction in real time. PMID:20711473

  17. Devices and Procedures for Auditory Learning.

    Science.gov (United States)

    Ling, Daniel

    1986-01-01

    The article summarizes information on assistive devices (hearing aids, cochlear implants, tactile aids, visual aids) and rehabilitation procedures (auditory training, speechreading, cued speech, and speech production) to aid the auditory learning of the hearing impaired. (DB)

  18. Auditory adaptation improves tactile frequency perception

    NARCIS (Netherlands)

    Crommett, L.E.; Pérez Bellido, A.; Yau, J.M.

    2017-01-01

    Our ability to process temporal frequency information by touch underlies our capacity to perceive and discriminate surface textures. Auditory signals, which also provide extensive temporal frequency information, can systematically alter the perception of vibrations on the hand. How auditory signals

  19. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, R.D.A.; Ishihara, A.; Klop, J.W.

    2010-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called 'productive' if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas

  20. Productivity of Stream Definitions

    NARCIS (Netherlands)

    Endrullis, Jörg; Grabmayer, Clemens; Hendriks, Dimitri; Isihara, Ariya; Klop, Jan

    2007-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continuously in such a way that a uniquely determined stream is obtained as the limit. Whereas productivity is undecidable

  1. Productivity of stream definitions

    NARCIS (Netherlands)

    Endrullis, J.; Grabmayer, C.A.; Hendriks, D.; Isihara, A.; Klop, J.W.

    2008-01-01

    We give an algorithm for deciding productivity of a large and natural class of recursive stream definitions. A stream definition is called ‘productive’ if it can be evaluated continually in such a way that a uniquely determined stream in constructor normal form is obtained as the limit. Whereas

  2. Concept Formation Skills in Long-Term Cochlear Implant Users

    Science.gov (United States)

    Castellanos, Irina; Kronenberger, William G.; Beer, Jessica; Colson, Bethany G.; Henning, Shirley C.; Ditmars, Allison; Pisoni, David B.

    2015-01-01

    This study investigated if a period of auditory sensory deprivation followed by degraded auditory input and related language delays affects visual concept formation skills in long-term prelingually deaf cochlear implant (CI) users. We also examined if concept formation skills are mediated or moderated by other neurocognitive domains (i.e.,…

  3. Attentional modulation of auditory steady-state responses.

    Directory of Open Access Journals (Sweden)

    Yatin Mahajan

    Full Text Available Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.

  4. Attentional modulation of auditory steady-state responses.

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex.

  5. Attentional Modulation of Auditory Steady-State Responses

    Science.gov (United States)

    Mahajan, Yatin; Davis, Chris; Kim, Jeesun

    2014-01-01

    Auditory selective attention enables task-relevant auditory events to be enhanced and irrelevant ones suppressed. In the present study we used a frequency tagging paradigm to investigate the effects of attention on auditory steady state responses (ASSR). The ASSR was elicited by simultaneously presenting two different streams of white noise, amplitude modulated at either 16 and 23.5 Hz or 32.5 and 40 Hz. The two different frequencies were presented to each ear and participants were instructed to selectively attend to one ear or the other (confirmed by behavioral evidence). The results revealed that modulation of ASSR by selective attention depended on the modulation frequencies used and whether the activation was contralateral or ipsilateral. Attention enhanced the ASSR for contralateral activation from either ear for 16 Hz and suppressed the ASSR for ipsilateral activation for 16 Hz and 23.5 Hz. For modulation frequencies of 32.5 or 40 Hz attention did not affect the ASSR. We propose that the pattern of enhancement and inhibition may be due to binaural suppressive effects on ipsilateral stimulation and the dominance of contralateral hemisphere during dichotic listening. In addition to the influence of cortical processing asymmetries, these results may also reflect a bias towards inhibitory ipsilateral and excitatory contralateral activation present at the level of inferior colliculus. That the effect of attention was clearest for the lower modulation frequencies suggests that such effects are likely mediated by cortical brain structures or by those in close proximity to cortex. PMID:25334021
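
    The frequency-tagged dichotic stimulus these records describe — white noise amplitude-modulated at a distinct rate so each stream's ASSR can be separated in the EEG spectrum — can be sketched as below. The exact pairing of modulation rates to ears is an assumption for illustration; the abstract does not specify it.

```python
import numpy as np

def am_noise(duration_s, mod_hz, fs=44100, seed=0):
    """White noise with 100% sinusoidal amplitude modulation at mod_hz."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    envelope = 0.5 * (1.0 + np.sin(2.0 * np.pi * mod_hz * t))  # ranges 0..1
    return envelope * rng.standard_normal(t.size)

# Dichotic stimulus: each ear carries noise tagged by its own AM rates
# (ear/rate assignment here is hypothetical).
fs = 44100
left = am_noise(1.0, 16.0, fs, seed=0) + am_noise(1.0, 23.5, fs, seed=1)
right = am_noise(1.0, 32.5, fs, seed=2) + am_noise(1.0, 40.0, fs, seed=3)
stereo = np.stack([left, right], axis=1)  # (samples, 2) left/right pair
```

    Because each modulation rate appears in only one ear's signal, a spectral peak at, say, 16 Hz in the EEG can be attributed to the response driven by that ear's stream.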

  6. Auditory presentation of experimental data

    Science.gov (United States)

    Lunney, David; Morrison, Robert C.

    1990-08-01

    Our research group has been working for several years on the development of auditory alternatives to visual graphs, primarily in order to give blind science students and scientists access to instrumental measurements. In the course of this work we have tried several modes for auditory presentation of data: synthetic speech, tones of varying pitch, complex waveforms, electronic music, and various non-musical sounds. Our most successful translation of data into sound has been presentation of infrared spectra as musical patterns. We have found that if the stick spectra of two compounds are visibly different, their musical patterns will be audibly different. Other possibilities for auditory presentation of data are also described, among them listening to Fourier transforms of spectra, and encoding data in complex waveforms (including synthetic speech).

  7. Context effects on auditory distraction

    Science.gov (United States)

    Chen, Sufen; Sussman, Elyse S.

    2014-01-01

    The purpose of the study was to test the hypothesis that sound context modulates the magnitude of auditory distraction, indexed by behavioral and electrophysiological measures. Participants were asked to identify tone duration, while irrelevant changes occurred in tone frequency, tone intensity, and harmonic structure. Frequency deviants were randomly intermixed with standards (Uni-Condition), with intensity deviants (Bi-Condition), and with both intensity and complex deviants (Tri-Condition). Only in the Tri-Condition did the auditory distraction effect reflect the magnitude difference among the frequency and intensity deviants. The mixture of the different types of deviants in the Tri-Condition modulated the perceived level of distraction, demonstrating that the sound context can modulate the effect of deviance level on processing irrelevant acoustic changes in the environment. These findings thus indicate that perceptual contrast plays a role in change detection processes that leads to auditory distraction. PMID:23886958

  8. Auditory Hallucinations in Acute Stroke

    Directory of Open Access Journals (Sweden)

    Yair Lampl

    2005-01-01

    Full Text Available Auditory hallucinations are uncommon phenomena which can be directly caused by acute stroke, mostly described after lesions of the brain stem, very rarely reported after cortical strokes. The purpose of this study is to determine the frequency of this phenomenon. In a cross sectional study, 641 stroke patients were followed in the period between 1996–2000. Each patient underwent comprehensive investigation and follow-up. Four patients were found to have post cortical stroke auditory hallucinations. All of them occurred after an ischemic lesion of the right temporal lobe. After no more than four months, all patients were symptom-free and without therapy. The fact the auditory hallucinations may be of cortical origin must be taken into consideration in the treatment of stroke patients. The phenomenon may be completely reversible after a couple of months.

  9. Octave effect in auditory attention.

    Science.gov (United States)

    Borra, Tobias; Versnel, Huib; Kemner, Chantal; van Opstal, A John; van Ee, Raymond

    2013-09-17

    After hearing a tone, the human auditory system becomes more sensitive to similar tones than to other tones. Current auditory models explain this phenomenon by a simple bandpass attention filter. Here, we demonstrate that auditory attention involves multiple pass-bands around octave-related frequencies above and below the cued tone. Intriguingly, this "octave effect" not only occurs for physically presented tones, but even persists for the missing fundamental in complex tones, and for imagined tones. Our results suggest neural interactions combining octave-related frequencies, likely located in nonprimary cortical regions. We speculate that this connectivity scheme evolved from exposure to natural vibrations containing octave-related spectral peaks, e.g., as produced by vocal cords.

  10. Processing temporal modulations in binaural and monaural auditory stimuli by neurons in the inferior colliculus and auditory cortex.

    Science.gov (United States)

    Fitzpatrick, Douglas C; Roberts, Jason M; Kuwada, Shigeyuki; Kim, Duck O; Filipovic, Blagoje

    2009-12-01

    Processing dynamic changes in the stimulus stream is a major task for sensory systems. In the auditory system, an increase in the temporal integration window between the inferior colliculus (IC) and auditory cortex is well known for monaural signals such as amplitude modulation, but a similar increase with binaural signals has not been demonstrated. To examine the limits of binaural temporal processing at these brain levels, we used the binaural beat stimulus, which causes a fluctuating interaural phase difference, while recording from neurons in the unanesthetized rabbit. We found that the cutoff frequency for neural synchronization to the binaural beat frequency (BBF) decreased between the IC and auditory cortex, and that this decrease was associated with an increase in the group delay. These features indicate that there is an increased temporal integration window in the cortex compared to the IC, complementing that seen with monaural signals. Comparable measurements of responses to amplitude modulation showed that the monaural and binaural temporal integration windows at the cortical level were quantitatively as well as qualitatively similar, suggesting that intrinsic membrane properties and afferent synapses to the cortical neurons govern the dynamic processing. The upper limits of synchronization to the BBF and the band-pass tuning characteristics of cortical neurons are a close match to human psychophysics.
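
    The binaural beat stimulus used in this study can be sketched in a few lines: a pure tone to each ear, offset in frequency by the beat frequency, so that the interaural phase difference cycles once per beat period. The carrier and beat-frequency values below are illustrative, not the study's parameters.

```python
import numpy as np

def binaural_beat(carrier_hz, bbf_hz, duration_s, fs=44100):
    """Tone of carrier_hz to the left ear and carrier_hz + bbf_hz to the
    right: the interaural phase difference rotates through a full cycle
    bbf_hz times per second (the binaural beat frequency, BBF)."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2.0 * np.pi * carrier_hz * t)
    right = np.sin(2.0 * np.pi * (carrier_hz + bbf_hz) * t)
    return np.stack([left, right], axis=1)

stim = binaural_beat(500.0, 4.0, 2.0)  # hypothetical 500 Hz carrier, 4 Hz BBF
```

    Neural synchronization to the BBF can then be measured as the locking of spike or field-potential activity to this 4 Hz phase cycle, which is the quantity whose cutoff frequency drops between IC and cortex.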

  11. Bit-rate segmentation mechanism in Dynamic Adaptive Streaming over HTTP (DASH) for video streaming applications

    Directory of Open Access Journals (Sweden)

    Muhammad Audy Bazly

    2015-12-01

    Full Text Available This paper analyzes an Internet-based video streaming service that delivers media at variable bit rates. The proposed scheme uses Dynamic Adaptive Streaming over HTTP (DASH), which runs over the standard Hyper Text Transfer Protocol (HTTP). DASH technology splits a video into several segments to be streamed. In the first stage, the source video is compressed with the H.264 codec to lower its bit rate. The compressed video is then segmented with MP4Box, which generates streaming packets of a specified duration. These packets are described in a Media Presentation Description (MPD) manifest, the format known as MPEG-DASH. The MPEG-DASH streams were played on a platform with the bitdash player integrated with bitcodin. Under this scheme, a video is offered in several bit-rate variants, which gives rise to scalable streaming video services on the client side. The main goal of the mechanism is smooth playback of MPEG-DASH video on the client. The simulation results show that the scalable MPEG-DASH streaming scheme improves display quality on the client side, where video buffering can be kept constant and smooth for the duration of playback.
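
    The client-side scalability this summary describes — choosing among the bit-rate variants as measured throughput changes — reduces to a simple selection rule over the available representations. A minimal sketch; the bit-rate ladder and safety margin below are hypothetical values, not from the paper.

```python
# Hypothetical bit-rate ladder (kbps), one entry per DASH representation,
# sorted from highest to lowest quality.
LADDER = [4500, 2500, 1200, 600, 300]

def pick_representation(measured_kbps, safety=0.8):
    """Pick the highest representation whose bit rate fits within a safety
    margin of the measured throughput; otherwise fall back to the lowest,
    trading picture quality for uninterrupted buffering."""
    budget = measured_kbps * safety
    for rate in LADDER:
        if rate <= budget:
            return rate
    return LADDER[-1]
```

    A client re-runs this choice before fetching each segment, which is what keeps the buffer level steady as network conditions fluctuate.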

  12. Auralization of CFD Vorticity Using an Auditory Illusion

    Science.gov (United States)

    Volpe, C. R.

    2005-12-01

    One way in which scientists and engineers interpret large quantities of data is through a process called visualization, i.e. generating graphical images that capture essential characteristics and highlight interesting relationships. Another approach, which has received far less attention, is to present complex information with sound. This approach, called "auralization" or "sonification", is the auditory analog of visualization. Early work in data auralization frequently involved directly mapping some variable in the data to a sound parameter, such as pitch or volume. Multi-variate data could be auralized by mapping several variables to several sound parameters simultaneously. A clear drawback of this approach is the limited practical range of sound parameters that can be presented to human listeners without exceeding their range of perception or comfort. A software auralization system built upon an existing visualization system is briefly described. This system incorporates an aural presentation synchronously and interactively with an animated scientific visualization, so that alternate auralization techniques can be investigated. One such alternate technique involves auditory illusions: sounds which trick the listener into perceiving something other than what is actually being presented. This software system will be used to present an auditory illusion, known for decades among cognitive psychologists, which produces a sound that seems to ascend or descend endlessly in pitch. The applicability of this illusion for presenting Computational Fluid Dynamics data will be demonstrated. CFD data is frequently visualized with thin stream-lines, but thicker stream-ribbons and stream-tubes can also be used, which rotate to convey fluid vorticity. But a purely graphical presentation can yield drawbacks of its own. Thicker stream-tubes can be self-obscuring, and can obscure other scene elements as well, thus motivating a different approach, such as using sound. Naturally
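
    The endlessly rising or falling pitch this record refers to is the Shepard-tone illusion: octave-spaced partials under a fixed bell-shaped spectral envelope, so each step up sounds locally higher while the overall spectrum never leaves its register. A minimal sketch of how such a tone is synthesized; the envelope width, base frequency, and semitone step size are assumed values, not the author's system.

```python
import numpy as np

def shepard_tone(step, fs=44100, dur=0.25, n_octaves=7, base_hz=55.0):
    """One step of a Shepard scale: octave-spaced partials whose amplitudes
    follow a fixed Gaussian envelope over log-frequency. As `step` advances
    in semitones, partials rise, but the loudest region of the spectrum
    stays put, so the pitch seems to climb without end."""
    t = np.arange(int(dur * fs)) / fs
    tone = np.zeros_like(t)
    center = np.log2(base_hz) + n_octaves / 2  # envelope peak, in log2(Hz)
    for k in range(n_octaves):
        f = base_hz * 2 ** (k + step / 12.0)
        w = np.exp(-0.5 * ((np.log2(f) - center) / 1.5) ** 2)
        tone += w * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))  # normalize each step

# Twelve ascending semitone steps; looping this sounds like an endless riser.
scale = np.concatenate([shepard_tone(s) for s in range(12)])
```

    Mapped to a quantity such as vorticity, the same construction yields a sound that conveys continuous rotation without ever saturating the listener's pitch range.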

  13. Early auditory enrichment with music enhances auditory discrimination learning and alters NR2B protein expression in rat auditory cortex.

    Science.gov (United States)

    Xu, Jinghong; Yu, Liping; Cai, Rui; Zhang, Jiping; Sun, Xinde

    2009-01-03

    Previous studies have shown that the functional development of the auditory system is substantially influenced by the structure of environmental acoustic inputs in early life. In the present study, we investigated the effects of early auditory enrichment with music on rat auditory discrimination learning. We found that early auditory enrichment with music from postnatal day (PND) 14 enhanced learning ability in an auditory signal-detection task and in a sound duration-discrimination task. In parallel, a significant increase was noted in NMDA receptor subunit NR2B protein expression in the auditory cortex. Furthermore, we found that auditory enrichment with music starting from PND 28 or 56 did not influence NR2B expression in the auditory cortex. No difference was found in NR2B expression in the inferior colliculus (IC) between music-exposed and normal rats, regardless of when the auditory enrichment with music was initiated. Our findings suggest that early auditory enrichment with music influences NMDA-mediated neural plasticity, which results in enhanced auditory discrimination learning.

  14. Benthic invertebrate fauna, small streams

    Science.gov (United States)

    J. Bruce Wallace; S.L. Eggert

    2009-01-01

    Small streams (first- through third-order streams) make up >98% of the total number of stream segments and >86% of stream length in many drainage networks. Small streams occur over a wide array of climates, geology, and biomes, which influence temperature, hydrologic regimes, water chemistry, light, substrate, stream permanence, a basin's terrestrial plant...

  15. When a photograph can be heard: vision activates the auditory cortex within 110 ms.

    Science.gov (United States)

    Proverbio, Alice Mado; D'Aniello, Guido Edoardo; Adorni, Roberta; Zani, Alberto

    2011-01-01

    As the makers of silent movies knew well, it is not necessary to provide an actual auditory stimulus to activate the sensation of sounds typically associated with what we are viewing. Thus, you could almost hear the neigh of Rodolfo Valentino's horse, even though the film was mute. Evidence is provided that the mere sight of a photograph associated with a sound can activate the associative auditory cortex. High-density ERPs were recorded in 15 participants while they viewed hundreds of perceptually matched images that were associated (or not) with a given sound. Sound stimuli were discriminated from non-sound stimuli as early as 110 ms. SwLORETA reconstructions showed common activation of ventral stream areas for both types of stimuli and of the associative temporal cortex, at the earliest stage, only for sound stimuli. The primary auditory cortex (BA41) was also activated by sound images after approximately 200 ms.

  16. Penetrating multichannel stimulation and recording electrodes in auditory prosthesis research.

    Science.gov (United States)

    Anderson, David J

    2008-08-01

    Microelectrode arrays offer auditory-systems physiologists many opportunities through a number of electrode technologies. In particular, silicon substrate electrode arrays offer a large design space, including choice of layout plan, a range of surface areas for active sites, a choice of site materials, and high spatial resolution. Further, most designs can double as recording and stimulation electrodes in the same preparation. Scala tympani auditory prosthesis research has been aided by mapping electrodes in the cortex and the inferior colliculus to assess the CNS responses to peripheral stimulation. More recently, silicon stimulation electrodes placed in the auditory nerve, cochlear nucleus and the inferior colliculus have advanced the exploration of alternative stimulation sites for auditory prostheses. Multiplication of results from experimental effort by simultaneously stimulating several locations, or by acquiring several streams of data synchronized to the same stimulation event, is a commonly sought-after advantage. Examples of inherently multichannel functions which are not possible with single electrode sites include (1) current steering resulting in more focused stimulation, (2) improved signal-to-noise ratio (SNR) for recording when noise and/or neural signals appear on more than one site and (3) current source density (CSD) measurements. Still more powerful are methods that exploit closely-spaced recording and stimulation sites to improve detailed interrogation of the surrounding neural domain. Here, we discuss thin-film recording/stimulation arrays on silicon substrates. These electrode arrays have been shown to be valuable because of their precision coupled with reproducibility in an ever-expanding design space. The shape of the electrode substrate can be customized to accommodate use in cortical, deep and peripheral neural structures, while flexible cables, fluid delivery and novel coatings have been added to broaden their application. The use of

  17. Auditory Hallucinations Nomenclature and Classification

    NARCIS (Netherlands)

    Blom, Jan Dirk; Sommer, Iris E. C.

    Introduction: The literature on the possible neurobiologic correlates of auditory hallucinations is expanding rapidly. For an adequate understanding and linking of this emerging knowledge, a clear and uniform nomenclature is a prerequisite. The primary purpose of the present article is to provide an

  18. Auditory Risk of Air Rifles

    Science.gov (United States)

    Lankford, James E.; Meinke, Deanna K.; Flamme, Gregory A.; Finan, Donald S.; Stewart, Michael; Tasko, Stephen; Murphy, William J.

    2016-01-01

    Objective: To characterize the impulse noise exposure and auditory risk for air rifle users, for both youth and adults. Design: Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit and LAeq75 exposure limit suggested by the World Health Organization (1999) for children. Study sample: Impulses were generated by 9 pellet air rifles and 1 BB air rifle. Results: None of the air rifles generated peak levels that exceeded the 140 dB peak limit for adults, but 8 (80%) exceeded the 120 dB peak SPL limit for youth. In general, for both adults and youth there is minimal auditory risk when shooting fewer than 100 unprotected shots with pellet air rifles. Air rifles with suppressors were less hazardous than those without suppressors, and pellet air rifles with higher velocities were generally more hazardous than those with lower velocities. Conclusion: To minimize auditory risk, youth should use air rifles with an integrated suppressor and lower velocity ratings. Air rifle shooters are advised to wear hearing protection whenever engaging in shooting activities in order to gain self-efficacy and model appropriate hearing health behaviors necessary for recreational firearm use. PMID:26840923

  19. Molecular approach of auditory neuropathy.

    Science.gov (United States)

    Silva, Magali Aparecida Orate Menezes da; Piatto, Vânia Belintani; Maniglia, Jose Victor

    2015-01-01

    Mutations in the otoferlin gene are responsible for auditory neuropathy. To investigate the prevalence of otoferlin mutations in patients with and without auditory neuropathy, this original cross-sectional case study evaluated 16 index cases with auditory neuropathy, 13 patients with sensorineural hearing loss, and 20 normal-hearing subjects. DNA was extracted from peripheral blood leukocytes, and the mutation sites in the otoferlin gene were amplified by polymerase chain reaction/restriction fragment length polymorphism. The 16 index cases included nine (56%) females and seven (44%) males. The 13 deaf patients comprised seven (54%) males and six (46%) females. Among the 20 normal-hearing subjects, 13 (65%) were males and seven (35%) were females. Thirteen (81%) index cases had the wild-type genotype (AA) and three (19%) had the heterozygous AG genotype for the IVS8-2A-G (intron 8) mutation. The 5473C-G (exon 44) mutation was found in a heterozygous state (CG) in seven (44%) index cases, and nine (56%) had the wild-type allele (CC). Of these mutants, two (25%) were compound heterozygotes for the mutations found in intron 8 and exon 44. None of the patients with sensorineural hearing loss or the normal-hearing individuals carried these mutations (100%). There are differences at the molecular level in patients with and without auditory neuropathy. Copyright © 2015 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  20. Nigel: A Severe Auditory Dyslexic

    Science.gov (United States)

    Cotterell, Gill

    1976-01-01

    Reported is the case study of a boy with severe auditory dyslexia who received remedial treatment from the age of four and progressed through courses at a technical college and a 3-year apprenticeship course in mechanics by the age of eighteen. (IM)

  1. Auditory Processing Disorder in Children

    Science.gov (United States)


  2. Finding the missing stimulus mismatch negativity (MMN): Emitted MMN to violations of an auditory gestalt

    Science.gov (United States)

    Salisbury, Dean F

    2011-01-01

    Deviations from repetitive auditory stimuli evoke a mismatch negativity (MMN). Counter-intuitively, omissions of repetitive stimuli do not. Violations of patterns reflecting complex rules also evoke MMN. To detect a MMN to missing stimuli, we developed an auditory gestalt task using one stimulus. Groups of 6 pips (50 msec duration, 330 msec stimulus onset asynchrony (SOA), 400 trials) were presented with an inter-trial interval (ITI) of 750 msec while subjects (n=16) watched a silent video. Occasional deviant groups had missing 4th or 6th tones (50 trials each). Missing stimuli evoked a MMN, consistent with a violation of the gestalt grouping rule. Homogeneous stimulus streams appear to differ from strongly patterned streams in the relative weighting of omissions. PMID:22221004

  3. Dual processing streams in chemosensory perception

    Directory of Open Access Journals (Sweden)

    Johannes eFrasnelli

    2012-10-01

    Higher order sensory processing follows a general subdivision into a ventral and a dorsal stream for visual, auditory, and tactile information. Object identification is processed in temporal structures (ventral stream), whereas object localization leads to activation of parietal structures (dorsal stream). To examine whether the chemical senses demonstrate a similar dissociation, we investigated odor identification and odor localization in 16 healthy young subjects using functional MRI. We used two odors (1: eucalyptol; 2: a mixture of phenylethanol and carbon dioxide), which were delivered to only one nostril. During odor identification, subjects had to recognize the odor; during odor localization, they had to detect the stimulated nostril. We used the General Linear Model (GLM) as a classical method as well as Independent Component Analysis (ICA) in order to investigate a possible neuroanatomical dissociation between both tasks. Both methods showed differences between tasks, confirming a dual processing stream in the chemical senses, but revealed complementary results. Specifically, GLM identified the left intraparietal sulcus and the right superior frontal sulcus to be more activated when subjects were localizing the odorants. For the same task, ICA identified a significant cluster in the left parietal lobe (paracentral lobule) but also in the right hippocampus. While GLM did not find significant activations for odor identification, ICA revealed two clusters (in the left central fissure and the left superior frontal gyrus) for this task. These data demonstrate that higher order chemosensory processing shares the general subdivision into a ventral and a dorsal processing stream with other sensory systems and suggest that this is a global principle, independent of sensory channels.

  4. Auditory Cortical Deactivation during Speech Production and following Speech Perception: An EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Directory of Open Access Journals (Sweden)

    David E Jenson

    2015-10-01

    Sensorimotor integration within the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of EEG data to describe anterior sensorimotor (e.g., premotor cortex; PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required ‘active’ discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral ‘auditory’ alpha (α) components in 15 of 29 participants, localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions also temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.

  5. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm

    Science.gov (United States)

    Jenson, David; Harkrider, Ashley W.; Thornton, David; Bowers, Andrew L.; Saltuklaroglu, Tim

    2015-01-01

    Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required “active” discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral “auditory” alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < .05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique. PMID:26500519

  6. StreamThermal: A software package for calculating thermal metrics from stream temperature data

    Science.gov (United States)

    Tsang, Yin-Phan; Infante, Dana M.; Stewart, Jana S.; Wang, Lizhu; Tingly, Ralph; Thornbrugh, Darren; Cooper, Arthur; Wesley, Daniel

    2016-01-01

    Improved quality and better availability of continuous stream temperature data allow natural resource managers, particularly in fisheries, to understand associations between different characteristics of stream thermal regimes and stream fishes. However, there is no convenient tool to efficiently characterize multiple metrics reflecting stream thermal regimes from the increasing amount of data. This article describes a software program packaged as a library in R to facilitate this process. With this freely available package, users can quickly summarize metrics that describe five categories of stream thermal regimes: magnitude, variability, frequency, timing, and rate of change. Installation and usage instructions for the package, the definitions of the calculated thermal metrics, and the output format are described, along with an application showing the utility of multiple metrics. We believe this package can be widely utilized by interested stakeholders and greatly assist more studies in fisheries.
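The five metric categories can be illustrated with a small sketch. StreamThermal itself is an R library; the Python analogue below uses hypothetical metric names and simple illustrative formulas, not the package's actual definitions or API:

```python
import statistics

def summarize_thermal(daily_temps, threshold=20.0):
    """One illustrative metric per StreamThermal category, from (day, temp) pairs.

    The metric names and formulas here are illustrative stand-ins;
    see the package documentation for its real definitions.
    """
    days, temps = zip(*daily_temps)
    max_t = max(temps)
    return {
        "magnitude_mean": statistics.mean(temps),                 # magnitude
        "variability_range": max_t - min(temps),                  # variability
        "frequency_days_above": sum(1 for t in temps if t > threshold),  # frequency
        "timing_day_of_max": days[temps.index(max_t)],            # timing
        "rate_max_daily_change": max(abs(b - a) for a, b in zip(temps, temps[1:])),  # rate of change
    }

metrics = summarize_thermal([(1, 10.0), (2, 14.0), (3, 12.0)], threshold=13.0)
```

Here `metrics["magnitude_mean"]` is 12.0, the maximum occurs on day 2, and the largest day-to-day change is 4.0 degrees.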

  7. Acoustic streaming of a sharp edge.

    Science.gov (United States)

    Ovchinnikov, Mikhail; Zhou, Jianbo; Yalamanchili, Satish

    2014-07-01

    Anomalous acoustic streaming is observed emanating from sharp edges of solid bodies that are vibrating in fluids. The streaming velocities can be orders of magnitude higher than expected from Rayleigh streaming at similar amplitudes of vibration. The acoustic velocity of the fluid relative to a solid body diverges at a sharp edge, giving rise to a localized, time-independent body force acting on the fluid. This force results in the formation of a localized jet. Two-dimensional numerical simulations are performed to predict acoustic streaming for low amplitude vibration using two methods: (1) a steady-state solution utilizing perturbation theory and (2) a direct transient solution of the Navier-Stokes equations. Both analyses agree with each other and correctly predict the streaming of a sharp-edged vibrating blade measured experimentally. The origin of the streaming can be attributed to the centrifugal force of the acoustic fluid flow around a sharp edge. The dependence of this acoustic streaming on frequency and velocity is examined using dimensional analysis. The dependence law is devised and confirmed by numerical simulations.

  8. Comparison of auditory deficits associated with neglect and auditory cortex lesions.

    Science.gov (United States)

    Gutschalk, Alexander; Brandt, Tobias; Bartsch, Andreas; Jansen, Claudia

    2012-04-01

    In contrast to lesions of the visual and somatosensory cortex, lesions of the auditory cortex are not associated with self-evident contralesional deficits. After unilateral lesions of the auditory cortex, contralesional extinction has been observed only when two or more stimuli are presented simultaneously to the left and right. Because auditory extinction is also considered a sign of neglect, clinically separating auditory neglect from deficits caused by lesions of the auditory cortex is challenging. Here, we directly compared a number of tests previously used for either auditory-cortex lesions or neglect in 29 controls and 27 patients suffering from unilateral auditory-cortex lesions, neglect, or both. The results showed that a dichotic-speech test revealed similar amounts of extinction for both auditory-cortex lesions and neglect. Similar results were obtained for words lateralized by interaural time differences. Consistent extinction after auditory-cortex lesions was also observed in a dichotic detection task. Neglect patients showed more general problems with target detection but no consistent extinction in the dichotic detection task. In contrast, auditory lateralization perception was biased toward the right in neglect but showed considerably less disruption by auditory-cortex lesions. Lateralization of auditory-evoked magnetic fields in the auditory cortex was highly correlated with extinction in the dichotic target-detection task. Moreover, activity in the right primary auditory cortex was somewhat reduced in neglect patients. The results confirm that auditory extinction is observed with lesions of the auditory cortex and with auditory neglect. A distinction can nevertheless be made with dichotic target-detection tasks, auditory-lateralization perception, and magnetoencephalography. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Auditory event files: integrating auditory perception and action planning.

    Science.gov (United States)

    Zmigrod, Sharon; Hommel, Bernhard

    2009-02-01

    The features of perceived objects are processed in distinct neural pathways, which call for mechanisms that integrate the distributed information into coherent representations (the binding problem). Recent studies of sequential effects have demonstrated feature binding not only in perception, but also across (visual) perception and action planning. We investigated whether comparable effects can be obtained in and across auditory perception and action. The results from two experiments revealed effects indicative of spontaneous integration of auditory features (pitch and loudness, pitch and location), as well as evidence for audio-manual stimulus-response integration. Even though integration takes place spontaneously, features related to task-relevant stimulus or response dimensions are more likely to be integrated. Moreover, integration seems to follow a temporal overlap principle, with features coded close in time being more likely to be bound together. Taken altogether, the findings are consistent with the idea of episodic event files integrating perception and action plans.

  10. Depersonalization as a mediator in the relationship between self-focused attention and auditory hallucinations.

    Science.gov (United States)

    Perona-Garcelán, Salvador; Carrascoso-López, Francisco; García-Montes, José M; Vallina-Fernández, Oscar; Pérez-Álvarez, Marino; Ductor-Recuerda, María Jesús; Salas-Azcona, Rosario; Cuevas-Yust, Carlos; Gómez-Gómez, María Teresa

    2011-01-01

    The purpose of this work was to study the potentially mediating role of certain dissociative factors, such as depersonalization, between self-focused attention and auditory hallucinations. A total of 59 patients diagnosed with schizophrenic disorder completed a self-focused attention scale (M. F. Scheier & C. S. Carver, 1985), the Cambridge Depersonalization Scale (M. Sierra & G. E. Berrios, 2000), and the hallucination and delusion items on the Positive and Negative Syndrome Scale (S. R. Kay, L. A. Opler, & J. P. Lindenmayer, 1988). The results showed that self-focused attention correlated positively with auditory hallucinations, with delusions, and with depersonalization. It was also demonstrated that depersonalization has a mediating role between self-focused attention and auditory hallucinations but not delusions. In the discussion, the importance of dissociative processes in understanding the formation and maintenance of auditory hallucinations is suggested.

  11. Security Issues in Streaming Server for Mobile Devices Development

    OpenAIRE

    Dan Barbu

    2011-01-01

    The paper presents a solution for streaming audio and video content in IP networks using the RTP and SIP protocols. The second section presents the multimedia formats and compression used for the audio content streamed by SS4MD. Streaming protocols are covered in the third section. The fourth section gives an example of an application that uses all of the above. Conclusions are drawn in the final section.

  12. The process of auditory distraction: disrupted attention and impaired recall in a simulated lecture environment.

    Science.gov (United States)

    Zeamer, Charlotte; Fox Tree, Jean E

    2013-09-01

    Literature on auditory distraction has generally focused on the effects of particular kinds of sounds on attention to target stimuli. In support of extensive previous findings that have demonstrated the special role of language as an auditory distractor, we found that a concurrent speech stream impaired recall of a short lecture, especially for verbatim language. But impaired recall effects were also found with a variety of nonlinguistic noises, suggesting that neither type of noise nor amplitude and duration of noise are adequate predictors of distraction. Rather, distraction occurred when it was difficult for a listener to process sounds and assemble coherent, differentiable streams of input, one task-salient and attended and the other task-irrelevant and inhibited. In 3 experiments, the effects of auditory distractors during a short spoken lecture were tested. Participants recalled details of the lecture and also reported their opinions of the sound quality. Our findings suggest that distractors that are difficult to designate as either task related or environment related (and therefore irrelevant) draw cognitive processing resources away from a target speech stream during a listening task, impairing recall. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  13. Hydrography - Streams and Shorelines

    Data.gov (United States)

    California Department of Resources — The hydrography layer consists of flowing waters (rivers and streams), standing waters (lakes and ponds), and wetlands -- both natural and manmade. Two separate...

  14. User aware video streaming

    Science.gov (United States)

    Kerofsky, Louis; Jagannath, Abhijith; Reznik, Yuriy

    2015-03-01

    We describe the design of a video streaming system that adapts to viewing conditions to reduce the bitrate needed for delivery of video content. A visual model is used to determine the resolution that is sufficient under various viewing conditions. Sensors on a mobile device estimate properties of the viewing conditions, particularly the distance to the viewer. We leverage the framework of existing adaptive bitrate streaming systems such as HLS, Smooth Streaming, or MPEG-DASH. The client rate-selection logic is modified to include a sufficient resolution computed using the visual model and the estimated viewing conditions. Our experiments demonstrate significant bitrate savings compared to conventional streaming methods, which do not exploit viewing conditions.
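The core idea, picking the cheapest rendition whose resolution still saturates the viewer's acuity at the current distance, can be sketched as follows. The acuity constant, function names, and rendition structure are illustrative assumptions, not the authors' actual visual model or API:

```python
import math

def sufficient_vertical_resolution(screen_height_m, viewing_distance_m, cpd=30.0):
    """Smallest vertical pixel count that saturates visual acuity.

    Assumes a viewer resolving `cpd` cycles per degree (~30 for normal
    acuity) and 2 pixels per cycle (Nyquist). Illustrative only.
    """
    # visual angle subtended by the screen height, in degrees
    angle_deg = 2 * math.degrees(math.atan(screen_height_m / (2 * viewing_distance_m)))
    return math.ceil(angle_deg * cpd * 2)

def select_rendition(renditions, needed_height):
    """Cheapest rendition meeting the needed height, else the tallest available."""
    ok = [r for r in renditions if r["height"] >= needed_height]
    return min(ok, key=lambda r: r["bitrate"]) if ok else max(renditions, key=lambda r: r["height"])
```

For example, with a phone-sized screen, moving the viewer farther away lowers the required resolution, letting the client drop to a lower-bitrate rendition without visible loss.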

  15. The case against streaming

    National Research Council Canada - National Science Library

    Natalia Mironova

    2014-01-01

    Cassidy, the safety coordinator at the Airline Pilots Association, says Levine and others advocating for live data streaming are oversimplifying the issue and overlooking the logistical concerns...

  16. The auditory brainstem is a barometer of rapid auditory learning.

    Science.gov (United States)

    Skoe, E; Krizman, J; Spitzer, E; Kraus, N

    2013-07-23

    To capture patterns in the environment, neurons in the auditory brainstem rapidly alter their firing based on the statistical properties of the soundscape. How this neural sensitivity relates to behavior is unclear. We tackled this question by combining neural and behavioral measures of statistical learning, a general-purpose learning mechanism governing many complex behaviors including language acquisition. We recorded complex auditory brainstem responses (cABRs) while human adults implicitly learned to segment patterns embedded in an uninterrupted sound sequence based on their statistical characteristics. The brainstem's sensitivity to statistical structure was measured as the change in the cABR between a patterned and a pseudo-randomized sequence composed from the same set of sounds but differing in their sound-to-sound probabilities. Using this methodology, we provide the first demonstration that behavioral indices of rapid learning relate to individual differences in brainstem physiology. We found that neural sensitivity to statistical structure manifested along a continuum, from adaptation to enhancement, where cABR enhancement (patterned > pseudo-random) tracked with greater rapid statistical learning than adaptation did. Short- and long-term auditory experiences (days to years) are known to promote brainstem plasticity, and here we provide a conceptual advance by showing that the brainstem is also integral to rapid learning occurring over minutes. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.

  17. Conceptual priming for realistic auditory scenes and for auditory words.

    Science.gov (United States)

    Frey, Aline; Aramaki, Mitsuko; Besson, Mireille

    2014-02-01

    Two experiments were conducted using both behavioral and Event-Related brain Potentials methods to examine conceptual priming effects for realistic auditory scenes and for auditory words. Prime and target sounds were presented in four stimulus combinations: Sound-Sound, Word-Sound, Sound-Word and Word-Word. Within each combination, targets were conceptually related to the prime, unrelated or ambiguous. In Experiment 1, participants were asked to judge whether the primes and targets fit together (explicit task) and in Experiment 2 they had to decide whether the target was typical or ambiguous (implicit task). In both experiments and in the four stimulus combinations, reaction times and/or error rates were longer/higher and the N400 component was larger to ambiguous targets than to conceptually related targets, thereby pointing to a common conceptual system for processing auditory scenes and linguistic stimuli in both explicit and implicit tasks. However, fine-grained analyses also revealed some differences between experiments and conditions in scalp topography and duration of the priming effects possibly reflecting differences in the integration of perceptual and cognitive attributes of linguistic and nonlinguistic sounds. These results have clear implications for the building-up of virtual environments that need to convey meaning without words. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. Visual Input Enhances Selective Speech Envelope Tracking in Auditory Cortex at a ‘Cocktail Party’

    Science.gov (United States)

    Golumbic, Elana Zion; Cogan, Gregory B.; Schroeder, Charles E.; Poeppel, David

    2013-01-01

    Our ability to selectively attend to one auditory signal amidst competing input streams, epitomized by the ‘Cocktail Party’ problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared to responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic (MEG) signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker’s face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a ‘Cocktail Party’ setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive. PMID:23345218

  19. Syntactic and auditory spatial processing in the human temporal cortex: an MEG study.

    Science.gov (United States)

    Herrmann, Björn; Maess, Burkhard; Hahne, Anja; Schröger, Erich; Friederici, Angela D

    2011-07-15

    Processing syntax is believed to be a higher cognitive function involving cortical regions outside sensory cortices. In particular, previous studies revealed that early syntactic processes at around 100-200 ms affect brain activations in anterior regions of the superior temporal gyrus (STG), while independent studies showed that pure auditory perceptual processing is related to sensory cortex activations. However, syntax-related modulations of sensory cortices were reported recently, thereby adding diverging findings to the previous studies. The goal of the present magnetoencephalography study was to localize the cortical regions underlying early syntactic processes and those underlying perceptual processes using a within-subject design. Sentences varying the factors syntax (correct vs. incorrect) and auditory space (standard vs. change of interaural time difference (ITD)) were auditorily presented. Both syntactic and auditory spatial anomalies led to very early activations (40-90 ms) in the STG. Around 135 ms after violation onset, differential effects were observed for syntax and auditory space, with syntactically incorrect sentences leading to activations in the anterior STG, whereas ITD changes elicited activations more posterior in the STG. Furthermore, our observations strongly indicate that the anterior and the posterior STG are activated simultaneously when a double violation is encountered. Thus, the present findings provide evidence of a dissociation of speech-related processes in the anterior STG and the processing of auditory spatial information in the posterior STG, compatible with the view of different processing streams in the temporal cortex. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Music training relates to the development of neural mechanisms of selective auditory attention

    Directory of Open Access Journals (Sweden)

    Dana L. Strait

    2015-04-01

    Full Text Available Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children, and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not.

  1. Music training relates to the development of neural mechanisms of selective auditory attention.

    Science.gov (United States)

    Strait, Dana L; Slater, Jessica; O'Connell, Samantha; Kraus, Nina

    2015-04-01

    Selective attention decreases trial-to-trial variability in cortical auditory-evoked activity. This effect increases over the course of maturation, potentially reflecting the gradual development of selective attention and inhibitory control. Work in adults indicates that music training may alter the development of this neural response characteristic, especially over brain regions associated with executive control: in adult musicians, attention decreases variability in auditory-evoked responses recorded over prefrontal cortex to a greater extent than in nonmusicians. We aimed to determine whether this musician-associated effect emerges during childhood, when selective attention and inhibitory control are under development. We compared cortical auditory-evoked variability to attended and ignored speech streams in musicians and nonmusicians across three age groups: preschoolers, school-aged children and young adults. Results reveal that childhood music training is associated with reduced auditory-evoked response variability recorded over prefrontal cortex during selective auditory attention in school-aged child and adult musicians. Preschoolers, on the other hand, demonstrate no impact of selective attention on cortical response variability and no musician distinctions. This finding is consistent with the gradual emergence of attention during this period and may suggest no pre-existing differences in this attention-related cortical metric between children who undergo music training and those who do not. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
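One simple way to operationalize "trial-to-trial variability in auditory-evoked activity" is the across-trial standard deviation averaged over time points. The sketch below shows this metric on synthetic trials; the metric, noise levels, and trial counts are illustrative assumptions, not the measure used in the study above.

```python
import numpy as np

def response_variability(trials):
    """Trial-to-trial variability: across-trial SD averaged over time points.
    trials: array of shape (n_trials, n_samples)."""
    return np.std(trials, axis=0).mean()

rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 250)
template = np.sin(2 * np.pi * 10 * t) * np.exp(-t / 0.1)  # evoked-response shape

# Hypothetical effect of attention: attended-condition responses hew
# closer to the template (lower trial-to-trial noise).
attended = template + rng.normal(0, 0.2, (100, t.size))
ignored = template + rng.normal(0, 0.5, (100, t.size))

v_att, v_ign = response_variability(attended), response_variability(ignored)
```

With such a metric, an attention effect appears as `v_att < v_ign`, and a developmental or training effect as a change in the size of that difference.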

  2. Groundtruthing and potential for predicting acid deposition impacts in headwater streams using bedrock geology, GIS, angling, and stream chemistry.

    Science.gov (United States)

    Kirby, C S; McInerney, B; Turner, M D

    2008-04-15

    Atmospheric acid deposition is of environmental concern worldwide, and the determination of impacts in remote areas can be problematic. Rainwater in central Pennsylvania, USA, has a mean pH of approximately 4.4. Bedrock varies dramatically in its ability to neutralize acidity. A GIS database simplified reconnaissance of non-carbonate bedrock streams in the Valley and Ridge Province and identified potentially chronically impacted headwater streams, which were sampled for chemistry and brook trout. Stream sites (n=26) that originate in and flow through the Tuscarora had a median pH of 5.0, significantly lower than that of streams on other formations. Shawangunk streams (n=6) and non-Tuscarora streams (n=20) had median pH values of 6.0 and 6.3, respectively. Mean alkalinity for non-Tuscarora streams (2.6 mg/L CaCO(3)) was higher than the mean for Tuscarora streams (0.5 mg/L). The lower pH and alkalinity suggest that the buffering capability of the Tuscarora is inferior to that of adjacent sandstones. Dissolved aluminum concentrations were much higher for Tuscarora streams (0.2 mg/L; approximately the lethal limit for brook trout) than for non-Tuscarora streams (0.03 mg/L) or Shawangunk streams (0.02 mg/L). Hook-and-line methods determined the presence/absence of brook trout in 47 stream reaches with suitable habitat. Brook trout were observed in 21 of 22 non-Tuscarora streams, all 6 Shawangunk streams, and only 9 of 28 Tuscarora stream sites. Carefully designed hook-and-line sampling can determine the presence or absence of brook trout and help confirm biological impacts of acid deposition. The Pennsylvania Department of Environmental Protection lists 15% of 334 km of Tuscarora stream lengths as "impaired" due to atmospheric deposition; 65% of the 101 km of Tuscarora stream lengths examined in this study were impaired.

  3. Adaptation in the auditory system: an overview

    Directory of Open Access Journals (Sweden)

    David ePérez-González

    2014-02-01

    Full Text Available The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already experience adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of the stimuli. However, it is at higher levels in the auditory hierarchy where more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics; this distinct kind of processing may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
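The logic of stimulus-specific adaptation can be captured in a toy model: each frequency channel carries its own adaptation variable that builds up with use and slowly recovers, so a frequently presented "standard" evokes ever-smaller responses while a rare "deviant" still drives a full response. The gain and recovery parameters below are arbitrary illustrative choices, not values from any study.

```python
def ssa_responses(sequence, gain=0.3, recovery=0.02):
    """Toy stimulus-specific adaptation: each stimulus identity has its own
    adaptation variable that grows with use and passively recovers."""
    adaptation = {}
    responses = []
    for stim in sequence:
        for s in adaptation:                      # passive recovery, all channels
            adaptation[s] = max(0.0, adaptation[s] - recovery)
        a = adaptation.get(stim, 0.0)
        responses.append(1.0 - a)                 # adapted response amplitude
        adaptation[stim] = min(1.0, a + gain)     # channel-specific fatigue
    return responses

# Oddball sequence: frequent "standard" tone with one rare "deviant".
seq = ["std"] * 10 + ["dev"] + ["std"] * 5
resp = ssa_responses(seq)
```

The late standards evoke strongly adapted responses, while the deviant, hitting an unadapted channel, evokes a full-sized response, which is the signature of change detection described in the abstract.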

  4. Auditory Dysfunction in Patients with Cerebrovascular Disease

    Directory of Open Access Journals (Sweden)

    Sadaharu Tabuchi

    2014-01-01

    Full Text Available Auditory dysfunction is a common clinical symptom that can have profound effects on the quality of life of those affected. Cerebrovascular disease (CVD) is the most prevalent neurological disorder today, but it has generally been considered a rare cause of auditory dysfunction. However, a substantial proportion of patients with stroke might have auditory dysfunction that has been underestimated due to difficulties with evaluation. The present study reviews relationships between auditory dysfunction and types of CVD, including cerebral infarction, intracerebral hemorrhage, subarachnoid hemorrhage, cerebrovascular malformation, moyamoya disease, and superficial siderosis. Recent advances in the etiology, anatomy, and strategies to diagnose and treat these conditions are described. The number of patients with CVD accompanied by auditory dysfunction will increase as the population ages. Cerebrovascular diseases often involve the auditory system, resulting in various types of auditory dysfunction, such as unilateral or bilateral deafness, cortical deafness, pure word deafness, auditory agnosia, and auditory hallucinations, some of which are subtle and can only be detected by precise psychoacoustic and electrophysiological testing. The contribution of CVD to auditory dysfunction needs to be understood because CVD can be fatal if overlooked.

  5. Auditory processing under cross-modal visual load investigated with simultaneous EEG-fMRI.

    Directory of Open Access Journals (Sweden)

    Christina Regenbogen

    Full Text Available Cognitive task demands in one sensory modality (T1) can have beneficial effects on a secondary task (T2) in a different modality, due to reduced top-down control needed to inhibit the secondary task, as well as crossmodal spread of attention. This contrasts with findings that cognitive load compromises processing in a secondary modality. We manipulated cognitive load within one modality (visual) and studied the consequences of cognitive demands on secondary (auditory) processing. Fifteen healthy participants underwent a simultaneous EEG-fMRI experiment. Data from 8 participants were obtained outside the scanner for validation purposes. The primary task (T1) was to respond to a visual working memory (WM) task with four conditions, while the secondary task (T2) consisted of an auditory oddball stream, which participants were asked to ignore. The fMRI results revealed fronto-parietal WM network activations in response to the T1 task manipulation. This was accompanied by significantly higher reaction times and lower hit rates with increasing task difficulty, which confirmed successful manipulation of WM load. Amplitudes of auditory evoked potentials, representing fundamental auditory processing, showed a continuous augmentation that was systematically related to cross-modal cognitive load. With increasing WM load, primary auditory cortices were increasingly deactivated, while psychophysiological interaction results suggested the emergence of auditory cortex connectivity with visual WM regions. These results suggest differential effects of crossmodal attention on fundamental auditory processing. We suggest a continuous allocation of resources to brain regions processing primary tasks when the central executive is challenged under high cognitive load.

  6. Reality of auditory verbal hallucinations.

    Science.gov (United States)

    Raij, Tuukka T; Valkonen-Korhonen, Minna; Holi, Matti; Therman, Sebastian; Lehtonen, Johannes; Hari, Riitta

    2009-11-01

    Distortion of the sense of reality, actualized in delusions and hallucinations, is the key feature of psychosis, but the underlying neuronal correlates remain largely unknown. We studied 11 highly functioning subjects with schizophrenia or schizoaffective disorder while they rated the reality of auditory verbal hallucinations (AVH) during functional magnetic resonance imaging (fMRI). The subjective reality of AVH correlated strongly and specifically with the hallucination-related activation strength of the inferior frontal gyri (IFG), including Broca's language region. Furthermore, how real the experienced hallucination felt depended on the hallucination-related coupling between the IFG, the ventral striatum, the auditory cortex, the right posterior temporal lobe, and the cingulate cortex. Our findings suggest that the subjective reality of AVH is related to motor mechanisms of speech comprehension, with contributions from sensory and salience-detection-related brain regions as well as circuitries related to self-monitoring and the experience of agency.

  7. Perception of Complex Auditory Patterns.

    Science.gov (United States)

    1987-11-02

    [Abstract not recoverable; only report front matter and reference fragments survived extraction.] Final Technical Report, Air Force Office of Scientific Research grant AFOSR-84-0337, September 1, 1984 to August 31, 1987. Hearing and Communication Laboratory, Department of Speech and Hearing Sciences, Indiana University, Bloomington, Indiana 47405. Reference fragments: "... and Piercy, M. (1973). Defects of non-verbal auditory perception in children with developmental aphasia. Nature (London), 241, 468-469." and "Watson, C.S. ..."

  8. Auditory based neuropsychology in neurosurgery.

    Science.gov (United States)

    Wester, Knut

    2008-04-01

    In this article, an account is given of the author's experience with auditory-based neuropsychology in a clinical, neurosurgical setting. The patients included in the studies had traumatic or vascular brain lesions, were undergoing brain surgery to alleviate symptoms of Parkinson's disease, or harboured an intracranial arachnoid cyst affecting the temporal or the frontal lobe. The aims of these investigations were to collect information about the location of cognitive processes in the human brain, or to disclose dyscognition in patients with an arachnoid cyst. All the patients were tested with the dichotic listening (DL) technique. In addition, the cyst patients were given a number of non-auditory, standard neuropsychological tests, such as the Benton Visual Retention Test, Street Gestalt Test, Stroop Test, and Trails Tests A and B. The neuropsychological tests revealed that arachnoid cysts in general cause dyscognition that also includes auditory processes and, more importantly, that these cognitive deficits normalise after surgical removal of the cyst. These observations constitute strong evidence in favour of surgical decompression.

  9. Auditory brainstem implant program development.

    Science.gov (United States)

    Schwartz, Marc S; Wilkinson, Eric P

    2017-08-01

    Auditory brainstem implants (ABIs), which have previously been used to restore auditory perception to deaf patients with neurofibromatosis type 2 (NF2), are now being utilized in other situations, including treatment of congenitally deaf children with cochlear malformations or cochlear nerve deficiencies. Concurrent with this expansion of indications, the number of centers placing and expressing interest in placing ABIs has proliferated. Because ABI placement involves a posterior fossa craniotomy to access the site of implantation on the cochlear nucleus complex of the brainstem and is not without significant risk, we aim to highlight issues important in developing and maintaining successful ABI programs that serve the best interests of patients. Especially with pediatric patients, the ultimate benefits of implantation will be known only after years of growth and development. These benefits have yet to be fully elucidated and remain an area of controversy. The limited number of publications in this area was reviewed. Disease processes, risk/benefit analyses, degrees of evidence, and U.S. Food and Drug Administration approvals differ among the various categories of patients in whom auditory brainstem implantation could be considered. We suggest sets of criteria necessary for the development of successful and sustainable ABI programs, including programs for NF2 patients, postlingually deafened adult non-NF2 patients, and congenitally deaf pediatric patients. Laryngoscope, 127:1909-1915, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  10. Frequency-Dependent Streaming Potentials: A Review

    Directory of Open Access Journals (Sweden)

    L. Jouniaux

    2012-01-01

    which both formation factor and permeability are measured, is predicted to be inversely proportional to the permeability. We review the experimental setups built to perform dynamic measurements, and we present some measurements and calculations of the dynamic streaming potential.
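For orientation, the quasi-static coupling and the permeability scaling of the transition frequency referred to above are commonly written as below. This is the standard Helmholtz-Smoluchowski and Pride-type notation, chosen here for illustration and not necessarily the review's own (\(\varepsilon\): fluid permittivity, \(\zeta\): zeta potential, \(\eta\): viscosity, \(\sigma_f\): fluid conductivity, \(\phi\): porosity, \(\alpha_\infty\): tortuosity, \(\rho_f\): fluid density, \(k\): permeability):

```latex
% Quasi-static (DC) streaming-potential coupling coefficient
% (Helmholtz-Smoluchowski):
C_0 = \frac{\Delta V}{\Delta P} = \frac{\varepsilon\,\zeta}{\eta\,\sigma_f}

% Transition angular frequency separating viscous-dominated from
% inertia-dominated pore flow, inversely proportional to permeability
% (Pride-type dynamic models):
\omega_t = \frac{\eta\,\phi}{\rho_f\,\alpha_\infty\,k}
```

The inverse dependence of \(\omega_t\) on \(k\) is the relation the abstract alludes to: more permeable media leave the viscous-dominated regime at lower frequencies.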

  11. Differential responses of primary auditory cortex in autistic spectrum disorder with auditory hypersensitivity.

    Science.gov (United States)

    Matsuzaki, Junko; Kagitani-Shimono, Kuriko; Goto, Tetsu; Sanefuji, Wakako; Yamamoto, Tomoka; Sakai, Saeko; Uchida, Hiroyuki; Hirata, Masayuki; Mohri, Ikuko; Yorifuji, Shiro; Taniike, Masako

    2012-01-25

    The aim of this study was to investigate the differential responses of the primary auditory cortex to auditory stimuli in autistic spectrum disorder with or without auditory hypersensitivity. Auditory-evoked field values were obtained from 18 boys (nine with and nine without auditory hypersensitivity) with autistic spectrum disorder and 12 age-matched controls. The group with hypersensitivity showed significantly longer M50/M100 peak latencies than the group without hypersensitivity or the controls. M50 dipole moments in the hypersensitivity group were larger than those in the other two groups [corrected]. M50/M100 peak latencies were correlated with the severity of auditory hypersensitivity; furthermore, severe hypersensitivity induced more behavioral problems. This study indicates that auditory hypersensitivity in autistic spectrum disorder is a characteristic response of the primary auditory cortex, possibly resulting from neurological immaturity or functional abnormalities in that region. © 2012 Wolters Kluwer Health | Lippincott Williams & Wilkins.

  12. Attention effects on auditory scene analysis: insights from event-related brain potentials.

    Science.gov (United States)

    Spielmann, Mona Isabel; Schröger, Erich; Kotz, Sonja A; Bendixen, Alexandra

    2014-01-01

    Sounds emitted by different sources arrive at our ears as a mixture that must be disentangled before meaningful information can be retrieved. It is still a matter of debate whether this decomposition happens automatically or requires the listener's attention. These opposite positions partly stem from different methodological approaches to the problem. We propose an integrative approach that combines the logic of previous measurements targeting either auditory stream segregation (interpreting a mixture as coming from two separate sources) or integration (interpreting a mixture as originating from only one source). By means of combined behavioral and event-related potential (ERP) measures, our paradigm has the potential to measure stream segregation and integration at the same time, providing the opportunity to obtain positive evidence of either one. This reduces the reliance on zero findings (i.e., the occurrence of stream integration in a given condition can be demonstrated directly, rather than indirectly based on the absence of empirical evidence for stream segregation, and vice versa). With this two-way approach, we systematically manipulate attention devoted to the auditory stimuli (by varying their task relevance) and to their underlying structure (by delivering perceptual tasks that require segregated or integrated percepts). ERP results based on the mismatch negativity (MMN) show no evidence for a modulation of stream integration by attention, while stream segregation results were less clear due to overlapping attention-related components in the MMN latency range. We suggest future studies combining the proposed two-way approach with some improvements in the ERP measurement of sequential stream segregation.
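The mismatch negativity (MMN) on which these ERP results rest is conventionally computed as a deviant-minus-standard difference wave, with its negative peak sought in roughly the 100-250 ms post-stimulus window. The sketch below demonstrates that computation on synthetic single-trial data; the amplitudes, latencies, and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 500                                  # sampling rate, Hz
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch: -100 to +500 ms

def erp(trials):
    """Average ERP across trials (trials: n_trials x n_samples)."""
    return trials.mean(axis=0)

# Synthetic single-trial EEG: deviants carry an extra negativity near 150 ms.
base = 2.0 * np.sin(2 * np.pi * 5 * t)
mmn_shape = -3.0 * np.exp(-((t - 0.15) ** 2) / (2 * 0.03 ** 2))
standards = base + rng.normal(0, 1.0, (200, t.size))
deviants = base + mmn_shape + rng.normal(0, 1.0, (100, t.size))

# Difference wave and MMN peak within the canonical latency window.
diff_wave = erp(deviants) - erp(standards)
win = (t >= 0.1) & (t <= 0.25)
mmn_peak_amp = diff_wave[win].min()               # most negative point
mmn_peak_lat = t[win][diff_wave[win].argmin()]    # its latency, s
```

Note that any response component shared by standards and deviants (here `base`) cancels in the subtraction, which is what makes the difference wave a usable index of deviance processing.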

  13. Auditory neuropathy/Auditory dyssynchrony - An underdiagnosed condition: A case report with review of literature

    OpenAIRE

    Vinish Agarwal; Saurabh Varshney; Sampan Singh Bist; Sanjiv Bhagat; Sarita Mishra; Vivek Jha

    2012-01-01

    Auditory neuropathy (AN)/auditory dyssynchrony (AD) is an often-missed, and hence underdiagnosed, condition in clinical practice. Auditory neuropathy is a condition in which patients, on audiologic evaluation, are found to have normal outer hair cell function and abnormal neural function at the level of the eighth nerve. These patients, on clinical testing, are found to have normal otoacoustic emissions, whereas auditory brainstem response audiometry reveals the absence of neural ...

  14. The Identification and Remediation of Auditory Problems

    Science.gov (United States)

    Kottler, Sylvia B.

    1972-01-01

    Procedures and sample activities are provided for both identifying and training children with auditory perception problems related to sound localization, sound discrimination, and sound sequencing. (KW)

  15. Human Factors Military Lexicon: Auditory Displays

    National Research Council Canada - National Science Library

    Letowski, Tomasz

    2001-01-01

    .... In addition to definitions specific to auditory displays, speech communication, and audio technology, the lexicon includes several terms unique to military operational environments and human factors...

  16. Developing Auditory Measures of General Speediness

    Directory of Open Access Journals (Sweden)

    Ian T. Zajac

    2011-10-01

    Full Text Available This study examined whether the broad ability general speediness (Gs) could be measured via the auditory modality. Existing and purpose-developed auditory tasks that maintained the cognitive requirements of established visually presented Gs markers were completed by 96 university undergraduates. Exploratory and confirmatory factor analyses showed that the auditory tasks combined with established visual measures to define latent Gs and reaction time factors. These findings provide preliminary evidence that if auditory tasks are developed that maintain the same cognitive requirements as existing visual measures, they are likely to index similar cognitive processes.

  17. Auditory, visual and auditory-visual memory and sequencing performance in typically developing children.

    Science.gov (United States)

    Pillai, Roshni; Yathiraj, Asha

    2017-09-01

    The study evaluated whether four memory skills (memory score, sequencing score, memory span, and sequencing span) differ, and how they relate, when processed through the auditory modality, the visual modality, and the two modalities combined. The four memory skills were evaluated in 30 typically developing children aged 7 and 8 years across three modality conditions (auditory, visual, and auditory-visual). Analogous auditory and visual stimuli were presented to evaluate the three modality conditions across the two age groups. The children obtained significantly higher memory scores through the auditory modality than through the visual modality. Likewise, their memory scores were significantly higher through the auditory-visual modality condition than through the visual modality. However, no effect of modality was observed on the sequencing scores or on the memory and sequencing spans. Good agreement was seen between the different modality conditions studied (auditory, visual, and auditory-visual) for the different memory skill measures (memory scores, sequencing scores, memory span, and sequencing span). Relatively lower agreement, measured using Bland-Altman plots, was noted only between the auditory and visual modalities and between the visual and auditory-visual modality conditions for the memory scores. The study highlights the efficacy of using analogous stimuli to assess the auditory, visual, and combined modalities. The study supports the view that children's performance on different memory skills was better through the auditory modality than through the visual modality. Copyright © 2017 Elsevier B.V. All rights reserved.
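The Bland-Altman agreement analysis mentioned here reduces to two numbers per comparison: the bias (mean of the paired differences) and the 95% limits of agreement (bias ± 1.96 SD of the differences). A minimal sketch, with entirely hypothetical score data, not values from the study:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman statistics for two paired measurements:
    bias (mean difference) and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical memory scores for the same children in two modalities.
auditory = [18, 16, 19, 15, 17, 20, 14, 18]
visual = [15, 14, 17, 13, 15, 18, 13, 16]
bias, (loa_lo, loa_hi) = bland_altman(auditory, visual)
```

In the plot itself, each child's difference is drawn against the pair's mean, with horizontal lines at the bias and the two limits; narrow limits around the bias indicate good agreement between modalities.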

  18. Percent Agriculture Adjacent to Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  19. Comparison of neural responses to cat meows and human vowels in the anterior and posterior auditory field of awake cats.

    Directory of Open Access Journals (Sweden)

    Hanlu Ma

    Full Text Available For humans and animals, the ability to discriminate speech and conspecific vocalizations is an important physiological assignment of the auditory system. To reveal the underlying neural mechanism, many electrophysiological studies have investigated the neural responses of the auditory cortex to conspecific vocalizations in monkeys. The data suggest that vocalizations may be hierarchically processed along an anterior/ventral stream from the primary auditory cortex (A1) to the ventral prefrontal cortex. To date, the organization of vocalization processing has not been well investigated in the auditory cortex of other mammals. In this study, we examined the spike activities of single neurons in two early auditory cortical regions with different anteroposterior locations, the anterior auditory field (AAF) and the posterior auditory field (PAF), in awake cats, as the animals passively listened to forward and backward conspecific calls (meows) and human vowels. We found that the neural response patterns in PAF were more complex and had longer latency than those in AAF. The selectivity for different vocalizations based on the mean firing rate was low in both AAF and PAF, and not significantly different between them; however, more vocalization information was transmitted when the temporal response profiles were considered, and the maximum transmitted information by PAF neurons was higher than that by AAF neurons. Discrimination accuracy based on the activities of an ensemble of PAF neurons was also better than that of AAF neurons. Our results suggest that AAF and PAF are similar with regard to which vocalizations they represent but differ in the way they represent these vocalizations, and there may be a complex processing stream between them.
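The contrast between rate coding and temporal-profile coding drawn in this abstract can be made concrete with a toy simulation: two stimuli that evoke the same mean spike count but different time courses are inseparable by total rate yet easily separated by their binned temporal patterns. The stimulus profiles, trial counts, and nearest-centroid classifier below are illustrative assumptions, not the study's transmitted-information analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_trials(profile, n_trials=50):
    """Poisson spike counts per time bin around a mean temporal profile."""
    return rng.poisson(profile, size=(n_trials, len(profile)))

# Two "vocalizations" with identical total rate but different time courses
# (onset-dominated vs. offset-dominated responses).
prof_a = np.array([8, 4, 2, 1, 1])   # early burst
prof_b = np.array([1, 1, 2, 4, 8])   # late burst; same total count

trials_a, trials_b = simulate_trials(prof_a), simulate_trials(prof_b)

# Rate code: total spike count cannot separate the stimuli (gap near zero).
rate_gap = abs(trials_a.sum(1).mean() - trials_b.sum(1).mean())

# Temporal code: nearest-centroid on binned profiles separates them well
# (centroids fit on the same trials; fine for a sketch, not for inference).
ca, cb = trials_a.mean(0), trials_b.mean(0)
def classify(trial):
    return "a" if np.linalg.norm(trial - ca) < np.linalg.norm(trial - cb) else "b"
acc = np.mean([classify(tr) == "a" for tr in trials_a] +
              [classify(tr) == "b" for tr in trials_b])
```

The same intuition underlies the finding above: PAF's more complex temporal response patterns carry stimulus information that a mean-rate readout discards.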

  20. Auditory Processing Disorders (APD): a distinct clinical disorder or not?

    NARCIS (Netherlands)

    Ellen de Wit

    2015-01-01

    Presentation at the CPLOL Congress, Florence. In this systematic review, six electronic databases were searched for peer-reviewed studies using the key words auditory processing, auditory diseases, central [Mesh], and auditory perceptual. Two reviewers independently assessed relevant studies by inclusion

  1. Computational Auditory Scene Analysis Based Perceptual and Neural Principles

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2004-01-01

    .... This fundamental process of auditory perception is called auditory scene analysis. Of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation...

  2. Stuttering adults' lack of pre-speech auditory modulation normalizes when speaking with delayed auditory feedback.

    Science.gov (United States)

    Daliri, Ayoub; Max, Ludo

    2018-02-01

    Auditory modulation during speech movement planning is limited in adults who stutter (AWS), but the functional relevance of the phenomenon itself remains unknown. We investigated for AWS and adults who do not stutter (AWNS) (a) a potential relationship between pre-speech auditory modulation and auditory feedback contributions to speech motor learning and (b) the effect on pre-speech auditory modulation of real-time versus delayed auditory feedback. Experiment I used a sensorimotor adaptation paradigm to estimate auditory-motor speech learning. Using acoustic speech recordings, we quantified subjects' formant frequency adjustments across trials when continually exposed to formant-shifted auditory feedback. In Experiment II, we used electroencephalography to determine the same subjects' extent of pre-speech auditory modulation (reductions in auditory evoked potential N1 amplitude) when probe tones were delivered prior to speaking versus not speaking. To manipulate subjects' ability to monitor real-time feedback, we included speaking conditions with non-altered auditory feedback (NAF) and delayed auditory feedback (DAF). Experiment I showed that auditory-motor learning was limited for AWS versus AWNS, and the extent of learning was negatively correlated with stuttering frequency. Experiment II yielded several key findings: (a) our prior finding of limited pre-speech auditory modulation in AWS was replicated; (b) DAF caused a decrease in auditory modulation for most AWNS but an increase for most AWS; and (c) for AWS, the amount of auditory modulation when speaking with DAF was positively correlated with stuttering frequency. Lastly, AWNS showed no correlation between pre-speech auditory modulation (Experiment II) and extent of auditory-motor learning (Experiment I) whereas AWS showed a negative correlation between these measures. Thus, findings suggest that AWS show deficits in both pre-speech auditory modulation and auditory-motor learning; however, limited pre
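A standard way to quantify auditory-motor learning in formant-shift paradigms like Experiment I is percent adaptation: the fraction of the imposed feedback perturbation that the speaker's production moves to oppose. The formula below is the conventional definition; the formant values are hypothetical, not data from this study.

```python
def percent_adaptation(baseline, adapted, shift):
    """Fraction of an imposed auditory shift that production opposes.
    baseline, adapted: produced formant (Hz); shift: applied feedback
    perturbation (Hz, signed). Full opposition corresponds to 100%."""
    return 100.0 * (adapted - baseline) / (-shift)

# Hypothetical case: feedback F1 shifted up by +100 Hz; the speaker lowers
# produced F1 from 700 Hz to 670 Hz, opposing 30% of the shift.
extent = percent_adaptation(700.0, 670.0, +100.0)
```

On this measure, the limited learning reported for AWS would appear as systematically smaller `extent` values than those of AWNS.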

  3. Attentional demands modulate sensorimotor learning induced by persistent exposure to changes in auditory feedback.

    Science.gov (United States)

    Scheerer, Nichole E; Tumber, Anupreet K; Jones, Jeffery A

    2016-02-01

    Hearing one's own voice is important for regulating ongoing speech and for mapping speech sounds onto articulator movements. However, it is currently unknown whether attention mediates changes in the relationship between motor commands and their acoustic output, which are necessary as growth and aging inevitably cause changes to the vocal tract. In this study, participants produced vocalizations while they heard their vocal pitch persistently shifted downward one semitone in both single- and dual-task conditions. During the single-task condition, participants vocalized while passively viewing a visual stream. During the dual-task condition, participants vocalized while also monitoring a visual stream for target letters, forcing participants to divide their attention. Participants' vocal pitch was measured across each vocalization, to index the extent to which their ongoing vocalization was modified as a result of the deviant auditory feedback. Smaller compensatory responses were recorded during the dual-task condition, suggesting that divided attention interfered with the use of auditory feedback for the regulation of ongoing vocalizations. Participants' vocal pitch was also measured at the beginning of each vocalization, before auditory feedback was available, to assess the extent to which the deviant auditory feedback was used to modify subsequent speech motor commands. Smaller changes in vocal pitch at vocalization onset were recorded during the dual-task condition, suggesting that divided attention diminished sensorimotor learning. Together, the results of this study suggest that attention is required for the speech motor control system to make optimal use of auditory feedback for the regulation and planning of speech motor commands. Copyright © 2016 the American Physiological Society.
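The one-semitone downward pitch shift used here has a simple arithmetic form: a shift of n equal-tempered semitones multiplies the fundamental frequency by 2^(n/12), and deviations are conventionally reported in cents (100 cents = 1 semitone). A minimal sketch of that conversion, with an arbitrary 220 Hz example voice:

```python
import math

def shift_semitones(f0, semitones):
    """Frequency after shifting by a number of equal-tempered semitones."""
    return f0 * 2.0 ** (semitones / 12.0)

def cents(f_from, f_to):
    """Interval between two frequencies in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f_to / f_from)

# A one-semitone downward shift applied to a 220 Hz voice:
shifted = shift_semitones(220.0, -1)   # about 207.65 Hz
offset = cents(220.0, shifted)         # -100 cents
```

Compensation responses in such paradigms are typically expressed on the same cents scale, so that the magnitude of the vocal response can be compared directly with the 100-cent perturbation.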

  4. Acoustic streaming in microchannels

    DEFF Research Database (Denmark)

    Tribler, Peter Muller

    ......, the acoustic streaming flow, and the forces on suspended microparticles. The work is motivated by the application of particle focusing by acoustic radiation forces in medical, environmental and food sciences. Here acoustic streaming is most often unwanted, because it limits the focusability of particles...... oscillating plates. Furthermore, under general thermodynamic conditions, we derive the time-dependent first- and second-order equations for the conservation of mass, momentum, and energy. The coupling from fluid equations to particle motion is achieved through the expressions for the streaming-induced drag...

  5. Auditory Association Cortex Lesions Impair Auditory Short-Term Memory in Monkeys

    Science.gov (United States)

    Colombo, Michael; D'Amato, Michael R.; Rodman, Hillary R.; Gross, Charles G.

    1990-01-01

    Monkeys that were trained to perform auditory and visual short-term memory tasks (delayed matching-to-sample) received lesions of the auditory association cortex in the superior temporal gyrus. Although visual memory was completely unaffected by the lesions, auditory memory was severely impaired. Despite this impairment, all monkeys could discriminate sounds closer in frequency than those used in the auditory memory task. This result suggests that the superior temporal cortex plays a role in auditory processing and retention similar to the role the inferior temporal cortex plays in visual processing and retention.

  6. Active Auditory Mechanics in Insects

    Science.gov (United States)

    Robert, D.; Göpfert, M. C.

    2003-02-01

    Evidence is presented that hearing in some insects is an active process. Audition in mosquitoes is used for mate-detection and is supported by antennal receivers, whose sound-induced vibrations are transduced by Johnston's organs. Each of these sensory organs contains ca. 15,000 sensory neurons. As shown by mechanical analysis, a physiologically vulnerable mechanism is at work that nonlinearly enhances the sensitivity and frequency selectivity of antennal hearing. This process of amplification correlates with the electrical activity of the auditory mechanoreceptor units in Johnston's organ.

  7. Academic streaming in Europe

    DEFF Research Database (Denmark)

    Falaschi, Alessandro; Mønster, Dan; Doležal, Ivan

    2004-01-01

    The TF-NETCAST task force was active from March 2003 to March 2004, and during this time the members worked on various aspects of streaming media related to the ultimate goal of setting up common services and infrastructures to enable netcasting of high-quality content to the academic community in Europe. We report on a survey of the use of streaming media in the academic community in Europe, an open source content delivery network, and a portal for announcing live streaming events to the global academic community.

  8. Roads Near Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — Roads are a source of auto related pollutants (e.g. gasoline, oil and other engine fluids). When roads are near streams, rain can wash these pollutants directly into...

  9. Future Roads Near Streams

    Data.gov (United States)

    U.S. Environmental Protection Agency — Roads are a source of auto related pollutants (e.g. gasoline, oil and other engine fluids). When roads are near streams, rain can wash these pollutants directly into...

  10. Channelized Streams in Iowa

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — This draft dataset consists of all ditches or channelized pieces of stream that could be identified using three input datasets, namely the 1:24,000 National...

  11. Streaming tearing mode

    Science.gov (United States)

    Shigeta, M.; Sato, T.; Dasgupta, B.

    1985-01-01

    The magnetohydrodynamic stability of the streaming tearing mode is investigated numerically. A bulk plasma flow parallel to the antiparallel magnetic field lines and localized in the neutral sheet excites a streaming tearing mode more strongly than the usual tearing mode, particularly for wavelengths of the order of the neutral sheet width (or smaller), which are stable for the usual tearing mode. Interestingly, examination of the eigenfunctions of the velocity and magnetic field perturbations indicates that the streaming tearing mode carries more of its energy as kinetic energy than as magnetic energy. This suggests that the streaming tearing mode instability can be a more feasible mechanism of plasma acceleration than the usual tearing mode instability.

  12. Streaming media bible

    National Research Council Canada - National Science Library

    Mack, Steve

    2002-01-01

    This book "tells you everything you need to know to produce professional-quality streaming media for the Internet, from an overview of the available systems and tools to high-end techniques for top quality results..."

  13. Scientific stream pollution analysis

    National Research Council Canada - National Science Library

    Nemerow, Nelson Leonard

    1974-01-01

    A comprehensive description of the analysis of water pollution that presents a careful balance of the biological,hydrological, chemical and mathematical concepts involved in the evaluation of stream...

  14. DNR 24K Streams

    Data.gov (United States)

    Minnesota Department of Natural Resources — 1:24,000 scale streams captured from USGS seven and one-half minute quadrangle maps, with perennial vs. intermittent classification, and connectivity through lakes,...

  15. Central auditory processing assessment: a French-speaking battery.

    Science.gov (United States)

    Demanez, L; Dony-Closon, B; Lhonneux-Ledoux, E; Demanez, J P

    2003-01-01

    Based on the American Speech-Language-Hearing Association (ASHA) Consensus Statement on central auditory processing and models for their exploration, a battery of audiological tests (Bilan Auditif Central--BAC) has been designed in French. The BAC consists of four types of psycho-acoustic tests: a speech-in-noise test, a dichotic test, a temporal processing test and a binaural interaction test. We briefly describe the rationale of these tests. The BAC is available in digital format. Descriptive statistics were computed on data obtained from 668 subjects divided into 15 age-groups ranging from 5 to 85 years old or over. No subject had hearing complaints, and all had normal tonal audiometry and normal intelligence. Test scores of the speech-in-noise test, the dichotic test and the binaural interaction test showed a normal distribution. Test scores of the temporal processing test did not follow a normal distribution. Effects of maturation and involution were clearly visible for all tests. The low correlation between scores obtained from the four tests pointed to the need for a battery of several tests to assess central auditory processing. We claim that the reported scores represent standard norms for the normal French-speaking population, and believe that the tests will be useful for evaluation of central auditory processing.

  16. Primary Auditory Cortex Regulates Threat Memory Specificity

    Science.gov (United States)

    Wigestrand, Mattis B.; Schiff, Hillary C.; Fyhn, Marianne; LeDoux, Joseph E.; Sears, Robert M.

    2017-01-01

    Distinguishing threatening from nonthreatening stimuli is essential for survival and stimulus generalization is a hallmark of anxiety disorders. While auditory threat learning produces long-lasting plasticity in primary auditory cortex (Au1), it is not clear whether such Au1 plasticity regulates memory specificity or generalization. We used…

  17. Auditory Processing Disorder and Foreign Language Acquisition

    Science.gov (United States)

    Veselovska, Ganna

    2015-01-01

    This article aims at exploring various strategies for coping with the auditory processing disorder in the light of foreign language acquisition. The techniques relevant to dealing with the auditory processing disorder can be attributed to environmental and compensatory approaches. The environmental one involves actions directed at creating a…

  18. Further Evidence of Auditory Extinction in Aphasia

    Science.gov (United States)

    Marshall, Rebecca Shisler; Basilakos, Alexandra; Love-Myers, Kim

    2013-01-01

    Purpose: Preliminary research ( Shisler, 2005) suggests that auditory extinction in individuals with aphasia (IWA) may be connected to binding and attention. In this study, the authors expanded on previous findings on auditory extinction to determine the source of extinction deficits in IWA. Method: Seventeen IWA (M[subscript age] = 53.19 years)…

  19. Gulf stream separation dynamics

    Science.gov (United States)

    Schoonover, Joseph

    Climate models currently struggle with the more traditional, coarse ( O(100 km) ) representation of the ocean. In these coarse ocean simulations, western boundary currents are notoriously difficult to model accurately. The modeled Gulf Stream is typically seen exhibiting a mean pathway that is north of observations, and is linked to a warm sea-surface temperature bias in the Mid-Atlantic Bight. Although increased resolution ( O(10 km) ) improves the modeled Gulf Stream position, there is no clean recipe for obtaining the proper pathway. The 70 year history of literature on the Gulf Stream separation suggests that we have not reached a resolution on the dynamics that control the current's pathway just south of the Mid-Atlantic Bight. Without a concrete knowledge on the separation dynamics, we cannot provide a clean recipe for accurately modeling the Gulf Stream at increased resolutions. Further, any reliable parameterization that yields a realistic Gulf Stream path must express the proper physics of separation. The goal of this dissertation is to determine what controls the Gulf Stream separation. To do so, we examine the results of a model intercomparison study and a set of numerical regional terraforming experiments. It is argued that the separation is governed by local dynamics that are most sensitive to the steepening of the continental shelf, consistent with the topographic wave arrest hypothesis of Stern (1998). A linear extension of Stern's theory is provided, which illustrates that wave arrest is possible for a continuously stratified fluid.

  20. Streaming Pool: reuse, combine and create reactive streams with pleasure

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    When connecting together heterogeneous and complex systems, it is not easy to exchange data between components. Streams of data are successfully used in industry in order to overcome this problem, especially in the case of "live" data. Streams are a specialization of the Observer design pattern and they provide asynchronous and non-blocking data flow. The ongoing effort of the ReactiveX initiative is one example that demonstrates how demanding this technology is even for big companies. Bridging the discrepancies of different technologies with common interfaces is already done by the Reactive Streams initiative and, in the JVM world, via reactive-streams-jvm interfaces. Streaming Pool is a framework for providing and discovering reactive streams. Through the mechanism of dependency injection provided by the Spring Framework, Streaming Pool provides a so-called Discovery Service. This object can discover and chain streams of data that are technologically agnostic, through the use of Stream IDs. The stream to ...

  1. [Symptoms and diagnosis of auditory processing disorder].

    Science.gov (United States)

    Keilmann, A; Läßig, A K; Nospes, S

    2013-08-01

    The definition of an auditory processing disorder (APD) is based on impairments of auditory functions. APDs are disturbances in processes central to hearing that cannot be explained by comorbidities such as attention deficit or language comprehension disorders. Symptoms include difficulties in differentiation and identification of changes in time, structure, frequency and intensity of sounds; problems with sound localization and lateralization, as well as poor speech comprehension in adverse listening environments and dichotic situations. According to the German definition of APD (as opposed to central auditory processing disorder, CAPD), peripheral hearing loss or cognitive impairment also exclude APD. The diagnostic methodology comprises auditory function tests and the required diagnosis of exclusion. APD is diagnosed if a patient's performance is two standard deviations below the normal mean in at least two areas of auditory processing. The treatment approach for an APD depends on the patient's particular deficits. Training, compensatory strategies and improvement of the listening conditions can all be effective.

  2. Looming biases in monkey auditory cortex.

    Science.gov (United States)

    Maier, Joost X; Ghazanfar, Asif A

    2007-04-11

    Looming signals (signals that indicate the rapid approach of objects) are behaviorally relevant signals for all animals. Accordingly, studies in primates (including humans) reveal attentional biases for detecting and responding to looming versus receding signals in both the auditory and visual domains. We investigated the neural representation of these dynamic signals in the lateral belt auditory cortex of rhesus monkeys. By recording local field potential and multiunit spiking activity while the subjects were presented with auditory looming and receding signals, we show here that auditory cortical activity was biased in magnitude toward looming versus receding stimuli. This directional preference was not attributable to the absolute intensity of the sounds nor can it be attributed to simple adaptation, because white noise stimuli with identical amplitude envelopes did not elicit the same pattern of responses. This asymmetrical representation of looming versus receding sounds in the lateral belt auditory cortex suggests that it is an important node in the neural network correlate of looming perception.

  3. Auditory Midbrain Implant: A Review

    Science.gov (United States)

    Lim, Hubert H.; Lenarz, Minoo; Lenarz, Thomas

    2009-01-01

    The auditory midbrain implant (AMI) is a new hearing prosthesis designed for stimulation of the inferior colliculus in deaf patients who cannot sufficiently benefit from cochlear implants. The authors have begun clinical trials in which five patients have been implanted with a single shank AMI array (20 electrodes). The goal of this review is to summarize the development and research that has led to the translation of the AMI from a concept into the first patients. This study presents the rationale and design concept for the AMI as well as a summary of the animal safety and feasibility studies that were required for clinical approval. The authors also present the initial surgical, psychophysical, and speech results from the first three implanted patients. Overall, the results have been encouraging in terms of the safety and functionality of the implant. All patients obtain improvements in hearing capabilities on a daily basis. However, performance varies dramatically across patients depending on the implant location within the midbrain with the best performer still not able to achieve open set speech perception without lip-reading cues. Stimulation of the auditory midbrain provides a wide range of level, spectral, and temporal cues, all of which are important for speech understanding, but they do not appear to sufficiently fuse together to enable open set speech perception with the currently used stimulation strategies. Finally, several issues and hypotheses for why current patients obtain limited speech perception along with several feasible solutions for improving AMI implementation are presented. PMID:19762428

  4. Tactile feedback improves auditory spatial localization.

    Science.gov (United States)

    Gori, Monica; Vercillo, Tiziana; Sandini, Giulio; Burr, David

    2014-01-01

    Our recent studies suggest that congenitally blind adults have severely impaired thresholds in an auditory spatial bisection task, pointing to the importance of vision in constructing complex auditory spatial maps (Gori et al., 2014). To explore strategies that may improve the auditory spatial sense in visually impaired people, we investigated the impact of tactile feedback on spatial auditory localization in 48 blindfolded sighted subjects. We measured auditory spatial bisection thresholds before and after training, either with tactile feedback, verbal feedback, or no feedback. Audio thresholds were first measured with a spatial bisection task: subjects judged whether the second sound of a three sound sequence was spatially closer to the first or the third sound. The tactile feedback group underwent two audio-tactile feedback sessions of 100 trials, where each auditory trial was followed by the same spatial sequence played on the subject's forearm; auditory spatial bisection thresholds were evaluated after each session. In the verbal feedback condition, the positions of the sounds were verbally reported to the subject after each feedback trial. The no feedback group did the same sequence of trials, with no feedback. Performance improved significantly only after audio-tactile feedback. The results suggest that direct tactile feedback interacts with the auditory spatial localization system, possibly by a process of cross-sensory recalibration. Control tests with the subject rotated suggested that this effect occurs only when the tactile and acoustic sequences are spatially congruent. Our results suggest that the tactile system can be used to recalibrate the auditory sense of space. These results encourage the possibility of designing rehabilitation programs to help blind persons establish a robust auditory sense of space, through training with the tactile modality.
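
    The bisection judgment described above reduces to comparing two spatial distances. A toy sketch of the task rule (the azimuth values in the example are invented for illustration, not taken from the study):

```python
def bisection_response(s1: float, s2: float, s3: float) -> str:
    """Report whether the middle sound of a three-sound sequence lies
    spatially closer to the first or the third source (azimuth in degrees)."""
    return "first" if abs(s2 - s1) < abs(s3 - s2) else "third"

# Middle sound at -5 degrees, flanked by sources at -20 and +20 degrees:
print(bisection_response(-20.0, -5.0, 20.0))  # → first
```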

  5. A Brain System for Auditory Working Memory.

    Science.gov (United States)

    Kumar, Sukhbinder; Joseph, Sabine; Gander, Phillip E; Barascud, Nicolas; Halpern, Andrea R; Griffiths, Timothy D

    2016-04-20

    The brain basis for auditory working memory, the process of actively maintaining sounds in memory over short periods of time, is controversial. Using functional magnetic resonance imaging in human participants, we demonstrate that the maintenance of single tones in memory is associated with activation in auditory cortex. In addition, sustained activation was observed in hippocampus and inferior frontal gyrus. Multivoxel pattern analysis showed that patterns of activity in auditory cortex and left inferior frontal gyrus distinguished the tone that was maintained in memory. Functional connectivity during maintenance was demonstrated between auditory cortex and both the hippocampus and inferior frontal cortex. The data support a system for auditory working memory based on the maintenance of sound-specific representations in auditory cortex by projections from higher-order areas, including the hippocampus and frontal cortex. In this work, we demonstrate a system for maintaining sound in working memory based on activity in auditory cortex, hippocampus, and frontal cortex, and functional connectivity among them. Specifically, our work makes three advances from the previous work. First, we robustly demonstrate hippocampal involvement in all phases of auditory working memory (encoding, maintenance, and retrieval): the role of hippocampus in working memory is controversial. Second, using a pattern classification technique, we show that activity in the auditory cortex and inferior frontal gyrus is specific to the maintained tones in working memory. Third, we show long-range connectivity of auditory cortex to hippocampus and frontal cortex, which may be responsible for keeping such representations active during working memory maintenance. Copyright © 2016 Kumar et al.

  6. My oh my(osin): Insights into how auditory hair cells count, measure, and shape.

    Science.gov (United States)

    Pollock, Lana M; Chou, Shih-Wei; McDermott, Brian M

    2016-01-18

    The mechanisms underlying mechanosensory hair bundle formation in auditory sensory cells are largely mysterious. In this issue, Lelli et al. (2016. J. Cell Biol. http://dx.doi.org/10.1083/jcb.201509017) reveal that a pair of molecular motors, myosin IIIa and myosin IIIb, is involved in the hair bundle's morphology and hearing. © 2016 Pollock et al.

  7. Stream-fed accretion in intermediate polars

    Science.gov (United States)

    Hellier, C.

    2002-01-01

    I review the observational evidence for stream-fed accretion in intermediate polars. Recent work on the discless system V2400 Oph confirms the pole-flipping model of stream-fed accretion, but this applies only to a minority of the flow. The bulk of the flow is in the form of blobs circling the white dwarf, a state which might have been a precursor to disc formation in other IPs. I also discuss work on the systems with anomalously long spin periods, V1025 Cen and EX Hya. There are arguments both for and against stream-fed accretion in V1025 Cen, and further work is necessary before reaching a conclusion about this system.

  8. Streams and their future inhabitants

    DEFF Research Database (Denmark)

    Sand-Jensen, K.; Friberg, N.

    2006-01-01

    In this final chapter we look ahead and address four questions: How do we improve stream management? What are the likely developments in the biological quality of streams? In which areas is knowledge on stream ecology insufficient? What can streams offer children of today and adults of tomorrow?...

  9. Auditory agnosia due to long-term severe hydrocephalus caused by spina bifida - specific auditory pathway versus nonspecific auditory pathway.

    Science.gov (United States)

    Zhang, Qing; Kaga, Kimitaka; Hayashi, Akimasa

    2011-07-01

    A 27-year-old female showed auditory agnosia after long-term severe hydrocephalus due to congenital spina bifida. After years of hydrocephalus, she gradually suffered from hearing loss in her right ear at 19 years of age, followed by her left ear. During the time when she retained some ability to hear, she experienced severe difficulty in distinguishing verbal, environmental, and musical instrumental sounds. However, her auditory brainstem response and distortion product otoacoustic emissions were largely intact in the left ear. Her bilateral auditory cortices were preserved, as shown by neuroimaging, whereas her auditory radiations were severely damaged owing to progressive hydrocephalus. Although she had a complete bilateral hearing loss, she felt great pleasure when exposed to music. After years of self-training to read lips, she regained fluent ability to communicate. Clinical manifestations of this patient indicate that auditory agnosia can occur after long-term hydrocephalus due to spina bifida; the secondary auditory pathway may play a role in both auditory perception and hearing rehabilitation.

  10. Attention, memory, and auditory processing in 10- to 15-year-old children with listening difficulties.

    Science.gov (United States)

    Sharma, Mridula; Dhamani, Imran; Leung, Johahn; Carlile, Simon

    2014-12-01

    The aim of this study was to examine attention, memory, and auditory processing in children with reported listening difficulty in noise (LDN) despite having clinically normal hearing. Twenty-one children with LDN and 15 children with no listening concerns (controls) participated. The clinically normed auditory processing tests included the Frequency/Pitch Pattern Test (FPT; Musiek, 2002), the Dichotic Digits Test (Musiek, 1983), the Listening in Spatialized Noise-Sentences (LiSN-S) test (Dillon, Cameron, Glyde, Wilson, & Tomlin, 2012), gap detection in noise (Baker, Jayewardene, Sayle, & Saeed, 2008), and masking level difference (MLD; Wilson, Moncrieff, Townsend, & Pillion, 2003). Also included were research-based psychoacoustic tasks, such as auditory stream segregation, localization, sinusoidal amplitude modulation (SAM), and fine structure perception. All were also evaluated on attention and memory test batteries. The LDN group was significantly slower switching their auditory attention and had poorer inhibitory control. Additionally, the group mean results showed significantly poorer performance on FPT, MLD, 4-Hz SAM, and memory tests. Close inspection of the individual data revealed that only 5 participants (out of 21) in the LDN group showed significantly poor performance on FPT compared with clinical norms. Further testing revealed the frequency discrimination of these 5 children to be significantly impaired. Thus, the LDN group showed deficits in attention switching and inhibitory control, whereas only a subset of these participants demonstrated an additional frequency resolution deficit.

  11. Neural Representation of Concurrent Vowels in Macaque Primary Auditory Cortex

    Science.gov (United States)

    Micheyl, Christophe; Steinschneider, Mitchell

    2016-01-01

    Successful speech perception in real-world environments requires that the auditory system segregate competing voices that overlap in frequency and time into separate streams. Vowels are major constituents of speech and are comprised of frequencies (harmonics) that are integer multiples of a common fundamental frequency (F0). The pitch and identity of a vowel are determined by its F0 and spectral envelope (formant structure), respectively. When two spectrally overlapping vowels differing in F0 are presented concurrently, they can be readily perceived as two separate “auditory objects” with pitches at their respective F0s. A difference in pitch between two simultaneous vowels provides a powerful cue for their segregation, which in turn, facilitates their individual identification. The neural mechanisms underlying the segregation of concurrent vowels based on pitch differences are poorly understood. Here, we examine neural population responses in macaque primary auditory cortex (A1) to single and double concurrent vowels (/a/ and /i/) that differ in F0 such that they are heard as two separate auditory objects with distinct pitches. We find that neural population responses in A1 can resolve, via a rate-place code, lower harmonics of both single and double concurrent vowels. Furthermore, we show that the formant structures, and hence the identities, of single vowels can be reliably recovered from the neural representation of double concurrent vowels. We conclude that A1 contains sufficient spectral information to enable concurrent vowel segregation and identification by downstream cortical areas. PMID:27294198
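
    As the abstract notes, a vowel's source is a sum of harmonics at integer multiples of its F0, and a double vowel is the superposition of two such complexes. A minimal sketch (the F0 values and flat harmonic amplitudes are illustrative; real vowels additionally impose a formant envelope on the harmonics):

```python
import math

def harmonic_complex(f0, n_harmonics, fs, dur):
    """Equal-amplitude harmonics at integer multiples of f0
    (a crude vowel-like source with no formant envelope applied)."""
    n = int(fs * dur)
    return [sum(math.sin(2 * math.pi * k * f0 * t / fs)
                for k in range(1, n_harmonics + 1))
            for t in range(n)]

fs = 16000
# Two concurrent "vowels" whose F0s differ by four semitones (values illustrative):
a = harmonic_complex(100.0, 10, fs, 0.05)
b = harmonic_complex(126.0, 10, fs, 0.05)
double_vowel = [x + y for x, y in zip(a, b)]
```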

  12. Diminished N1 auditory evoked potentials to oddball stimuli in misophonia patients

    Directory of Open Access Journals (Sweden)

    Arjan Schröder

    2014-04-01

    Misophonia (hatred of sound) is a newly defined psychiatric condition in which ordinary human sounds, such as breathing and eating, trigger impulsive aggression. In the current study we investigated whether a dysfunction in the brain’s early auditory processing system could be present in misophonia. We screened 20 patients with misophonia using the diagnostic criteria for misophonia, along with 14 matched healthy controls without misophonia, and investigated potential deficits in auditory processing of misophonia patients using auditory event-related potentials (ERPs) during an oddball task. Subjects watched a neutral silent movie while being presented with a regular sequence of beep sounds in which oddball tones of 250 Hz and 4000 Hz were randomly embedded in a stream of repeated 1000 Hz standard tones. We examined the P1, N1 and P2 components locked to the onset of the tones. For misophonia patients, the N1 peak evoked by the oddball tones had a smaller mean peak amplitude than in the control group. However, no significant differences were found in the P1 and P2 components evoked by the oddball tones. There were no significant differences between the misophonia patients and their controls in any of the ERP components to the standard tones. The diminished N1 component to oddball tones in misophonia patients suggests an underlying neurobiological deficit in these patients. This reduction might reflect a basic impairment in auditory processing in misophonia.
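
    The stimulus stream in such an oddball paradigm is simple to generate: repeated 1000 Hz standards with 250 Hz and 4000 Hz oddballs randomly embedded. A sketch under assumed parameters (the oddball probability of 0.1 is a placeholder; the abstract does not report the actual rate):

```python
import random

def oddball_sequence(n_tones, p_oddball=0.1, standard=1000,
                     oddballs=(250, 4000), seed=0):
    """Tone frequencies (Hz) for a stream of standard tones with rare,
    randomly embedded oddball tones."""
    rng = random.Random(seed)
    return [rng.choice(oddballs) if rng.random() < p_oddball else standard
            for _ in range(n_tones)]

seq = oddball_sequence(200)
```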

  13. Tracking cortical entrainment in neural activity: Auditory processes in human temporal cortex

    Directory of Open Access Journals (Sweden)

    Andrew Thwaites

    2015-02-01

    A primary objective for cognitive neuroscience is to identify how features of the sensory environment are encoded in neural activity. Current auditory models of loudness perception can be used to make detailed predictions about the neural activity of the cortex as an individual listens to speech. We used two such models (loudness-sones and loudness-phons), varying in their psychophysiological realism, to predict the instantaneous loudness contours produced by 480 isolated words. These two sets of 480 contours were used to search for electrophysiological evidence of loudness processing in whole-brain recordings of electro- and magneto-encephalographic (EMEG) activity, recorded while subjects listened to the words. The technique identified a bilateral sequence of loudness processes, predicted by the more realistic loudness-sones model, that begin in auditory cortex at ~80 ms and subsequently reappear, tracking progressively down the superior temporal sulcus (STS) at lags from 230 to 330 ms. The technique was then extended to search for regions sensitive to the fundamental frequency (F0) of the voiced parts of the speech. It identified a bilateral F0 process in auditory cortex at a lag of ~90 ms, which was not followed by activity in STS. The results suggest that loudness information is being used to guide the analysis of the speech stream as it proceeds beyond auditory cortex down the STS towards the temporal pole.

  14. Atypical brain responses to auditory spatial cues in adults with autism spectrum disorder.

    Science.gov (United States)

    Lodhia, Veema; Hautus, Michael J; Johnson, Blake W; Brock, Jon

    2017-09-09

    The auditory processing atypicalities experienced by many individuals with autism spectrum disorder might be understood in terms of difficulties parsing the sound energy arriving at the ears into discrete auditory 'objects'. Here, we asked whether autistic adults are able to make use of two important spatial cues to auditory object formation - the relative timing and amplitude of sound energy at the left and right ears. Using electroencephalography, we measured the brain responses of 15 autistic adults and 15 age- and verbal-IQ-matched control participants as they listened to dichotic pitch stimuli - white noise stimuli in which interaural timing or amplitude differences applied to a narrow frequency band of noise typically lead to the perception of a pitch sound that is spatially segregated from the noise. Responses were contrasted with those to stimuli in which timing and amplitude cues were removed. Consistent with our previous studies, autistic adults failed to show a significant object-related negativity (ORN) for timing-based pitch, although their ORN was not significantly smaller than that of the control group. Autistic participants did show an ORN to amplitude cues, indicating that they do not experience a general impairment in auditory object formation. However, their P400 response - thought to indicate the later attention-dependent aspects of auditory object formation - was missing. These findings provide further evidence of atypical auditory object processing in autism with potential implications for understanding the perceptual and communication difficulties associated with the condition. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Perturbing Streaming in Dictyostelium discoideum Aggregation

    Science.gov (United States)

    Rericha, Erin; Garcia, Gene; Parent, Carole; Losert, Wolfgang

    2009-03-01

    The ability of cells to move towards environmental cues is a critical process, allowing the destruction of intruders by the immune system, the formation of the vascular system, and the wholesale remodeling of tissues during embryo development. We examine the initial transition from single-cell to group migration in the social amoeba Dictyostelium discoideum. Upon starvation, D. discoideum cells enter into a developmental program that triggers solitary cells to aggregate into a multicellular structure. The aggregation is mediated by the small molecule cyclic AMP, which cells sense, synthesize, and secrete, and towards which they migrate, often in a head-to-tail fashion called a stream. Using experiment and numerical simulation, we study the sensitivity of streams to perturbations in the cyclic AMP concentration field. We find that the stability of the streams requires cells to shape the cyclic AMP field through localized secretion and degradation. In addition, we find that the streaming phenotype is sensitive to changes in the substrate properties, with slicker surfaces leading to longer, more branched streams that yield large initial aggregates.
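The interplay of localized secretion, diffusion, and degradation that shapes the cyclic AMP field can be sketched with a minimal one-dimensional reaction-diffusion model. This is an illustrative toy (a single secreting "cell", uniform degradation, explicit Euler time-stepping); parameter values are arbitrary, not fitted to D. discoideum:

```python
import numpy as np

def diffuse_step(c, D, secretion, degradation, dx, dt):
    """One explicit Euler step of a 1-D reaction-diffusion model:
    dc/dt = D * c_xx + secretion - degradation * c (periodic boundaries)."""
    lap = (np.roll(c, 1) - 2 * c + np.roll(c, -1)) / dx**2
    return c + dt * (D * lap + secretion - degradation * c)

# A point source at x = 50 secretes cAMP; degradation keeps the
# concentration profile localized around the source.
n = 100
c = np.zeros(n)
src = np.zeros(n)
src[50] = 1.0
for _ in range(2000):
    c = diffuse_step(c, D=1.0, secretion=src, degradation=0.1, dx=1.0, dt=0.2)
peak = int(np.argmax(c))
```

With degradation present, the steady-state profile decays over a length scale of roughly sqrt(D/degradation) around the source, which is the kind of localized field the abstract argues is needed for stream stability.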

  16. Auditory memory function in expert chess players.

    Science.gov (United States)

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

    Chess is a game that involves many aspects of high-level cognition, such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, as a kind of memory, can be influenced by the strengthening processes that follow long-term chess playing, like other behavioral skills, because of common processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert chess players using the Persian version of the dichotic auditory-verbal memory test. The Persian version of the dichotic auditory-verbal memory test was performed for 30 expert chess players aged 20-35 years and 30 matched non-chess players; the participants in both groups were randomly selected. The performance of the two groups was compared by independent-samples t-test using SPSS version 21. The mean scores of the dichotic auditory-verbal memory test in the two groups, expert chess players and non-chess players, revealed a significant difference (p ≤ 0.001). The difference between the ear scores for expert chess players (p = 0.023) and non-chess players (p = 0.013) was significant. Gender had no effect on the test results. Auditory memory function in expert chess players was significantly better compared to non-chess players. It seems that increased auditory memory function is related to the strengthening of cognitive performance due to playing chess for a long time.

  17. Looming auditory collision warnings for driving.

    Science.gov (United States)

    Gray, Rob

    2011-02-01

    A driving simulator was used to compare the effectiveness of increasing intensity (looming) auditory warning signals with other types of auditory warnings. Auditory warnings have been shown to speed driver reaction time in rear-end collision situations; however, it is not clear which type of signal is the most effective. Although verbal and symbolic (e.g., a car horn) warnings have faster response times than abstract warnings, they often lead to more response errors. Participants (N=20) experienced four nonlooming auditory warnings (constant intensity, pulsed, ramped, and car horn), three looming auditory warnings ("veridical," "early," and "late"), and a no-warning condition. In 80% of the trials, warnings were activated when a critical response was required, and in 20% of the trials, the warnings were false alarms. For the early (late) looming warnings, the rate of change of intensity signaled a time to collision (TTC) that was shorter (longer) than the actual TTC. Veridical looming and car horn warnings had significantly faster brake reaction times (BRT) compared with the other nonlooming warnings (by 80 to 160 ms). However, the number of braking responses in false alarm conditions was significantly greater for the car horn. BRT increased significantly and systematically as the TTC signaled by the looming warning was changed from early to veridical to late. Looming auditory warnings produce the best combination of response speed and accuracy. The results indicate that looming auditory warnings can be used to effectively warn a driver about an impending collision.

  18. Auditory Impairment in Young Type 1 Diabetics.

    Science.gov (United States)

    Hou, Yanlian; Xiao, Xiaoyan; Ren, Jianmin; Wang, Yajuan; Zhao, Faming

    2015-10-01

    More attention has recently been focused on auditory impairment in young type 1 diabetics. This study aimed to evaluate the auditory function of young type 1 diabetics and the correlation between clinical indexes and hearing impairment. We evaluated the auditory function of 50 type 1 diabetics and 50 healthy subjects. Clinical indexes were measured and their relation to auditory function was analyzed. Type 1 diabetic patients demonstrated a deficit, with elevated thresholds at the right and left ears, when compared to healthy controls. The latencies of the right ear (wave V and interwave I-V) and the left ear (waves III and V and interwaves I-III and I-V) in the diabetic group were significantly increased compared to those in control subjects (p < 0.01). Type 1 diabetics showed higher auditory thresholds, slower auditory conduction times and cochlear impairment. HDL-cholesterol, diabetes duration, systemic blood pressure, microalbuminuria, GHbA1C, triglyceride, and age may affect the auditory function of type 1 diabetics. Copyright © 2015 IMSS. Published by Elsevier Inc. All rights reserved.

  19. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences.

    Science.gov (United States)

    Getzmann, Stephan; Lewald, Jörg; Falkenstein, Michael

    2014-01-01

    Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called "cocktail-party" problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  20. Using auditory pre-information to solve the cocktail-party problem: electrophysiological evidence for age-specific differences

    Directory of Open Access Journals (Sweden)

    Stephan eGetzmann

    2014-12-01

    Full Text Available Speech understanding in complex and dynamic listening environments requires (a) auditory scene analysis, namely auditory object formation and segregation, and (b) allocation of the attentional focus to the talker of interest. There is evidence that pre-information is actively used to facilitate these two aspects of the so-called cocktail-party problem. Here, a simulated multi-talker scenario was combined with electroencephalography to study scene analysis and allocation of attention in young and middle-aged adults. Sequences of short words (combinations of brief company names and stock-price values) from four talkers at different locations were simultaneously presented, and the detection of target names and the discrimination between critical target values were assessed. Immediately prior to speech sequences, auditory pre-information was provided via cues that either prepared auditory scene analysis or attentional focusing, or non-specific pre-information was given. While performance was generally better in younger than older participants, both age groups benefited from auditory pre-information. The analysis of the cue-related event-related potentials revealed age-specific differences in the use of pre-cues: Younger adults showed a pronounced N2 component, suggesting early inhibition of concurrent speech stimuli; older adults exhibited a stronger late P3 component, suggesting increased resource allocation to process the pre-information. In sum, the results argue for an age-specific utilization of auditory pre-information to improve listening in complex dynamic auditory environments.

  1. Auditory Connections and Functions of Prefrontal Cortex

    Directory of Open Access Journals (Sweden)

    Bethany ePlakke

    2014-07-01

    Full Text Available The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.

  2. Auditory connections and functions of prefrontal cortex

    Science.gov (United States)

    Plakke, Bethany; Romanski, Lizabeth M.

    2014-01-01

    The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition. PMID:25100931

  3. Neural sensitivity to statistical regularities as a fundamental biological process that underlies auditory learning: the role of musical practice.

    Science.gov (United States)

    François, Clément; Schön, Daniele

    2014-02-01

    There is increasing evidence that humans and other nonhuman mammals are sensitive to the statistical structure of auditory input. Indeed, neural sensitivity to statistical regularities seems to be a fundamental biological property underlying auditory learning. In the case of speech, statistical regularities play a crucial role in the acquisition of several linguistic features, from phonotactic to more complex rules such as morphosyntactic rules. Interestingly, a similar sensitivity has been shown with non-speech streams: sequences of sounds changing in frequency or timbre can be segmented on the sole basis of conditional probabilities between adjacent sounds. We recently ran a set of cross-sectional and longitudinal experiments showing that merging music and speech information in song facilitates stream segmentation and, further, that musical practice enhances sensitivity to statistical regularities in speech at both neural and behavioral levels. Based on recent findings showing the involvement of a fronto-temporal network in speech segmentation, we defend the idea that enhanced auditory learning observed in musicians originates via at least three distinct pathways: enhanced low-level auditory processing, enhanced phono-articulatory mapping via the left Inferior Frontal Gyrus and Pre-Motor cortex and increased functional connectivity within the audio-motor network. Finally, we discuss how these data predict a beneficial use of music for optimizing speech acquisition in both normal and impaired populations. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. Stellar Streams Discovered in the Dark Energy Survey

    Energy Technology Data Exchange (ETDEWEB)

    Shipp, N.; et al.

    2018-01-09

    We perform a search for stellar streams around the Milky Way using the first three years of multi-band optical imaging data from the Dark Energy Survey (DES). We use DES data covering $\sim 5000$ sq. deg. to a depth of $g > 23.5$ with a relative photometric calibration uncertainty of $< 1\%$. This data set yields unprecedented sensitivity to the stellar density field in the southern celestial hemisphere, enabling the detection of faint stellar streams to a heliocentric distance of $\sim 50$ kpc. We search for stellar streams using a matched filter in color-magnitude space derived from a synthetic isochrone of an old, metal-poor stellar population. Our detection technique recovers four previously known thin stellar streams: Phoenix, ATLAS, Tucana III, and a possible extension of Molonglo. In addition, we report the discovery of eleven new stellar streams. In general, the new streams detected by DES are fainter, more distant, and of lower surface brightness than streams detected by similar techniques in previous photometric surveys. As a by-product of our stellar stream search, we find evidence for extra-tidal stellar structure associated with four globular clusters: NGC 288, NGC 1261, NGC 1851, and NGC 1904. The ever-growing sample of stellar streams will provide insight into the formation of the Galactic stellar halo, the Milky Way gravitational potential, as well as the large- and small-scale distribution of dark matter around the Milky Way.
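The matched-filter idea (weighting each star by its consistency with a synthetic isochrone in color-magnitude space, so that stream members add coherently to the density map while field stars are suppressed) can be sketched as follows. This toy version assumes a Gaussian filter and a flat background; the survey's actual filter is built from a stellar-population isochrone and the measured background density:

```python
import numpy as np

def matched_filter_weights(color, mag, iso_color_of_mag, color_width):
    """Weight each star by how close its color is to a synthetic
    isochrone at its magnitude: a Gaussian matched filter in
    color-magnitude space (background density assumed flat here)."""
    dc = color - iso_color_of_mag(mag)
    return np.exp(-0.5 * (dc / color_width) ** 2)

# Toy demo: a vertical "isochrone" at color 0.5. Stream stars sit on
# it and get weight 1; random field stars mostly get tiny weights.
rng = np.random.default_rng(1)
mags = np.linspace(18.0, 23.5, 100)
iso = lambda m: np.full_like(m, 0.5)
w_stream = matched_filter_weights(np.full(100, 0.5), mags, iso, color_width=0.05)
w_field = matched_filter_weights(rng.uniform(0.0, 1.5, 100), mags, iso, color_width=0.05)
```

Summing such weights in sky pixels, rather than simply counting stars, is what boosts the contrast of faint, low-surface-brightness streams against the field population.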

  5. Assessing the aging effect on auditory-verbal memory by Persian version of dichotic auditory verbal memory test

    Directory of Open Access Journals (Sweden)

    Zahra Shahidipour

    2014-01-01

    Conclusion: Based on the obtained results, a significant reduction in auditory memory was seen in the aged group, and the Persian version of the dichotic auditory-verbal memory test, like many other auditory-verbal memory tests, showed the effects of aging on auditory-verbal memory performance.

  6. Effects of an Auditory Lateralization Training in Children Suspected to Central Auditory Processing Disorder

    OpenAIRE

    Lotfi, Yones; Moosavi, Abdollah; Abdollahi, Farzaneh Zamiri; BAKHSHI, Enayatollah; Sadjedi, Hamed

    2016-01-01

    Background and Objectives Central auditory processing disorder [(C)APD] refers to a deficit in the processing of auditory stimuli in the nervous system that is not due to higher-order language or cognitive factors. One of the problems in children with (C)APD is spatial difficulty, which has been overlooked despite its significance. Localization is the auditory ability to detect the position of sound sources in space and can help to distinguish desired speech from other simultaneous sound sources. Aim o...

  7. Use of auditory learning to manage listening problems in children

    OpenAIRE

    Moore, David R.; Halliday, Lorna F.; Amitay, Sygal

    2008-01-01

    This paper reviews recent studies that have used adaptive auditory training to address communication problems experienced by some children in their everyday life. It considers the auditory contribution to developmental listening and language problems and the underlying principles of auditory learning that may drive further refinement of auditory learning applications. Following strong claims that language and listening skills in children could be improved by auditory learning, researchers hav...

  8. AUDITORY REACTION TIME IN BASKETBALL PLAYERS AND HEALTHY CONTROLS

    OpenAIRE

    Ghuntla Tejas P.; Mehta Hemant B.; Gokhale Pradnya A.; Shah Chinmay J.

    2013-01-01

    Reaction is a purposeful voluntary response to a stimulus, such as a visual or auditory stimulus. Auditory reaction time is the time required to respond to an auditory stimulus. Quickness of response is very important in games like basketball. This study was conducted to compare the auditory reaction times of basketball players and healthy controls. Auditory reaction time was measured with a reaction time instrument in healthy controls and basketball players. Simple reaction time and choice reaction time...

  9. Live and online: using co-streaming to reach users.

    Science.gov (United States)

    Handler, Lara

    2011-01-01

    The increase in distance education students and the changing preferences for online instruction led the Health Sciences Library to seek creative approaches to traditional classroom instruction. Library instructors compared two different class formats: online-only classes and in-person classes with online sections. The second format, called "co-streaming," provided instruction in traditional classroom and virtual environments at the same time. A postclass survey was used to gather users' evaluations of the instruction and the format via which it was offered. This paper examines the user response to, and satisfaction with, the co-streaming classes.

  10. First Branchial Cleft Fistula Associated with External Auditory Canal Stenosis and Middle Ear Cholesteatoma

    Directory of Open Access Journals (Sweden)

    shahin abdollahi fakhim

    2014-10-01

    Full Text Available Introduction: First branchial cleft anomalies manifest with duplication of the external auditory canal. Case Report: This report features a rare case of microtia and congenital middle ear and canal cholesteatoma with a first branchial fistula. External auditory canal stenosis was complicated by middle ear and external canal cholesteatoma, but a branchial fistula opening at the zygomatic root and a sinus in the helical root may explain this feature. A canal wall down mastoidectomy with canaloplasty and wide meatoplasty was performed. The branchial cleft was excised through parotidectomy and facial nerve dissection. Conclusion: It should be considered that canal stenosis in such cases can induce cholesteatoma formation in the auditory canal and middle ear.

  11. Role of the right inferior parietal cortex in auditory selective attention: An rTMS study.

    Science.gov (United States)

    Bareham, Corinne A; Georgieva, Stanimira D; Kamke, Marc R; Lloyd, David; Bekinschtein, Tristan A; Mattingley, Jason B

    2018-02-01

    Selective attention is the process of directing limited capacity resources to behaviourally relevant stimuli while ignoring competing stimuli that are currently irrelevant. Studies in healthy human participants and in individuals with focal brain lesions have suggested that the right parietal cortex is crucial for resolving competition for attention. Following right-hemisphere damage, for example, patients may have difficulty reporting a brief, left-sided stimulus if it occurs with a competitor on the right, even though the same left stimulus is reported normally when it occurs alone. Such "extinction" of contralesional stimuli has been documented for all the major sense modalities, but it remains unclear whether its occurrence reflects involvement of one or more specific subregions of the temporo-parietal cortex. Here we employed repetitive transcranial magnetic stimulation (rTMS) over the right hemisphere to examine the effect of disruption of two candidate regions - the supramarginal gyrus (SMG) and the superior temporal gyrus (STG) - on auditory selective attention. Eighteen neurologically normal, right-handed participants performed an auditory task, in which they had to detect target digits presented within simultaneous dichotic streams of spoken distractor letters in the left and right channels, both before and after 20 min of 1 Hz rTMS over the SMG, STG or a somatosensory control site (S1). Across blocks, participants were asked to report on auditory streams in the left, right, or both channels, which yielded focused and divided attention conditions. Performance was unchanged for the two focused attention conditions, regardless of stimulation site, but was selectively impaired for contralateral left-sided targets in the divided attention condition following stimulation of the right SMG, but not the STG or S1. Our findings suggest a causal role for the right inferior parietal cortex in auditory selective attention. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. [Assessment of the efficiency of the auditory training in children with dyslalia and auditory processing disorders].

    Science.gov (United States)

    Włodarczyk, Elżbieta; Szkiełkowska, Agata; Skarżyński, Henryk; Piłka, Adam

    2011-01-01

    To assess the effectiveness of auditory training in children with dyslalia and central auditory processing disorders. Material consisted of 50 children aged 7-9 years. Children with articulation disorders stayed under long-term speech therapy care in the Auditory and Phoniatrics Clinic. All children were examined by a laryngologist and a phoniatrician. Assessment included tonal and impedance audiometry and speech therapists' and psychologist's consultations. Additionally, a set of electrophysiological examinations was performed - registration of the N2, P2 and P300 waves - together with a psychoacoustic test of central auditory function: the frequency pattern test (FPT). Next, the children took part in regular auditory training and attended speech therapy. Speech assessment followed treatment and therapy; psychoacoustic tests were again performed and P300 cortical potentials recorded. After that, statistical analyses were performed. The analyses revealed that application of auditory training in patients with dyslalia and other central auditory disorders is very efficient. Auditory training may be a very efficient therapy supporting speech therapy in children suffering from dyslalia coexisting with articulation and central auditory disorders, and in children with educational problems of audiogenic origin. Copyright © 2011 Polish Otolaryngology Society. Published by Elsevier Urban & Partner (Poland). All rights reserved.

  13. Auditory motion capturing ambiguous visual motion

    Directory of Open Access Journals (Sweden)

    Arjen eAlink

    2012-01-01

    Full Text Available In this study, it is demonstrated that moving sounds have an effect on the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing left- or rightwards auditory apparent motion. On the path of auditory apparent motion, visual apparent motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that our participants perceived visual apparent motion stimuli that were ambiguous (equally likely to be perceived as moving left- or rightwards) more often as moving in the same direction than in the opposite direction of auditory apparent motion. During the control experiment, we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture our visual motion percept when visual motion direction is insufficiently determinate, without affecting eye movements.

  14. Reconstructing speech from human auditory cortex.

    Directory of Open Access Journals (Sweden)

    Brian N Pasley

    2012-01-01

    Full Text Available How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
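The linear reconstruction model described (mapping population neural activity back to a stimulus representation such as the auditory spectrogram) is commonly fit with ridge regression. The following is a toy sketch with synthetic data, not the study's decoder:

```python
import numpy as np

def ridge_reconstruction_weights(X, Y, alpha=1.0):
    """Fit linear stimulus-reconstruction weights W mapping neural
    responses X (time x channels) onto a stimulus representation Y
    (time x features): W = (X'X + alpha*I)^-1 X'Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

# Synthetic demo: three "electrodes" record noisy linear mixtures of a
# two-band "spectrogram"; the decoder recovers the bands from X alone.
rng = np.random.default_rng(2)
Y = rng.standard_normal((500, 2))                 # stimulus features over time
X = Y @ rng.standard_normal((2, 3)) + 0.05 * rng.standard_normal((500, 3))
W = ridge_reconstruction_weights(X, Y, alpha=0.1)
Y_hat = X @ W
r = float(np.corrcoef(Y[:, 0], Y_hat[:, 0])[0, 1])
```

The abstract's key point is that such a linear map suffices for slow and intermediate temporal fluctuations, whereas fast fluctuations required a nonlinear modulation-energy representation.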

  15. Auditory Neuropathy Spectrum Disorder (ANSD) (For Parents)

    Science.gov (United States)

    ... to the inner row of hair cells or synapses between the inner hair cells and the auditory ... any other nerve-related problems. Ongoing speech and language testing. A child with ANSD needs regular visits ...

  16. Auditory Feedback and the Online Shopping Experience

    National Research Council Canada - National Science Library

    Ryann Reynolds-McIlnay

    2014-01-01

      The present research proposes that the presence of auditory feedback increases satisfaction with the shopping experience, confidence in the retailer, and the likelihood to return to the retailer...

  17. Environment for Auditory Research Facility (EAR)

    Data.gov (United States)

    Federal Laboratory Consortium — EAR is an auditory perception and communication research center enabling state-of-the-art simulation of various indoor and outdoor acoustic environments. The heart...

  18. Childhood trauma and auditory verbal hallucinations

    NARCIS (Netherlands)

    Daalman, K.; Diederen, K. M. J.; Derks, E. M.; van Lutterveld, R.; Kahn, R. S.; Sommer, Iris E. C.

    2012-01-01

    Background. Hallucinations have consistently been associated with traumatic experiences during childhood. This association appears strongest between physical and sexual abuse and auditory verbal hallucinations (AVH). It remains unclear whether traumatic experiences mainly colour the content of AVH

  19. Presbycusis and auditory brainstem responses: a review

    Directory of Open Access Journals (Sweden)

    Shilpa Khullar

    2011-06-01

    Full Text Available Age-related hearing loss, or presbycusis, is a complex phenomenon consisting of elevation of hearing thresholds as well as changes in auditory processing. It is commonly classified into four categories depending on the cause. Auditory brainstem responses (ABRs) are a type of early evoked potential recorded within the first 10 ms of stimulation. They represent the synchronized activity of the auditory nerve and the brainstem. Some of the changes that occur in the aging auditory system may significantly influence the interpretation of ABRs in comparison with the ABRs of young adults. The waves of ABRs are described in terms of the amplitudes, latencies and interpeak latencies of the different waves. Amplitudes tend to decrease and absolute latencies to increase with advancing age, but these trends are not always clear because of the increase in threshold with advancing age, which acts as a major confounding factor in the interpretation of ABRs.
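The interpeak latencies mentioned are simple differences between absolute wave latencies once the peaks have been identified; a minimal sketch (the example latencies are illustrative only, not normative values):

```python
def interpeak_latencies(peaks_ms):
    """Interpeak latencies (I-III, III-V, I-V) from absolute ABR wave
    latencies given in milliseconds."""
    return {
        "I-III": peaks_ms["III"] - peaks_ms["I"],
        "III-V": peaks_ms["V"] - peaks_ms["III"],
        "I-V": peaks_ms["V"] - peaks_ms["I"],
    }

# Illustrative (not normative) adult wave latencies in ms:
ipl = interpeak_latencies({"I": 1.6, "III": 3.7, "V": 5.6})
```

Interpeak latencies are often preferred over absolute latencies in aging studies precisely because they are less affected by the threshold elevation that confounds absolute measures.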

  20. Auditory stimulation and cardiac autonomic regulation

    Directory of Open Access Journals (Sweden)

    Vitor E. Valenti

    2012-08-01

    Full Text Available Previous studies have already demonstrated that auditory stimulation with music influences the cardiovascular system. In this study, we described the relationship between musical auditory stimulation and heart rate variability. Searches were performed with the Medline, SciELO, Lilacs and Cochrane databases using the following keywords: "auditory stimulation", "autonomic nervous system", "music" and "heart rate variability". The selected studies indicated that there is a strong correlation between noise intensity and vagal-sympathetic balance. Additionally, it was reported that music therapy improved heart rate variability in anthracycline-treated breast cancer patients. It was hypothesized that dopamine release in the striatal system induced by pleasurable songs is involved in cardiac autonomic regulation. Musical auditory stimulation influences heart rate variability through a neural mechanism that is not well understood. Further studies are necessary to develop new therapies to treat cardiovascular disorders.
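Heart rate variability itself is typically quantified from a series of RR intervals; a minimal sketch of two standard time-domain measures (SDNN and RMSSD), with purely illustrative interval values:

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Two standard time-domain heart-rate-variability measures from a
    series of RR intervals in milliseconds: SDNN (overall variability)
    and RMSSD (short-term, vagally mediated variability)."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Illustrative RR series (ms); real recordings span minutes to hours.
sdnn, rmssd = hrv_time_domain([800, 810, 790, 805, 795, 815])
```

Because RMSSD emphasizes beat-to-beat changes, it is the measure most often linked to the vagal-sympathetic balance that the reviewed studies relate to auditory stimulation.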

  1. Effect of omega-3 on auditory system

    Directory of Open Access Journals (Sweden)

    Vida Rahimi

    2014-01-01

Full Text Available Background and Aim: Omega-3 fatty acids have structural and biological roles in the body's various systems, and numerous studies have investigated them; the auditory system is affected as well. The aim of this article was to review the research on the effect of omega-3 on the auditory system. Methods: We searched the Medline, Google Scholar, PubMed, Cochrane Library and SID search engines with the keywords "auditory" and "omega-3", and consulted textbooks on the subject, covering the period between 1970 and 2013. Conclusion: Both excess and deficient amounts of dietary omega-3 fatty acids can harm fetal and infant growth and the development of the brain and central nervous system, especially the auditory system. It is therefore important to determine the adequate dosage of omega-3.

  2. Brain correlates of the orientation of auditory spatial attention onto speaker location in a "cocktail-party" situation.

    Science.gov (United States)

    Lewald, Jörg; Hanenberg, Christina; Getzmann, Stephan

    2016-10-01

    Successful speech perception in complex auditory scenes with multiple competing speakers requires spatial segregation of auditory streams into perceptually distinct and coherent auditory objects and focusing of attention toward the speaker of interest. Here, we focused on the neural basis of this remarkable capacity of the human auditory system and investigated the spatiotemporal sequence of neural activity within the cortical network engaged in solving the "cocktail-party" problem. Twenty-eight subjects localized a target word in the presence of three competing sound sources. The analysis of the ERPs revealed an anterior contralateral subcomponent of the N2 (N2ac), computed as the difference waveform for targets to the left minus targets to the right. The N2ac peaked at about 500 ms after stimulus onset, and its amplitude was correlated with better localization performance. Cortical source localization for the contrast of left versus right targets at the time of the N2ac revealed a maximum in the region around left superior frontal sulcus and frontal eye field, both of which are known to be involved in processing of auditory spatial information. In addition, a posterior-contralateral late positive subcomponent (LPCpc) occurred at a latency of about 700 ms. Both these subcomponents are potential correlates of allocation of spatial attention to the target under cocktail-party conditions. © 2016 Society for Psychophysiological Research.

  3. Stream Water Quality Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — QUAL2K (or Q2K) is a river and stream water quality model that is intended to represent a modernized version of the QUAL2E (or Q2E) model (Brown and Barnwell 1987).

  4. Numerical Modelling of Streams

    DEFF Research Database (Denmark)

    Vestergaard, Kristian

In recent years there has been a sharp increase in the use of numerical water quality models. Numerical water quality modelling can be divided into three steps: hydrodynamic modelling for the determination of stream flow and water levels; modelling of transport and dispersion of a conservative dissolved substance; and modelling of chemical and biological turnover of substances.
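The second step can be illustrated with a minimal sketch, assuming an explicit upwind finite-difference solution of the one-dimensional advection-dispersion equation for a conservative dissolved substance (all parameter values are invented for illustration and are not from the cited work):

```python
def advect_disperse(c, u, D, dx, dt, steps):
    """Advance a concentration profile c (list of floats) in time using
    upwind advection (velocity u) and central-difference dispersion (D).
    Boundary cells are held at their initial values."""
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            advection = -u * (c[i] - c[i - 1]) / dx
            dispersion = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
            new[i] = c[i] + dt * (advection + dispersion)
        c = new
    return c

# A pollutant pulse released at cell 10 is carried downstream and smeared out.
profile = [0.0] * 50
profile[10] = 100.0
result = advect_disperse(profile, u=0.5, D=0.1, dx=1.0, dt=0.5, steps=40)
```

The explicit scheme is only stable when u·dt/dx and D·dt/dx² are small (here 0.25 and 0.05); production water quality models use more robust numerical schemes.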

  5. Streaming-video produktion

    DEFF Research Database (Denmark)

    Grønkjær, Poul

    2004-01-01

E-learning Lab at Aalborg University has, as part of the research project Virtual Learning Forms and Learning Environments, carried out a series of practical experiments with streaming-video productions. The purpose of this article is to pass on these experiences. The article describes the entire production process, from idea to finished product: different types of presentations, dramaturgical considerations, and a concept sketch. Streaming-video technology is now so mature, with a satisfactory audiovisual quality, that we can begin to focus on which content is well suited to being made available independently of time and place. The article closes with a list of references, including an overview of the streaming-video productions on which it is based.

  6. The Rabbit Stream Cipher

    DEFF Research Database (Denmark)

    Boesgaard, Martin; Vesterager, Mette; Zenner, Erik

    2008-01-01

The stream cipher Rabbit was first presented at FSE 2003, and no attacks against it have been published to date. With a measured encryption/decryption speed of 3.7 clock cycles per byte on a Pentium III processor, Rabbit also provides very high performance. This paper gives a concise description of the Rabbit design and some of the available cryptanalytic results.
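Rabbit's actual next-state function is described in the cited paper; as a hedged illustration only, the synchronous stream-cipher structure it instantiates can be sketched: a keyed generator produces a keystream that is XORed with the plaintext, and decryption is the identical operation. The toy generator below is purely illustrative and has none of Rabbit's cryptographic strength:

```python
def toy_keystream(key: bytes):
    """Illustrative keystream generator (a simple LCG seeded by the key).
    NOT Rabbit and NOT secure; it only demonstrates the structure."""
    state = int.from_bytes(key, "big") or 1
    while True:
        state = (1103515245 * state + 12345) % (1 << 64)
        yield (state >> 32) & 0xFF

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Synchronous stream cipher: ciphertext = plaintext XOR keystream.
    Applying the same function again with the same key decrypts."""
    ks = toy_keystream(key)
    return bytes(b ^ next(ks) for b in data)

msg = b"attack at dawn"
ct = xor_cipher(msg, b"secret key")
```

Because XOR is its own inverse, `xor_cipher(ct, b"secret key")` recovers `msg`; the entire security burden falls on the keystream generator, which is what the Rabbit design specifies.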

  7. Music Streaming in Denmark

    DEFF Research Database (Denmark)

    Pedersen, Rasmus Rex

    This report analyses how a ’per user’ settlement model differs from the ‘pro rata’ model currently used. The analysis is based on data for all streams by WiMP users in Denmark during August 2013. The analysis has been conducted in collaboration with Christian Schlelein from Koda on the basis of d...
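The difference between the two settlement models can be shown with a toy computation (all figures invented for illustration, not from the WiMP data): pro-rata splits the total royalty pool by each artist's share of all streams, while per-user splits each subscriber's fee by that subscriber's own listening only.

```python
def pro_rata(pool, plays_by_user):
    """Pool split by each artist's share of ALL streams."""
    totals = {}
    for plays in plays_by_user.values():
        for artist, n in plays.items():
            totals[artist] = totals.get(artist, 0) + n
    grand = sum(totals.values())
    return {a: pool * n / grand for a, n in totals.items()}

def per_user(fee, plays_by_user):
    """Each subscriber's fee split by that subscriber's own streams."""
    payout = {}
    for plays in plays_by_user.values():
        user_total = sum(plays.values())
        for artist, n in plays.items():
            payout[artist] = payout.get(artist, 0) + fee * n / user_total
    return payout

# One heavy listener dominates the pro-rata pool but not the per-user split:
plays = {"heavy": {"A": 900}, "casual": {"B": 10}}
pool = 2 * 10.0  # two subscribers paying 10 each
pr = pro_rata(pool, plays)    # A gets ~19.78, B gets ~0.22
pu = per_user(10.0, plays)    # A gets 10.0, B gets 10.0
```

The example makes the structural point of the report concrete: under pro-rata, artist B's payout depends on how much *other* subscribers listen, whereas under per-user it depends only on B's own listeners.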

  8. Giant Intergalactic Gas Stream Longer Than Thought

    Science.gov (United States)

    2010-01-01

"The new age of the stream puts its beginning at about the time when the two Magellanic Clouds may have passed close to each other, triggering massive bursts of star formation," Nidever explained. "The strong stellar winds and supernova explosions from that burst of star formation could have blown out the gas and started it flowing toward the Milky Way," he said. "This fits nicely with some of our earlier work that showed evidence for just such blowouts in the Magellanic Clouds," said Steven Majewski, of the University of Virginia. Earlier explanations for the stream's cause required the Magellanic Clouds to pass much closer to the Milky Way, but recent orbital simulations have cast doubt on such mechanisms. Nidever and Majewski worked with Butler Burton of the Leiden Observatory and the National Radio Astronomy Observatory, and Lou Nigra of the University of Wisconsin. In addition to presenting the results to the American Astronomical Society, the scientists have submitted a paper to the Astrophysical Journal.

  9. Riparian deforestation, stream narrowing, and loss of stream ecosystem services

    OpenAIRE

    Sweeney, Bernard W.; Bott, Thomas L.; Jackson, John K.; Kaplan, Louis A.; Newbold, J. Denis; Standley, Laurel J.; Hession, W. Cully; Horwitz, Richard J.

    2004-01-01

    A study of 16 streams in eastern North America shows that riparian deforestation causes channel narrowing, which reduces the total amount of stream habitat and ecosystem per unit channel length and compromises in-stream processing of pollutants. Wide forest reaches had more macroinvertebrates, total ecosystem processing of organic matter, and nitrogen uptake per unit channel length than contiguous narrow deforested reaches. Stream narrowing nullified any potential advantages of deforestation ...

  10. Auditory memory function in expert chess players

    OpenAIRE

    Fattahi, Fariba; Geshani, Ahmad; Jafari, Zahra; Jalaie, Shohreh; Salman Mahini, Mona

    2015-01-01

Background: Chess is a game that involves many aspects of high-level cognition, such as memory, attention, focus and problem solving. Long-term practice of chess can improve cognitive performance and behavioral skills. Auditory memory, like other behavioral skills, can be strengthened by long-term chess playing because of shared processing pathways in the brain. The purpose of this study was to evaluate the auditory memory function of expert...

  11. Mental concerts: musical imagery and auditory cortex.

    Science.gov (United States)

    Zatorre, Robert J; Halpern, Andrea R

    2005-07-07

    Most people intuitively understand what it means to "hear a tune in your head." Converging evidence now indicates that auditory cortical areas can be recruited even in the absence of sound and that this corresponds to the phenomenological experience of imagining music. We discuss these findings as well as some methodological challenges. We also consider the role of core versus belt areas in musical imagery, the relation between auditory and motor systems during imagery of music performance, and practical implications of this research.

  12. [Auditory performance analyses of cochlear implanted patients].

    Science.gov (United States)

    Ozdemir, Süleyman; Kıroğlu, Mete; Tuncer, Ulkü; Sahin, Rasim; Tarkan, Ozgür; Sürmelioğlu, Ozgür

    2011-01-01

The aim of this study was to analyze the auditory performance development of cochlear implanted patients. The effects of age at implantation, gender, implanted ear and model of the cochlear implant on the patients' auditory performance were investigated. Twenty-eight patients (12 boys, 16 girls) with congenital prelingual hearing loss who underwent cochlear implant surgery at our clinic and had a follow-up of at least 18 months were selected for the study. The Listening Progress Profile (LiP), Monosyllable-Trochee-Polysyllable (MTP) and Meaningful Auditory Integration Scale (MAIS) tests were performed to analyze the auditory performances of the patients. To determine the effect of age at implantation on auditory performance, patients were assigned to two groups: group 1 (implantation age ≤60 months, mean 44.8 months) and group 2 (implantation age >60 months, mean 100.6 months). Group 2 had higher preoperative test scores than group 1, but after cochlear implant use, the auditory performance levels of the patients in group 1 improved faster and equalized to those of the patients in group 2 after 12-18 months. Our data showed that variables such as sex, implanted ear or model of the cochlear implant did not have any statistically significant effect on the auditory performance of the patients after cochlear implantation. We found a negative correlation between implantation age and auditory performance improvement. We observed that children implanted at a young age showed quicker language development and went on to have more success in reading, writing and other educational skills.

  13. Streaming GNSS Data over the Internet

    Science.gov (United States)

    Gebhard, H.; Weber, G.; Dettmering, D.; Groeschel, M.

    2003-04-01

    Due to the increased capacity of the Internet, applications which transfer continuous data-streams by IP-packages, such as Internet Radio or Internet Video-on-Demand, have become well-established services. Growing mobile IP-Networks like GSM, GPRS, EDGE, or UMTS furthermore allow the mobile use of these real-time services. Compared to Multimedia applications, the bandwidth required for streaming GNSS data is relatively small. As a consequence, the global Internet can be used for the real-time collection and exchange of GNSS data, as well as for broadcasting derived differential products. Introducing the real time streaming of GNSS data via Internet as a professional service is demanding with respect to network transparency, network security, program stability, access control, remote administration, scalability and client simplicity. This paper will discuss several possible technical solutions: Unicast vs. IP-Multicast, TCP vs. UDP, Client/Server vs. Client/Server/Splitter technologies. Based on this discussion, a novel HTTP-based technique for streaming GNSS data to mobile clients over the Internet is introduced. It allows simultaneous access of a large number of PDAs, Laptops, or GNSS receivers to a broadcasting host via Mobile IP-Networks. The technique establishes a format called "Networked Transport of RTCM via Internet Protocol" (Ntrip), due to its main application being the dissemination of differential GNSS corrections in the popular RTCM-104 streaming format. Sufficient precision is obtained if data are not older than a few seconds. As the RTCM standard is used worldwide, most GNSS receivers accept it. This paper also focuses on system, implementation, and availability aspects of Ntrip-based differential GNSS services. The Ntrip components (NtripSources, NtripServers, NtripCaster, NtripClients) will be introduced, and software implementations will be described.
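The Ntrip 1.0 client handshake rides on plain HTTP, as the abstract describes: the client issues a GET for a caster "mountpoint" with basic authentication, and the caster answers with the RTCM byte stream. A minimal sketch of request construction follows; the mountpoint name, credentials, and User-Agent string are placeholders, and exact header requirements vary by caster.

```python
import base64

def ntrip_request(mountpoint: str, user: str, password: str) -> bytes:
    """Build an Ntrip 1.0 request for a caster mountpoint.
    The caster typically replies 'ICY 200 OK' followed by RTCM data."""
    cred = base64.b64encode(f"{user}:{password}".encode()).decode()
    return (
        f"GET /{mountpoint} HTTP/1.0\r\n"
        f"User-Agent: NTRIP ExampleClient/1.0\r\n"
        f"Authorization: Basic {cred}\r\n"
        "\r\n"
    ).encode()

req = ntrip_request("EXAMPLE00", "user", "pass")
# In practice, req would be written to a TCP socket opened to the caster
# (Ntrip casters conventionally listen on port 2101).
```

Because the transport is ordinary HTTP over TCP, the same request works from PDAs, laptops, or GNSS receivers, which is the client-simplicity property the paper emphasizes.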

  14. Faster Sound Stream Segmentation in Musicians than in Nonmusicians

    Science.gov (United States)

    François, Clément; Jaillet, Florent; Takerkart, Sylvain; Schön, Daniele

    2014-01-01

    The musician's brain is considered as a good model of brain plasticity as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. By using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves that are the ongoing brain dynamics recorded as the learning is taking place. While musicians show an inverted U shaped learning curve, nonmusicians show a linear learning curve. Analyses of Event-Related Potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results bring evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of positive transfer of training effect from music to sound stream segmentation in general. PMID:25014068

  15. Faster sound stream segmentation in musicians than in nonmusicians.

    Directory of Open Access Journals (Sweden)

    Clément François

Full Text Available The musician's brain is considered as a good model of brain plasticity as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. By using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves that are the ongoing brain dynamics recorded as the learning is taking place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of Event-Related Potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results bring evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of positive transfer of training effect from music to sound stream segmentation in general.

  16. Glial cell contributions to auditory brainstem development

    Directory of Open Access Journals (Sweden)

    Karina S Cramer

    2016-10-01

    Full Text Available Glial cells, previously thought to have generally supporting roles in the central nervous system, are emerging as essential contributors to multiple aspects of neuronal circuit function and development. This review focuses on the contributions of glial cells to the development of specialized auditory pathways in the brainstem. These pathways display specialized synapses and an unusually high degree of precision in circuitry that enables sound source localization. The development of these pathways thus requires highly coordinated molecular and cellular mechanisms. Several classes of glial cells, including astrocytes, oligodendrocytes, and microglia, have now been explored in these circuits in both avian and mammalian brainstems. Distinct populations of astrocytes are found over the course of auditory brainstem maturation. Early appearing astrocytes are associated with spatial compartments in the avian auditory brainstem. Factors from late appearing astrocytes promote synaptogenesis and dendritic maturation, and astrocytes remain integral parts of specialized auditory synapses. Oligodendrocytes play a unique role in both birds and mammals in highly regulated myelination essential for proper timing to decipher interaural cues. Microglia arise early in brainstem development and may contribute to maturation of auditory pathways. Together these studies demonstrate the importance of non-neuronal cells in the assembly of specialized auditory brainstem circuits.

  17. Long Latency Auditory Evoked Potentials during Meditation.

    Science.gov (United States)

    Telles, Shirley; Deepeshwar, Singh; Naveen, Kalkuni Visweswaraiah; Pailoor, Subramanya

    2015-10-01

The auditory sensory pathway has been studied in meditators, using midlatency and short latency auditory evoked potentials. The present study evaluated long latency auditory evoked potentials (LLAEPs) during meditation. Sixty male participants, aged between 18 and 31 years (group mean±SD, 20.5±3.8 years), were assessed in 4 mental states based on descriptions in the traditional texts. They were (a) random thinking, (b) nonmeditative focusing, (c) meditative focusing, and (d) meditation. The order of the sessions was randomly assigned. The LLAEP components studied were P1 (40-60 ms), N1 (75-115 ms), P2 (120-180 ms), and N2 (180-280 ms). For each component, the peak amplitude and peak latency were measured from the prestimulus baseline. There was a significant decrease in the peak latency of the P2 component during and after meditation (p < 0.05). These findings suggest that meditation facilitates the processing of information in the auditory association cortex, whereas the number of neurons recruited was smaller during random thinking and non-meditative focused thinking at the level of the secondary auditory cortex, auditory association cortex and anterior cingulate cortex. © EEG and Clinical Neuroscience Society (ECNS) 2014.
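The peak measurements described above (amplitude and latency of P1, N1, P2 and N2, relative to a prestimulus baseline) amount to a window search over a sampled waveform. A minimal sketch follows; the component windows match the text, but the sampling rate and the waveform itself are synthetic stand-ins.

```python
def peak_in_window(samples, fs, t0_ms, t1_ms, polarity, baseline=0.0):
    """Return (latency_ms, amplitude) of the positive (+1) or negative (-1)
    peak inside [t0_ms, t1_ms]. Amplitude is measured relative to the
    prestimulus baseline mean, passed in as `baseline`."""
    i0 = int(t0_ms * fs / 1000)
    i1 = int(t1_ms * fs / 1000)
    window = samples[i0:i1 + 1]
    value = max(window) if polarity > 0 else min(window)
    idx = i0 + window.index(value)
    return idx * 1000.0 / fs, value - baseline

# Synthetic waveform sampled at 1 kHz with one positive bump at 150 ms,
# inside the P2 window (120-180 ms).
fs = 1000
wave = [0.0] * 300
for i in range(120, 181):
    wave[i] = 5.0 - abs(i - 150) / 10.0   # triangular peak, apex at 150 ms

lat, amp = peak_in_window(wave, fs, 120, 180, polarity=+1)
# lat == 150.0 (ms), amp == 5.0 relative to a zero baseline
```

Real evoked-potential pipelines average many stimulus-locked epochs before this step so that single peaks stand out from background EEG.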

  18. Speech Evoked Auditory Brainstem Response in Stuttering

    Directory of Open Access Journals (Sweden)

    Ali Akbar Tahaei

    2014-01-01

Full Text Available Auditory processing deficits have been hypothesized as an underlying mechanism for stuttering. Previous studies have demonstrated abnormal responses in subjects with persistent developmental stuttering (PDS) at the higher levels of the central auditory system using speech stimuli. Recently, the potential usefulness of speech evoked auditory brainstem responses in central auditory processing disorders has been emphasized. The current study used the speech evoked ABR to investigate the hypothesis that subjects with PDS have specific auditory perceptual dysfunction. Objectives. To determine whether brainstem responses to speech stimuli differ between PDS subjects and normal fluent speakers. Methods. Twenty-five subjects with PDS participated in this study. The speech-ABRs were elicited by the 5-formant synthesized syllable /da/, with a duration of 40 ms. Results. There were significant group differences for the onset and offset transient peaks. Subjects with PDS had longer latencies for the onset and offset peaks relative to the control group. Conclusions. Subjects with PDS showed deficient neural timing in the early stages of the auditory pathway, consistent with temporal processing deficits; this abnormal timing may underlie their disfluency.

  19. Investigating bottom-up auditory attention

    Directory of Open Access Journals (Sweden)

    Emine Merve Kaya

    2014-05-01

Full Text Available Bottom-up attention is a sensory-driven selection mechanism that directs perception towards a subset of the stimulus that is considered salient, or attention-grabbing. Most studies of bottom-up auditory attention have adapted frameworks similar to visual attention models, whereby local or global contrast is a central concept in defining salient elements in a scene. In the current study, we take a more fundamental approach to modeling auditory attention, providing the first examination of the space of auditory saliency spanning pitch, intensity and timbre, and shedding light on complex interactions among these features. Informed by psychoacoustic results, we develop a computational model of auditory saliency implementing a novel attentional framework, guided by processes hypothesized to take place in the auditory pathway. In particular, the model tests the hypothesis that perception tracks the evolution of sound events in a multidimensional feature space, and flags any deviation from background statistics as salient. Predictions from the model corroborate the relationship between bottom-up auditory attention and statistical inference, and argue for a potential role of predictive coding as a mechanism for saliency detection in acoustic scenes.
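The core idea of flagging deviations from background statistics as salient can be sketched with a running estimate of a feature's mean and variance and a z-score threshold. This is an illustrative toy, not the cited model: the feature track, decay constant `alpha`, and threshold are all invented.

```python
def salient_events(feature, alpha=0.05, z_thresh=3.0):
    """Flag time points where a feature (e.g. intensity) deviates from an
    exponentially decaying estimate of the background mean and variance."""
    mean, var = feature[0], 1.0   # initial variance is a rough prior
    flags = []
    for x in feature:
        z = abs(x - mean) / (var ** 0.5)
        flags.append(z > z_thresh)
        mean = (1 - alpha) * mean + alpha * x              # track background
        var = (1 - alpha) * var + alpha * (x - mean) ** 2  # track spread
    return flags

# Steady background with one loud outlier at t = 50:
track = [1.0] * 100
track[50] = 10.0
flags = salient_events(track)
# Only the outlier is flagged; the background absorbs it and quiets again.
```

Because the background statistics adapt, a sustained change stops being salient once it becomes the new background, which matches the intuition that saliency tracks novelty rather than absolute level.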

  20. Brain activity during auditory and visual phonological, spatial and simple discrimination tasks.

    Science.gov (United States)

    Salo, Emma; Rinne, Teemu; Salonen, Oili; Alho, Kimmo

    2013-02-16

    We used functional magnetic resonance imaging to measure human brain activity during tasks demanding selective attention to auditory or visual stimuli delivered in concurrent streams. Auditory stimuli were syllables spoken by different voices and occurring in central or peripheral space. Visual stimuli were centrally or more peripherally presented letters in darker or lighter fonts. The participants performed a phonological, spatial or "simple" (speaker-gender or font-shade) discrimination task in either modality. Within each modality, we expected a clear distinction between brain activations related to nonspatial and spatial processing, as reported in previous studies. However, within each modality, different tasks activated largely overlapping areas in modality-specific (auditory and visual) cortices, as well as in the parietal and frontal brain regions. These overlaps may be due to effects of attention common for all three tasks within each modality or interaction of processing task-relevant features and varying task-irrelevant features in the attended-modality stimuli. Nevertheless, brain activations caused by auditory and visual phonological tasks overlapped in the left mid-lateral prefrontal cortex, while those caused by the auditory and visual spatial tasks overlapped in the inferior parietal cortex. These overlapping activations reveal areas of multimodal phonological and spatial processing. There was also some evidence for intermodal attention-related interaction. Most importantly, activity in the superior temporal sulcus elicited by unattended speech sounds was attenuated during the visual phonological task in comparison with the other visual tasks. This effect might be related to suppression of processing irrelevant speech presumably distracting the phonological task involving the letters. Copyright © 2012 Elsevier B.V. All rights reserved.

  1. Small Streams - 50 ft Setback

    Data.gov (United States)

    Vermont Center for Geographic Information — This dataset is streams extracted from the VHD that have a drainage area of less than two square miles. These streams are given a simple 50-foot setback from top of...

  2. Query Processing on Data Streams

    OpenAIRE

    Stegmaier, Bernhard

    2007-01-01

    Data stream processing is currently gaining importance due to the rapid increase in data volumes and developments in novel application areas like e-science, e-health, and e-business. In this thesis, we propose an architecture for a data stream management system and investigate methods for query processing on data streams in such systems. In contrast to traditional database management systems (DBMSs), queries on data streams constitute continuous subscriptions for retrieving interesting data r...
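A continuous query of the kind described above is evaluated incrementally as tuples arrive, commonly over a sliding window rather than a stored relation. A minimal sketch (the window size and the aggregate are invented for illustration):

```python
from collections import deque

def windowed_avg(stream, window=3):
    """Continuous query: emit the average of the last `window` items
    every time a new item arrives (count-based sliding window)."""
    buf = deque(maxlen=window)   # old items fall off automatically
    for item in stream:
        buf.append(item)
        yield sum(buf) / len(buf)

readings = [2, 4, 6, 8]
averages = list(windowed_avg(readings))   # [2.0, 3.0, 4.0, 6.0]
```

The generator never materializes the whole stream, which is the key contrast with a traditional DBMS query plan: state is bounded by the window, not by the input size.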

  3. Analysis of hydraulic characteristics for stream diversion in small stream

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Sang-Jin; Jun, Kye-Won [Chungbuk National University, Cheongju(Korea)

    2001-10-31

This study analyses the hydraulic characteristics of a stream diversion reach in a small stream by numerical model tests, providing basic data for flood analysis and for understanding stream flow characteristics. The hydraulic characteristics of Seoknam stream were analysed using the computer models HEC-RAS (a one-dimensional model) and RMA2 (a two-dimensional finite element model). The results showed that RMA2, which simulates the left bank, main channel, and right bank separately, is more effective than HEC-RAS for analysing flow where channel bends, steep slopes, and complex bed forms affect stream flow characteristics. (author). 13 refs., 3 tabs., 5 figs.

  4. Predictive coding of visual-auditory and motor-auditory events: An electrophysiological study

    NARCIS (Netherlands)

    Stekelenburg, J.J.; Vroomen, J.

    2015-01-01

The amplitude of auditory components of the event-related potential (ERP) is attenuated when sounds are self-generated compared to externally generated sounds. This effect has been ascribed to internal forward models predicting the sensory consequences of one’s own motor actions. Auditory potentials

  5. Amygdala and auditory cortex exhibit distinct sensitivity to relevant acoustic features of auditory emotions.

    Science.gov (United States)

    Pannese, Alessia; Grandjean, Didier; Frühholz, Sascha

    2016-12-01

    Discriminating between auditory signals of different affective value is critical to successful social interaction. It is commonly held that acoustic decoding of such signals occurs in the auditory system, whereas affective decoding occurs in the amygdala. However, given that the amygdala receives direct subcortical projections that bypass the auditory cortex, it is possible that some acoustic decoding occurs in the amygdala as well, when the acoustic features are relevant for affective discrimination. We tested this hypothesis by combining functional neuroimaging with the neurophysiological phenomena of repetition suppression (RS) and repetition enhancement (RE) in human listeners. Our results show that both amygdala and auditory cortex responded differentially to physical voice features, suggesting that the amygdala and auditory cortex decode the affective quality of the voice not only by processing the emotional content from previously processed acoustic features, but also by processing the acoustic features themselves, when these are relevant to the identification of the voice's affective value. Specifically, we found that the auditory cortex is sensitive to spectral high-frequency voice cues when discriminating vocal anger from vocal fear and joy, whereas the amygdala is sensitive to vocal pitch when discriminating between negative vocal emotions (i.e., anger and fear). Vocal pitch is an instantaneously recognized voice feature, which is potentially transferred to the amygdala by direct subcortical projections. These results together provide evidence that, besides the auditory cortex, the amygdala too processes acoustic information, when this is relevant to the discrimination of auditory emotions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Stellar Streams in the Dark Energy Survey

    Science.gov (United States)

    Shipp, Nora; Drlica-Wagner, Alex; Balbinot, Eduardo; DES Collaboration

    2018-01-01

    We present a search for Galactic stellar streams in the Dark Energy Survey (DES), using three years of optical data taken across 5000 sq. degrees of the southern sky. The wide-field, uniform DES photometry provides unprecedented sensitivity to the stellar density field in the southern hemisphere, allowing for the detection of faint stellar populations. We follow the “Field of Streams” procedure developed in the Sloan Digital Sky Survey (Belokurov et al., 2006) to identify stellar density features such as dwarf galaxies, globular clusters, and the stellar streams resulting from the tidal disruption of these objects. Improved analysis techniques applied to the DES data enable the discovery of new stellar streams, and provide added insight into the origin and stellar populations of previously identified objects. An increased sample size together with detailed characterization of individual stellar streams and their progenitors can inform our understanding of the formation of the Milky Way stellar halo, as well as the large and small scale distribution of dark matter in the Milky Way.

  7. Beaded streams of Arctic permafrost landscapes

    Science.gov (United States)

    Arp, C. D.; Whitman, M. S.; Jones, B. M.; Grosse, G.; Gaglioti, B. V.; Heim, K. C.

    2014-07-01

Beaded streams are widespread in permafrost regions and are considered a common thermokarst landform. However, little is known about their distribution, how and under what conditions they form, and how their intriguing morphology translates to ecosystem functions and habitat. Here we report on a Circum-Arctic inventory of beaded streams and a watershed-scale analysis in northern Alaska using remote sensing and field studies. We mapped over 400 channel networks with beaded morphology throughout the continuous permafrost zone of northern Alaska, Canada, and Russia and found the highest abundance associated with medium- to high-ice content permafrost in moderately sloping terrain. In the Fish Creek watershed, beaded streams accounted for half of the drainage density, occurring primarily as low-order channels initiating from lakes and drained lake basins. Beaded streams predictably transition to alluvial channels with increasing drainage area and decreasing channel slope, although this transition is modified by local controls on water and sediment delivery. Comparison of one beaded channel using repeat photography between 1948 and 2013 indicates relatively stable form, and 14C dating of basal sediments suggests channel formation may be as early as the Pleistocene-Holocene transition. Contemporary processes matter as well: deep snow accumulation in stream gulches effectively insulates river ice and allows for perennial liquid water below most beaded stream pools. Because of this, mean annual temperatures in pool beds are greater than 2 °C, leading to the development of perennial thaw bulbs or taliks underlying these thermokarst features. In the summer, some pools stratify thermally, which reduces permafrost thaw and maintains coldwater habitats. Snowmelt-generated peak flows decrease rapidly by two or more orders of magnitude to summer low flows, with slow reach-scale velocity distributions ranging from 0.1 to 0.01 m s-1, yet channel runs still move water rapidly between pools

  8. Development of glutamatergic synaptic transmission in binaural auditory neurons.

    Science.gov (United States)

    Sanchez, Jason Tait; Wang, Yuan; Rubel, Edwin W; Barria, Andres

    2010-09-01

    Glutamatergic synaptic transmission is essential for binaural auditory processing in birds and mammals. Using whole cell voltage clamp recordings, we characterized the development of synaptic ionotropic glutamate receptor (iGluR) function from auditory neurons in the chick nucleus laminaris (NL), the first nucleus responsible for binaural processing. We show that synaptic transmission is mediated by AMPA- and N-methyl-d-aspartate (NMDA)-type glutamate receptors (AMPA-R and NMDA-R, respectively) when hearing is first emerging and dendritic morphology is being established across different sound frequency regions. Puff application of glutamate agonists at embryonic day 9 (E9) revealed that both iGluRs are functionally present prior to synapse formation (E10). Between E11 and E19, the amplitude of isolated AMPA-R currents from high-frequency (HF) neurons increased 14-fold. A significant increase in the frequency of spontaneous events was also observed. Additionally, AMPA-R currents become faster and more rectifying, suggesting developmental changes in subunit composition. These developmental changes were similar in all tonotopic regions examined. However, mid- and low-frequency neurons exhibit fewer spontaneous events, and evoked AMPA-R currents are smaller, slower, and less rectifying than currents from age-matched HF neurons. The amplitude of isolated NMDA-R currents from HF neurons also increased, reaching a peak at E17 and declining sharply by E19, a trend consistent across tonotopic regions. With age, NMDA-R kinetics become significantly faster, indicating a developmental switch in receptor subunit composition. Dramatic increases in the amplitude and speed of glutamatergic synaptic transmission occur in NL during embryonic development. These changes are first seen in HF neurons, suggesting regulation by peripheral inputs, and may be necessary to enhance coincidence detection of binaural auditory information.

  9. Auditory working memory predicts individual differences in absolute pitch learning.

    Science.gov (United States)

    Van Hedger, Stephen C; Heald, Shannon L M; Koch, Rachelle; Nusbaum, Howard C

    2015-07-01

    Absolute pitch (AP) is typically defined as the ability to label an isolated tone as a musical note in the absence of a reference tone. At first glance the acquisition of AP note categories seems like a perceptual learning task, since individuals must assign a category label to a stimulus based on a single perceptual dimension (pitch) while ignoring other perceptual dimensions (e.g., loudness, octave, instrument). AP, however, is rarely discussed in terms of domain-general perceptual learning mechanisms. This is because AP is typically assumed to depend on a critical period of development, in which early exposure to pitches and musical labels is thought to be necessary for the development of AP, precluding the possibility of adult acquisition of AP. Despite this view of AP, several previous studies have found evidence that absolute pitch category learning is, to an extent, trainable in a post-critical-period adult population, even if the performance typically achieved by this population is below the performance of a "true" AP possessor. The current studies attempt to understand the individual differences in learning to categorize notes using absolute pitch cues by testing a specific prediction regarding cognitive capacity related to categorization: to what extent does an individual's general auditory working memory capacity (WMC) predict the success of absolute pitch category acquisition? Since WMC has been shown to predict performance on a wide variety of other perceptual and category learning tasks, we predict that individuals with higher WMC should be better at learning absolute pitch note categories than individuals with lower WMC. Across two studies, we demonstrate that auditory WMC predicts the efficacy of learning absolute pitch note categories. These results suggest that a higher general auditory WMC might underlie the formation of absolute pitch categories for post-critical period adults. Implications for understanding the mechanisms that underlie the

  10. Autonomous Byte Stream Randomizer

    Science.gov (United States)

    Paloulian, George K.; Woo, Simon S.; Chow, Edward T.

    2013-01-01

    Net-centric networking environments are often faced with limited resources and must utilize bandwidth as efficiently as possible. In networking environments that span wide areas, data transmission has to be efficient, without any redundant or exuberant metadata. The Autonomous Byte Stream Randomizer software provides an extra level of security on top of existing data encryption methods. Randomizing the data's byte stream adds an extra layer to existing data protection methods, thus making it harder for an attacker to decrypt protected data. Based on a generated cryptographically secure random seed, a random sequence of numbers is used to intelligently and efficiently swap the organization of bytes in data using the unbiased and memory-efficient in-place Fisher-Yates shuffle method. Swapping bytes and reorganizing the crucial structure of the byte data renders the data file unreadable and leaves the data in a deconstructed state. This deconstruction adds an extra level of security, requiring the byte stream to be reconstructed with the random seed in order to be readable. Once the data byte stream has been randomized, the software enables the data to be distributed to N nodes in an environment. Each piece of the data in randomized and distributed form is a separate entity, unreadable in its own right, but when combined with all N pieces it can be reconstructed into the original. Reconstruction requires possession of the key used for randomizing the bytes, leading to the generation of the same cryptographically secure random sequence of numbers used to randomize the data. A cornerstone capability of this software is its ability to generate the same cryptographically secure sequence on different machines and at different times, thus allowing it to be used more heavily in net-centric environments where data transfer bandwidth is limited.
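
    The seeded shuffle-and-replay scheme described above can be sketched as follows. This is an illustrative reconstruction, not the actual software: Python's `random.Random` stands in for the cryptographically secure generator the abstract calls for, and all function names are made up for this sketch.

    ```python
    import random

    def randomize(data: bytes, seed: int) -> bytes:
        """In-place Fisher-Yates shuffle of a byte stream, driven by a seeded PRNG.
        NOTE: random.Random is NOT cryptographically secure; it is used here only
        to illustrate the deterministic, seed-reproducible swap sequence."""
        buf = bytearray(data)
        rng = random.Random(seed)
        for i in range(len(buf) - 1, 0, -1):
            j = rng.randrange(i + 1)       # unbiased index in [0, i]
            buf[i], buf[j] = buf[j], buf[i]
        return bytes(buf)

    def derandomize(data: bytes, seed: int) -> bytes:
        """Invert the shuffle: regenerate the same swap sequence from the seed,
        then undo the swaps in reverse order (each swap is its own inverse)."""
        rng = random.Random(seed)
        n = len(data)
        swaps = [(i, rng.randrange(i + 1)) for i in range(n - 1, 0, -1)]
        buf = bytearray(data)
        for i, j in reversed(swaps):
            buf[i], buf[j] = buf[j], buf[i]
        return bytes(buf)
    ```

    Because the swap sequence is fully determined by the seed, any node holding the key can regenerate the identical sequence and reconstruct the original byte order, which is the property the abstract relies on for distributed reassembly.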

  11. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

    We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinver...

  12. Efficacy of auditory training in elderly subjects

    Directory of Open Access Journals (Sweden)

    Aline Albuquerque Morais

    2015-05-01

    Auditory training (AT) has been used for auditory rehabilitation in elderly individuals and is an effective tool for optimizing speech processing in this population. However, it is necessary to distinguish training-related improvements from placebo and test-retest effects. Thus, we investigated the efficacy of short-term auditory training (acoustically controlled auditory training, ACAT) in elderly subjects through behavioral measures and the P300. Sixteen elderly individuals with auditory processing disorder received an initial evaluation (evaluation 1, E1) consisting of behavioral and electrophysiological tests (P300 evoked by tone bursts and speech sounds) to evaluate their auditory processing. The individuals were divided into two groups: the Active Control Group (ACG, n=8) underwent placebo training, and the Passive Control Group (PCG, n=8) did not receive any intervention. After 12 weeks, the subjects were re-evaluated (evaluation 2, E2). Then, all of the subjects underwent ACAT. Following another 12 weeks (8 training sessions), they underwent the final evaluation (evaluation 3, E3). There was no significant difference between E1 and E2 in the behavioral tests [F(9.6)=0.6, p=0.92, Wilks' λ=0.65] or the P300 [F(8.7)=2.11, p=0.17, Wilks' λ=0.29], ruling out placebo and test-retest effects. A significant improvement was observed between the pre- and post-ACAT conditions (E2 and E3) for all auditory skills according to the behavioral methods [F(4.27)=0.18, p=0.94, Wilks' λ=0.97]. However, the same result was not observed for the P300 in any condition. There was no significant difference between P300 stimuli. The ACAT improved the behavioral performance of the elderly for all auditory skills and was an effective method for hearing rehabilitation.

  13. Auditory sustained field responses to periodic noise

    Directory of Open Access Journals (Sweden)

    Keceli Sumru

    2012-01-01

    Background: Auditory sustained responses have recently been suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low-frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity, but the relationship between the amplitudes of auditory evoked sustained responses and the repetition rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results: Sustained fields were elicited by white noise and by repeating frozen noise stimuli with repetition rates of 5, 10, 50, 200, and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes at the 5 Hz repetition rate were significantly larger than at 500 Hz. Conclusions: The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained over a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that, in addition to processing the fundamental frequency of the voice, sustained field generators can also resolve low-frequency temporal modulations in the speech envelope.
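
    The "repeating frozen noise" stimulus construction described above can be sketched simply: one random segment whose length equals the period is generated once (hence "frozen") and tiled, producing a signal that is periodic at the chosen repetition rate. The function name and parameters are illustrative, not taken from the study.

    ```python
    import numpy as np

    def frozen_noise(rate_hz, duration_s=1.0, fs=44100, seed=0):
        """Repeating frozen noise: a single Gaussian-noise segment of length
        1/rate_hz seconds is tiled to fill the duration, so the waveform
        repeats exactly at rate_hz (periodic, unlike fresh white noise)."""
        rng = np.random.default_rng(seed)
        seg_len = int(round(fs / rate_hz))              # samples per period
        segment = rng.standard_normal(seg_len)          # the "frozen" segment
        n_reps = int(np.ceil(duration_s * fs / seg_len))
        return np.tile(segment, n_reps)[: int(duration_s * fs)]
    ```

    A white-noise control is the same construction without tiling, i.e. drawing fresh samples for the whole duration, so the two stimuli share spectrum statistics but differ in periodicity.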

  14. Identifying auditory attention with ear-EEG: cEEGrid versus high-density cap-EEG comparison

    Science.gov (United States)

    Bleichner, Martin G.; Mirkovic, Bojana; Debener, Stefan

    2016-12-01

    Objective. This study presents a direct comparison of a classical EEG cap setup with a new around-the-ear electrode array (cEEGrid) to gain a better understanding of the potential of ear-centered EEG. Approach. Concurrent EEG was recorded from a classical scalp EEG cap and two cEEGrids that were placed around the left and the right ear. Twenty participants performed a spatial auditory attention task in which three sound streams were presented simultaneously. The sound streams were three seconds long and differed in the direction of origin (front, left, right) and the number of beats (3, 4, and 5, respectively), as well as in timbre and pitch. The participants had to attend to either the left or the right sound stream. Main results. We found clear attention-modulated ERP effects reflecting the attended sound stream for both electrode setups, which agreed in morphology and effect size. A single-trial template matching classification showed that the direction of attention could be decoded significantly above chance (50%) for at least 16 out of 20 participants for both systems. The comparably high classification results of the single-trial analysis underline the quality of the signal recorded with the cEEGrids. Significance. These findings are further evidence for the feasibility of around-the-ear EEG recordings and demonstrate that well-described ERPs can be measured. We conclude that concealed behind-the-ear EEG recordings can be an alternative to classical cap EEG acquisition for auditory attention monitoring.
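
    The single-trial template matching classification used for decoding can be illustrated schematically: each trial is correlated with per-class average ERP templates and assigned the label of the best-matching template. The data shapes, names, and the use of Pearson correlation here are hypothetical; the study's exact pipeline may differ.

    ```python
    import numpy as np

    def template_match(trial, templates):
        """Classify a single-trial epoch by Pearson correlation with per-class
        average templates (e.g., 'left'/'right' attention ERPs). Returns the
        label of the template that correlates best with the trial."""
        scores = {}
        for label, tmpl in templates.items():
            a, b = np.ravel(trial), np.ravel(tmpl)
            scores[label] = np.corrcoef(a, b)[0, 1]
        return max(scores, key=scores.get)
    ```

    In a real analysis the templates would be class-average ERPs computed on held-out training trials, so that decoding accuracy can be compared against the 50% chance level reported above.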

  15. Spatial selective auditory attention in the presence of reverberant energy: individual differences in normal-hearing listeners.

    Science.gov (United States)

    Ruggles, Dorea; Shinn-Cunningham, Barbara

    2011-06-01

    Listeners can selectively attend to a desired target by directing attention to known target source features, such as location or pitch. Reverberation, however, reduces the reliability of the cues that allow a target source to be segregated and selected from a sound mixture. Given this, it is likely that reverberant energy interferes with selective auditory attention. Anecdotal reports suggest that the ability to focus spatial auditory attention degrades even with early aging, yet there is little evidence that middle-aged listeners have behavioral deficits on tasks requiring selective auditory attention. The current study was designed to look for individual differences in selective attention ability and to see if any such differences correlate with age. Normal-hearing adults, ranging in age from 18 to 55 years, were asked to report a stream of digits located directly ahead in a simulated rectangular room. Simultaneous, competing masker digit streams were simulated at locations 15° left and right of center. The level of reverberation was varied to alter task difficulty by interfering with localization cues (increasing localization blur). Overall, performance was best in the anechoic condition and worst in the high-reverberation condition. Listeners nearly always reported a digit from one of the three competing streams, showing that reverberation did not render the digits unintelligible. Importantly, inter-subject differences were extremely large. These differences, however, were not significantly correlated with age, memory span, or hearing status. These results show that listeners with audiometrically normal pure tone thresholds differ in their ability to selectively attend to a desired source, a task important in everyday communication. Further work is necessary to determine if these differences arise from differences in peripheral auditory function or in more central function.

  16. Auditory and motor imagery modulate learning in music performance

    Science.gov (United States)

    Brown, Rachel M.; Palmer, Caroline

    2013-01-01

    Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians' encoding (during Learning, as they practiced novel melodies), and retrieval (during Recall of those melodies). Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists' pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1): Higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference. These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2): Higher auditory imagery skill predicted greater temporal regularity during Recall in the presence of

  18. Experience and information loss in auditory and visual memory.

    Science.gov (United States)

    Gloede, Michele E; Paulauskas, Emily E; Gregg, Melissa K

    2017-07-01

    Recent studies show that recognition memory for sounds is inferior to memory for pictures. Four experiments were conducted to examine the nature of auditory and visual memory. Experiments 1-3 were conducted to evaluate the role of experience in auditory and visual memory. Participants received a study phase with pictures/sounds, followed by a recognition memory test. Participants then completed auditory training with each of the sounds, followed by a second memory test. Despite auditory training in Experiments 1 and 2, visual memory was superior to auditory memory. In Experiment 3, we found that it is possible to improve auditory memory, but only after 3 days of specific auditory training and 3 days of visual memory decay. We examined the time course of information loss in auditory and visual memory in Experiment 4 and found a trade-off between visual and auditory recognition memory: Visual memory appears to have a larger capacity, while auditory memory is more enduring. Our results indicate that visual and auditory memory are inherently different memory systems and that differences in visual and auditory recognition memory performance may be due to the different amounts of experience with visual and auditory information, as well as structurally different neural circuitry specialized for information retention.

  19. Current status of auditory aging and anti-aging research.

    Science.gov (United States)

    Ruan, Qingwei; Ma, Cheng; Zhang, Ruxin; Yu, Zhuowei

    2014-01-01

    The development of presbycusis, or age-related hearing loss, is determined by a combination of genetic and environmental factors. The auditory periphery exhibits a progressive bilateral, symmetrical reduction of auditory sensitivity to sound from high to low frequencies. The central auditory nervous system shows symptoms of decline in age-related cognitive abilities, including difficulties in speech discrimination and reduced central auditory processing, ultimately resulting in auditory perceptual abnormalities. The pathophysiological mechanisms of presbycusis include excitotoxicity, oxidative stress, inflammation, aging and oxidative stress-induced DNA damage that results in apoptosis in the auditory pathway. However, the originating signals that trigger these mechanisms remain unclear. For instance, it is still unknown whether insulin is involved in auditory aging. Auditory aging has preclinical lesions, which manifest as asymptomatic loss of periphery auditory nerves and changes in the plasticity of the central auditory nervous system. Currently, the diagnosis of preclinical, reversible lesions depends on the detection of auditory impairment by functional imaging, and the identification of physiological and molecular biological markers. However, despite recent improvements in the application of these markers, they remain under-utilized in clinical practice. The application of antisenescent approaches to the prevention of auditory aging has produced inconsistent results. Future research will focus on the identification of markers for the diagnosis of preclinical auditory aging and the development of effective interventions. © 2013 Japan Geriatrics Society.

  20. Perceptual Plasticity for Auditory Object Recognition

    Science.gov (United States)

    Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.

    2017-01-01

    In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples

  2. Facilitated auditory detection for speech sounds

    Directory of Open Access Journals (Sweden)

    Carine Signoret

    2011-07-01

    Although it is well known that knowledge facilitates higher cognitive functions, such as visual and auditory word recognition, little is known about the influence of knowledge on detection, particularly in the auditory modality. Our study tested the influence of phonological and lexical knowledge on auditory detection. Words, pseudowords, and complex non-phonological sounds, energetically matched as closely as possible, were presented at a range of presentation levels from subthreshold to clearly audible. The participants performed a detection task (Experiments 1 and 2) that was followed by a two-alternative forced-choice recognition task in Experiment 2. The results of this second task in Experiment 2 suggest correct recognition of words in the absence of detection, consistent with a subjective threshold approach. In the detection task of both experiments, phonological stimuli (words and pseudowords) were better detected than non-phonological stimuli (complex sounds) presented close to the auditory threshold. This finding suggests an advantage of speech for signal detection. An additional advantage of words over pseudowords was observed in Experiment 2, suggesting that lexical knowledge can also improve auditory detection when listeners have to recognize the stimulus in a subsequent task. Two simulations of detection performance performed on the sound signals confirmed that the advantage of speech over non-speech processing could not be attributed to energetic differences between the stimuli.
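
    The energy-matching control mentioned above can be illustrated with a toy energy detector: if detection depended only on stimulus energy, a simple criterion on RMS level relative to the noise floor would predict identical detectability for energetically matched stimuli. Everything here (the function, the 3 dB criterion) is a hypothetical sketch, not the simulation actually used in the study.

    ```python
    import numpy as np

    def energy_detector(signal, noise_rms, criterion_db=3.0):
        """Toy energy-based detector: report detection when the stimulus RMS
        exceeds the noise floor by a fixed criterion in dB. Stimuli with equal
        RMS are, by construction, equally detectable to this observer."""
        rms = np.sqrt(np.mean(np.square(signal)))
        snr_db = 20.0 * np.log10(rms / noise_rms)
        return bool(snr_db > criterion_db)
    ```

    Running such a detector on energetically matched word, pseudoword, and complex-sound tokens would yield the same detection outcomes for all three, which is the sense in which a speech advantage in human listeners cannot be explained by energy alone.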

  4. The harmonic organization of auditory cortex

    Science.gov (United States)

    Wang, Xiaoqin

    2013-01-01

    A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including the vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect that it be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds. PMID:24381544
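
    Studies of harmonicity tuning typically probe the cortex with harmonic complex tones: partials at integer multiples of a fundamental. A minimal synthesis sketch (all names and parameters illustrative, not from the cited experiments):

    ```python
    import numpy as np

    def harmonic_complex(f0, n_harmonics=6, fs=44100, duration_s=0.5):
        """Synthesize an equal-amplitude harmonic complex tone: sinusoidal
        partials at integer multiples of f0 (the kind of harmonically
        structured stimulus used to probe harmonicity-selective responses)."""
        t = np.arange(int(fs * duration_s)) / fs
        return sum(np.sin(2 * np.pi * f0 * k * t)
                   for k in range(1, n_harmonics + 1))
    ```

    An inharmonic control is obtained by jittering each partial away from the exact integer multiples, leaving the spectrum dense but destroying the common fundamental.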

  5. Effects of Caffeine on Auditory Brainstem Response

    Directory of Open Access Journals (Sweden)

    Saleheh Soleimanian

    2008-06-01

    Background and Aim: Blocking of adenosine receptors in the central nervous system by caffeine can increase the levels of neurotransmitters such as glutamate. As adenosine receptors are present in almost all brain areas, including the central auditory pathway, it seems that caffeine can change conduction along this pathway. The purpose of this study was to evaluate the effects of caffeine on the latency and amplitude of the auditory brainstem response (ABR). Materials and Methods: In this clinical trial study, 43 normal male students aged 18-25 years participated. The subjects consumed 0, 2, and 3 mg/kg body weight (BW) of caffeine in three different sessions. Auditory brainstem responses were recorded before and 30 minutes after caffeine consumption. The results were analyzed with the Friedman and Wilcoxon tests to assess the effects of caffeine on the auditory brainstem response. Results: Compared to the control condition, the latencies of waves III and V and the I-V interpeak interval decreased significantly after consumption of 2 and 3 mg/kg BW of caffeine. Wave I latency decreased significantly after consumption of 3 mg/kg BW of caffeine (p<0.01). Conclusion: The increase in glutamate levels resulting from adenosine receptor blockade brings about changes in conduction in the central auditory pathway.

  6. The role of the auditory brainstem in processing musically-relevant pitch

    Directory of Open Access Journals (Sweden)

    Gavin M. Bidelman

    2013-05-01

    Full Text Available Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically-relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain.

  7. The Role of the Auditory Brainstem in Processing Musically Relevant Pitch

    Science.gov (United States)

    Bidelman, Gavin M.

    2013-01-01

    Neuroimaging work has shed light on the cerebral architecture involved in processing the melodic and harmonic aspects of music. Here, recent evidence is reviewed illustrating that subcortical auditory structures contribute to the early formation and processing of musically relevant pitch. Electrophysiological recordings from the human brainstem and population responses from the auditory nerve reveal that nascent features of tonal music (e.g., consonance/dissonance, pitch salience, harmonic sonority) are evident at early, subcortical levels of the auditory pathway. The salience and harmonicity of brainstem activity are strongly correlated with listeners’ perceptual preferences and perceived consonance for the tonal relationships of music. Moreover, the hierarchical ordering of pitch intervals/chords described by Western music practice and their perceptual consonance is well predicted by the salience with which pitch combinations are encoded in subcortical auditory structures. While the neural correlates of consonance can be tuned and exaggerated with musical training, they persist even in the absence of musicianship or long-term enculturation. As such, it is posited that the structural foundations of musical pitch might result from innate processing performed by the central auditory system. A neurobiological predisposition for consonant, pleasant-sounding pitch relationships may be one reason why these pitch combinations have been favored by composers and listeners for centuries. It is suggested that important perceptual dimensions of music emerge well before the auditory signal reaches cerebral cortex and prior to attentional engagement. While cortical mechanisms are no doubt critical to the perception, production, and enjoyment of music, the contribution of subcortical structures implicates a more integrated, hierarchically organized network underlying music processing within the brain. PMID:23717294

  8. Case study: student perceptions of video streaming nursing class sessions.

    Science.gov (United States)

    Wall Parilo, Denise M; Parsh, Bridget

    2014-03-01

    Due to space constraints, students in pediatric and obstetrical nursing courses received lectures in two formats: live lecture and video-streamed lecture. Live lecture is the traditional classroom format of live, in-person lecture without recording archives, whereas video streaming is live (synchronous) online lecture, also recorded for digital archive (asynchronous viewing). At the end of the semester, students (N = 53) responded to a survey about what they liked about both methods of instruction. Results reveal strengths of both methods and suggest ways to make both methods more effective. Copyright 2014, SLACK Incorporated.

  9. Source reliability in auditory health persuasion : Its antecedents and consequences

    NARCIS (Netherlands)

    Elbert, Sarah P.; Dijkstra, Arie

    2015-01-01

    Persuasive health messages can be presented through an auditory channel, thereby enhancing the salience of the source, making it fundamentally different from written or pictorial information. We focused on the determinants of perceived source reliability in auditory health persuasion by

  10. Intradermal melanocytic nevus of the external auditory canal.

    Science.gov (United States)

    Alves, Renato V; Brandão, Fabiano H; Aquino, José E P; Carvalho, Maria R M S; Giancoli, Suzana M; Younes, Eduado A P

    2005-01-01

    Intradermal nevi are common benign pigmented skin tumors. Their occurrence within the external auditory canal is uncommon. The clinical and pathologic features of an intradermal nevus arising within the external auditory canal are presented, and the literature reviewed.

  11. The effect of background music in auditory health persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    2013-01-01

    In auditory health persuasion, threatening information regarding health is communicated by voice only. One relevant context of auditory persuasion is the addition of background music. There are different mechanisms through which background music might influence persuasion, for example through mood

  12. Encoding of temporal information by timing, rate, and place in cat auditory cortex.

    Directory of Open Access Journals (Sweden)

    Kazuo Imaizumi

    2010-07-01

    Full Text Available A central goal in auditory neuroscience is to understand the neural coding of species-specific communication and human speech sounds. Low-rate repetitive sounds are elemental features of communication sounds, and core auditory cortical regions have been implicated in processing these information-bearing elements. Repetitive sounds could be encoded by at least three neural response properties: (1) the event-locked spike-timing precision, (2) the mean firing rate, and (3) the interspike interval (ISI). To determine how well these response aspects capture information about the repetition rate stimulus, we measured local group responses of cortical neurons in cat anterior auditory field (AAF) to click trains and calculated their mutual information based on these different codes. ISIs of the multiunit responses carried substantially higher information about low repetition rates than either spike-timing precision or firing rate. Combining firing rate and ISI codes was synergistic and captured modestly more repetition information. Spatial distribution analyses showed distinct local clustering properties for each encoding scheme for repetition information indicative of a place code. Diversity in local processing emphasis and distribution of different repetition rate codes across AAF may give rise to concurrent feed-forward processing streams that contribute differently to higher-order sound analysis.
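
The information comparison described in this abstract can be made concrete with the standard plug-in estimate of mutual information between a stimulus class (here, a repetition rate) and a discretized response code. The function below is a generic sketch, not the study's analysis code, and the example count tables are hypothetical.

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """Plug-in mutual information (in bits) from a stimulus-by-response
    count table: rows = stimulus classes (e.g., click-train repetition
    rates), columns = discretized response bins (e.g., ISI bins)."""
    p = joint_counts / joint_counts.sum()  # joint probabilities
    ps = p.sum(axis=1, keepdims=True)      # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)      # response marginal
    nz = p > 0                             # skip zero cells (0*log 0 = 0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps @ pr)[nz])))

# Hypothetical examples: a response code that perfectly separates two
# repetition rates carries 1 bit; an uninformative code carries 0 bits.
perfect = np.array([[5.0, 0.0], [0.0, 5.0]])
uninformative = np.array([[2.0, 2.0], [2.0, 2.0]])
```

Under this scheme, "combining codes" amounts to building the joint table over a product response alphabet, which is why a combined rate-plus-ISI code can only add information relative to either code alone.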

  13. Auditory filters at low-frequencies

    DEFF Research Database (Denmark)

    Orellana, Carlos Andrés Jurado; Pedersen, Christian Sejer; Møller, Henrik

    2009-01-01

    Prediction and assessment of low-frequency noise problems require information about the auditory filter characteristics at low frequencies. Unfortunately, data at low frequencies are scarce and practically no results have been published for frequencies below 100 Hz. Extrapolation of ERB results...... from previous studies suggests the filter bandwidth keeps decreasing below 100 Hz, although at a relatively lower rate than at higher frequencies. Main characteristics of the auditory filter were studied from below 100 Hz up to 1000 Hz. Center frequencies evaluated were 50, 63, 125, 250, 500, and 1000...... Hz. The notched-noise method was used, with the noise masker at 40 dB spectral density. A rounded exponential auditory filter model (roex(p,r)) was used to fit the masking data. Preliminary data on 1 subject are discussed. Considering the system as a whole (e.g. without removing the assumed middle...
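
The roex(p,r) model mentioned above has a simple closed form: the filter weighting at normalized frequency offset g = |f - fc|/fc is W(g) = (1 - r)(1 + pg)e^(-pg) + r, and for the one-parameter roex(p) shape the equivalent rectangular bandwidth is ERB = 4·fc/p. A minimal sketch follows; the parameter values are illustrative, not the study's fitted values.

```python
import math

def roex_pr(g, p, r):
    """roex(p, r) filter weighting at normalized offset g = |f - fc| / fc.
    p sets the slope (larger p = sharper filter); r is the shallow
    'tail' that limits the filter's dynamic range."""
    return (1.0 - r) * (1.0 + p * g) * math.exp(-p * g) + r

# Illustrative values only: a 250-Hz filter with slope p = 20.
fc, p, r = 250.0, 20.0, 1e-4
erb = 4.0 * fc / p             # roex(p) equivalent rectangular bandwidth
passband = roex_pr(0.0, p, r)  # weighting at the center frequency (= 1)
```

In a notched-noise fit, p and r are chosen so that the masked thresholds predicted by integrating this weighting over the noise bands match the measured thresholds at each notch width.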

  14. Auditory Alterations in Children Infected by Human Immunodeficiency Virus Verified Through Auditory Processing Test.

    Science.gov (United States)

    Romero, Ana Carla Leite; Alfaya, Lívia Marangoni; Gonçales, Alina Sanches; Frizzo, Ana Claudia Figueiredo; Isaac, Myriam de Lima

    2017-01-01

    Introduction The auditory system of HIV-positive children may have deficits at various levels, such as the high incidence of middle-ear problems that can cause hearing loss. Objective The objective of this study is to characterize the development of children infected by the Human Immunodeficiency Virus (HIV) in the Simplified Auditory Processing Test (SAPT) and the Staggered Spondaic Word Test. Methods We performed behavioral tests composed of the Simplified Auditory Processing Test and the Portuguese version of the Staggered Spondaic Word Test (SSW). The participants were 15 children infected by HIV, all using antiretroviral medication. Results The children had abnormal auditory processing verified by the Simplified Auditory Processing Test and the Portuguese version of the SSW. In the Simplified Auditory Processing Test, 60% of the children presented hearing impairment. In the SAPT, the memory test for verbal sounds showed more errors (53.33%), whereas in the SSW, 86.67% of the children showed deficiencies indicating deficits in figure-ground, attention, and memory auditory skills. Furthermore, there were more errors under background noise in both age groups; most errors were in the left ear in the group of 8-year-olds, with similar results for the group aged 9 years. Conclusion The high incidence of hearing loss in children with HIV and its comorbidity with several biological and environmental factors indicate the need for: 1) family and professional awareness of the impact of auditory alterations on the development and learning of children with HIV, and 2) access to educational plans and follow-up with multidisciplinary teams as early as possible to minimize the damage caused by auditory deficits.

  15. STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Fox, Geoffrey [Indiana Univ., Bloomington, IN (United States); Jha, Shantenu [Rutgers Univ., New Brunswick, NJ (United States); Ramakrishnan, Lavanya [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-10-01

    The Department of Energy (DOE) Office of Science (SC) facilities, including accelerators, light sources, neutron sources, and sensors that study the environment and the atmosphere, are producing streaming data that needs to be analyzed for next-generation scientific discoveries. There has been an explosion of new research and technologies for stream analytics arising from the academic and private sectors. However, there has been no corresponding effort in either documenting the critical research opportunities or building a community that can create and foster productive collaborations. The two-part workshop series, STREAM: Streaming Requirements, Experience, Applications and Middleware Workshop (STREAM2015 and STREAM2016), was conducted to bring the community together and identify gaps and future efforts needed by both NSF and DOE. This report describes the discussions, outcomes and conclusions from STREAM2016: Streaming Requirements, Experience, Applications and Middleware Workshop, the second of these workshops, held on March 22-23, 2016 in Tysons, VA. STREAM2016 focused on Department of Energy (DOE) applications, computational and experimental facilities, as well as software systems. Thus, the role of “streaming and steering” as a critical mode of connecting the experimental and computing facilities was pervasive through the workshop. Given the overlap in interests and challenges with industry, the workshop had significant presence from several innovative companies and major contributors. The requirements that drive the proposed research directions, identified in this report, show an important opportunity for building a competitive research and development program around streaming data. These findings and recommendations are consistent with the vision outlined in NRC Frontiers of Data and the National Strategic Computing Initiative (NSCI) [1, 2]. The discussions from the workshop are captured as topic areas covered in this report's sections. The report

  16. Auditory and motor imagery modulate learning in music performance

    Directory of Open Access Journals (Sweden)

    Rachel M. Brown

    2013-07-01

    Full Text Available Skilled performers such as athletes or musicians can improve their performance by imagining the actions or sensory outcomes associated with their skill. Performers vary widely in their auditory and motor imagery abilities, and these individual differences influence sensorimotor learning. It is unknown whether imagery abilities influence both memory encoding and retrieval. We examined how auditory and motor imagery abilities influence musicians’ encoding (during Learning, as they practiced novel melodies) and retrieval (during Recall) of those melodies. Pianists learned melodies by listening without performing (auditory learning) or performing without sound (motor learning); following Learning, pianists performed the melodies from memory with auditory feedback (Recall). During either Learning (Experiment 1) or Recall (Experiment 2), pianists experienced either auditory interference, motor interference, or no interference. Pitch accuracy (percentage of correct pitches produced) and temporal regularity (variability of quarter-note interonset intervals) were measured at Recall. Independent tests measured auditory and motor imagery skills. Pianists’ pitch accuracy was higher following auditory learning than following motor learning and lower in motor interference conditions (Experiments 1 and 2). Both auditory and motor imagery skills improved pitch accuracy overall. Auditory imagery skills modulated pitch accuracy encoding (Experiment 1: higher auditory imagery skill corresponded to higher pitch accuracy following auditory learning with auditory or motor interference, and following motor learning with motor or no interference). These findings suggest that auditory imagery abilities decrease vulnerability to interference and compensate for missing auditory feedback at encoding. Auditory imagery skills also influenced temporal regularity at retrieval (Experiment 2: higher auditory imagery skill predicted greater temporal regularity during Recall in the

  17. Auditory-Perceptual Learning Improves Speech Motor Adaptation in Children

    OpenAIRE

    Shiller, Douglas M.; Rochon, Marie-Lyne

    2014-01-01

    Auditory feedback plays an important role in children’s speech development by providing the child with information about speech outcomes that is used to learn and fine-tune speech motor plans. The use of auditory feedback in speech motor learning has been extensively studied in adults by examining oral motor responses to manipulations of auditory feedback during speech production. Children are also capable of adapting speech motor patterns to perceived changes in auditory feedback, however it...

  18. A virtual auditory environment for investigating the auditory signal processing of realistic sounds

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel

    A loudspeaker-based virtual auditory environment (VAE) has been developed to provide a realistic versatile research environment for investigating the auditory signal processing in real environments, i.e., considering multiple sound sources and room reverberation. The VAE allows a full control...... of the acoustic scenario in order to systematically study the auditory processing of reverberant sounds. It is based on the ODEON software, which is state-of-the-art software for room acoustic simulations developed at Acoustic Technology, DTU. First, a MATLAB interface to the ODEON software has been developed...

  19. Auditory Perception of Statistically Blurred Sound Textures

    DEFF Research Database (Denmark)

    McWalter, Richard Ian; MacDonald, Ewen; Dau, Torsten

    Sound textures have been identified as a category of sounds which are processed by the peripheral auditory system and captured with running time-averaged statistics. Although sound textures are temporally homogeneous, they offer a listener enough information to identify and differentiate...... sources. This experiment investigated the ability of the auditory system to identify statistically blurred sound textures and the perceptual relationship between sound textures. Identification performance of statistically blurred sound textures presented at a fixed blur increased over those presented...... as a gradual blur. The results suggest that the correct identification of sound textures is influenced by the preceding blurred stimulus. These findings draw parallels to the recognition of blurred images....

  20. Auditory processing in autism spectrum disorder

    DEFF Research Database (Denmark)

    Vlaskamp, Chantal; Oranje, Bob; Madsen, Gitte Falcher

    2017-01-01

    Children with autism spectrum disorders (ASD) often show changes in (automatic) auditory processing. Electrophysiology provides a method to study auditory processing, by investigating event-related potentials such as mismatch negativity (MMN) and P3a-amplitude. However, findings on MMN in autism...... a hyper-responsivity at the attentional level. In addition, as similar MMN deficits are found in schizophrenia, these MMN results may explain some of the frequently reported increased risk of children with ASD to develop schizophrenia later in life. Autism Res 2017, 10: 1857–1865....

  1. The many facets of auditory display

    Science.gov (United States)

    Blattner, Meera M.

    1995-01-01

    In this presentation we will examine some of the ways sound can be used in a virtual world. We make the case that many different types of audio experience are available to us. A full range of audio experiences include: music, speech, real-world sounds, auditory displays, and auditory cues or messages. The technology of recreating real-world sounds through physical modeling has advanced in the past few years allowing better simulation of virtual worlds. Three-dimensional audio has further enriched our sensory experiences.

  2. Auditory brain-stem responses in adrenomyeloneuropathy.

    Science.gov (United States)

    Grimes, A M; Elks, M L; Grunberger, G; Pikus, A M

    1983-09-01

    We studied three patients with adrenomyeloneuropathy. Complete audiologic assessment was obtained: two patients showed unimpaired peripheral hearing and one showed a mild high-frequency hearing loss. Auditory brain-stem responses were abnormal in both ears of all subjects, with one subject showing no response above wave I, and the other two having significant wave I to III and wave III to V interval prolongations. We concluded that auditory brain-stem response testing provides a simple, valid, reliable method for demonstrating neurologic abnormality in adrenomyeloneuropathy even prior to evidence of clinical signs.

  3. Rhythmic walking interaction with auditory feedback

    DEFF Research Database (Denmark)

    Maculewicz, Justyna; Jylhä, Antti; Serafin, Stefania

    2015-01-01

    We present an interactive auditory display for walking with sinusoidal tones or ecological, physically-based synthetic walking sounds. The feedback is either step-based or rhythmic, with constant or adaptive tempo. In a tempo-following experiment, we investigate different interaction modes...... and auditory feedback, based on the MSE between the target and performed tempo, and the stability of the latter. The results indicate that the MSE with ecological sounds is comparable to that with the sinusoidal tones, yet ecological sounds are considered more natural. Adaptive conditions result in stable...

  4. Decoding auditory attention to instruments in polyphonic music using single-trial EEG classification

    Science.gov (United States)

    Treder, M. S.; Purwins, H.; Miklody, D.; Sturm, I.; Blankertz, B.

    2014-04-01

    Objective. Polyphonic music (music consisting of several instruments playing in parallel) is an intuitive way of embedding multiple information streams. The different instruments in a musical piece form concurrent information streams that seamlessly integrate into a coherent and hedonistically appealing entity. Here, we explore polyphonic music as a novel stimulation approach for use in a brain-computer interface. Approach. In a multi-streamed oddball experiment, we had participants shift selective attention to one out of three different instruments in music audio clips. Each instrument formed an oddball stream with its own specific standard stimuli (a repetitive musical pattern) and oddballs (deviating musical pattern). Main results. Contrasting attended versus unattended instruments, ERP analysis shows subject- and instrument-specific responses including P300 and early auditory components. The attended instrument can be classified offline with a mean accuracy of 91% across 11 participants. Significance. This is a proof of concept that attention paid to a particular instrument in polyphonic music can be inferred from ongoing EEG, a finding that is potentially relevant for both brain-computer interface and music research.
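
The 91% offline accuracy reported above comes from classifying single-trial ERP features of attended versus unattended instruments; the paper's actual pipeline is not reproduced here. As a minimal stand-in for that kind of classifier, the sketch below assigns each trial (a flattened channels-by-time feature vector) to the nearest class mean, using synthetic data with a toy P300-like deflection; real BCI pipelines typically use regularized LDA instead.

```python
import numpy as np

def nearest_mean_classify(train_X, train_y, test_X):
    """Assign each test trial to the class whose training-mean feature
    vector is closest in Euclidean distance (a minimal ERP classifier)."""
    classes = np.unique(train_y)
    means = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((test_X[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]

# Synthetic 'unattended vs. attended' trials: class 1 carries a small
# added deflection (a toy P300-like component) on top of noise.
rng = np.random.default_rng(0)
n_trials, n_features = 200, 64
y = rng.integers(0, 2, size=n_trials)
X = rng.standard_normal((n_trials, n_features)) + 1.5 * y[:, None]
pred = nearest_mean_classify(X[:150], y[:150], X[150:])
accuracy = (pred == y[150:]).mean()
```

With a clear class difference, even this simple rule separates the trials well; the hard part in practice is that real single-trial EEG has a far lower signal-to-noise ratio, which is what motivates regularization and subject-specific training.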

  5. Stimulus Pauses and Perturbations Differentially Delay or Promote the Segregation of Auditory Objects: Psychoacoustics and Modeling

    Directory of Open Access Journals (Sweden)

    James Rankin

    2017-04-01

    Full Text Available Segregating distinct sound sources is fundamental for auditory perception, as in the cocktail party problem. In a process called the build-up of stream segregation, distinct sound sources that are perceptually integrated initially can be segregated into separate streams after several seconds. Previous research concluded that abrupt changes in the incoming sounds during build-up—for example, a step change in location, loudness or timing—reset the percept to integrated. Following this reset, the multisecond build-up process begins again. Neurophysiological recordings in auditory cortex (A1) show fast (subsecond) adaptation, but unified mechanistic explanations for the bias toward integration, multisecond build-up and resets remain elusive. Combining psychoacoustics and modeling, we show that initial unadapted A1 responses bias integration, that the slowness of build-up arises naturally from competition downstream, and that recovery of adaptation can explain resets. An early bias toward integrated perceptual interpretations arising from primary cortical stages that encode low-level features and feed into competition downstream could also explain similar phenomena in vision. Further, we report a previously overlooked class of perturbations that promote segregation rather than integration. Our results challenge current understanding for perturbation effects on the emergence of sound source segregation, leading to a new hypothesis for differential processing downstream of A1. Transient perturbations can momentarily redirect A1 responses as input to downstream competition units that favor segregation.

  6. Stimulus Pauses and Perturbations Differentially Delay or Promote the Segregation of Auditory Objects: Psychoacoustics and Modeling.

    Science.gov (United States)

    Rankin, James; Osborn Popp, Pamela J; Rinzel, John

    2017-01-01

    Segregating distinct sound sources is fundamental for auditory perception, as in the cocktail party problem. In a process called the build-up of stream segregation, distinct sound sources that are perceptually integrated initially can be segregated into separate streams after several seconds. Previous research concluded that abrupt changes in the incoming sounds during build-up (for example, a step change in location, loudness or timing) reset the percept to integrated. Following this reset, the multisecond build-up process begins again. Neurophysiological recordings in auditory cortex (A1) show fast (subsecond) adaptation, but unified mechanistic explanations for the bias toward integration, multisecond build-up and resets remain elusive. Combining psychoacoustics and modeling, we show that initial unadapted A1 responses bias integration, that the slowness of build-up arises naturally from competition downstream, and that recovery of adaptation can explain resets. An early bias toward integrated perceptual interpretations arising from primary cortical stages that encode low-level features and feed into competition downstream could also explain similar phenomena in vision. Further, we report a previously overlooked class of perturbations that promote segregation rather than integration. Our results challenge current understanding for perturbation effects on the emergence of sound source segregation, leading to a new hypothesis for differential processing downstream of A1. Transient perturbations can momentarily redirect A1 responses as input to downstream competition units that favor segregation.
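
The mechanistic account argued for in this abstract (unadapted responses biasing integration, slow build-up from downstream competition, resets via recovery of adaptation) can be illustrated with a toy firing-rate model: two mutually inhibiting units stand for the integrated and segregated interpretations, each with a slow adaptation variable. All parameter values and the sigmoid gain function are illustrative, not those of the study's model.

```python
import numpy as np

def simulate_competition(T=20.0, dt=1e-3, I=(0.7, 0.6),
                         beta=1.1, g=0.5, tau=0.01, tau_a=2.0):
    """Toy two-unit competition model of bistable stream perception.
    Unit 0 = 'integrated', unit 1 = 'segregated'. Each unit receives a
    fixed input I, inhibition -beta*r from its rival, and negative
    feedback -g*a from its own slow adaptation variable (time constant
    tau_a). The slightly stronger drive to unit 0 mimics the initial
    bias toward integration; slow adaptation erodes the winner's
    advantage over seconds, as in build-up."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 0.2) / 0.05))  # gain function
    n = int(round(T / dt))
    r = np.zeros(2)            # firing rates of the two units
    a = np.zeros(2)            # slow adaptation variables
    trace = np.zeros((n, 2))
    for t in range(1, n):
        drive = np.asarray(I) - beta * r[::-1] - g * a
        r = r + dt / tau * (-r + f(drive))     # fast rate dynamics
        a = a + dt / tau_a * (-a + r)          # slow adaptation
        trace[t] = r
    return trace

trace = simulate_competition()
```

In this sketch the integrated unit wins shortly after onset; a "reset" corresponds to letting the adaptation variables recover toward zero during a pause or perturbation, which re-imposes the initial integration bias before competition resumes.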

  7. Evaluation of anomalies during nickel and titanium silicide formation using the effective heat of formation model

    CSIR Research Space (South Africa)

    Pretorius, R

    1993-11-01

    Full Text Available Materials Chemistry and Physics, 36 (1993) 31-38. Evaluation of anomalies during nickel and titanium silicide formation using the effective heat of formation model. R. Pretorius, C.C. Theron, Ion-Solid Interaction Division, Van de H.A. Ras and T...

  8. Should Children with Auditory Processing Disorders Receive Services in Schools?

    Science.gov (United States)

    Lucker, Jay R.

    2012-01-01

    Many children with problems learning in school can have educational deficits due to underlying auditory processing disorders (APD). These children can be identified as having auditory learning disabilities. Furthermore, auditory learning disability is identified as a specific learning disability (SLD) in the IDEA. Educators and…

  9. 21 CFR 874.1090 - Auditory impedance tester.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Auditory impedance tester. 874.1090 Section 874...) MEDICAL DEVICES EAR, NOSE, AND THROAT DEVICES Diagnostic Devices § 874.1090 Auditory impedance tester. (a) Identification. An auditory impedance tester is a device that is intended to change the air pressure in the...

  10. Auditory Temporal Processing as a Specific Deficit among Dyslexic Readers

    Science.gov (United States)

    Fostick, Leah; Bar-El, Sharona; Ram-Tsur, Ronit

    2012-01-01

    The present study focuses on examining the hypothesis that auditory temporal perception deficit is a basic cause for reading disabilities among dyslexics. This hypothesis maintains that reading impairment is caused by a fundamental perceptual deficit in processing rapid auditory or visual stimuli. Since the auditory perception involves a number of…

  11. 76 FR 61655 - Definition of Part 15 Auditory Assistance Device

    Science.gov (United States)

    2011-10-05

    ... COMMISSION 47 CFR Part 15 Definition of Part 15 Auditory Assistance Device AGENCY: Federal Communications Commission. ACTION: Proposed rule. SUMMARY: This document proposes to amend the definition of ``auditory... definition restricts the use of part 15 auditory assistance devices that operate in the 72.0-73.0 MHz, 74.6...

  12. The California stream quality assessment

    Science.gov (United States)

    Van Metre, Peter C.; Egler, Amanda L.; May, Jason T.

    2017-03-06

    In 2017, the U.S. Geological Survey (USGS) National Water-Quality Assessment (NAWQA) project is assessing stream quality in coastal California, United States. The USGS California Stream Quality Assessment (CSQA) will sample streams over most of the Central California Foothills and Coastal Mountains ecoregion (modified from Griffith and others, 2016), where rapid urban growth and intensive agriculture in the larger river valleys are raising concerns that stream health is being degraded. Findings will provide the public and policy-makers with information regarding which human and natural factors are the most critical in affecting stream quality and, thus, provide insights about possible approaches to protect the health of streams in the region.

  13. Acquiring auditory and phonetic categories

    NARCIS (Netherlands)

    Goudbeek, M.B.; Smits, R.; Swingley, D.; Cutler, A.

    2005-01-01

    Infants' first steps in language acquisition involve learning the relevant contrasts of the language-specific phonemic repertoire. This learning is viewed as the formation of categories in a multidimensional psychophysical space. Categorization research in the visual modality has shown that adults

  14. The LHCb Turbo Stream

    CERN Document Server

    Benson, Sean; Vesterinen, Mika Anton; Williams, John Michael

    2015-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction and discarding the raw event. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses, and this will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissi...

  15. The LHCb Turbo Stream

    CERN Document Server

    Benson, Sean

    2015-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the "turbo stream" the trigger will write out a compact summary of "physics" objects containing all information necessary for analyses, and this will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during...

  16. The LHCb Turbo stream

    CERN Document Server

    AUTHOR|(CDS)2070171

    2016-01-01

    The LHCb experiment will record an unprecedented dataset of beauty and charm hadron decays during Run II of the LHC, set to take place between 2015 and 2018. A key computing challenge is to store and process this data, which limits the maximum output rate of the LHCb trigger. So far, LHCb has written out a few kHz of events containing the full raw sub-detector data, which are passed through a full offline event reconstruction before being considered for physics analysis. Charm physics in particular is limited by trigger output rate constraints. A new streaming strategy includes the possibility to perform the physics analysis with candidates reconstructed in the trigger, thus bypassing the offline reconstruction. In the Turbo stream the trigger will write out a compact summary of physics objects containing all information necessary for analyses. This will allow an increased output rate and thus higher average efficiencies and smaller selection biases. This idea will be commissioned and developed during 2015 wi...
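The rate-versus-event-size trade-off described in these Turbo stream records can be sketched with simple arithmetic. The bandwidth and event-size figures below are hypothetical placeholders, not LHCb's actual numbers:

```python
def max_output_rate_khz(bandwidth_mb_s, event_size_kb):
    """Trigger output rate (kHz) sustainable at a given storage bandwidth."""
    return bandwidth_mb_s * 1000.0 / event_size_kb / 1000.0

# Hypothetical figures: at a fixed 500 MB/s budget, shrinking 70 kB raw
# events to 10 kB trigger-level summaries raises the affordable rate ~7x.
raw_rate = max_output_rate_khz(500, 70)     # about 7.1 kHz
turbo_rate = max_output_rate_khz(500, 10)   # 50 kHz
```

This is the mechanism behind the "increased output rate" the abstracts mention: the bandwidth budget is fixed, so rate scales inversely with per-event size.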

  17. Efficient coding of spectrotemporal binaural sounds leads to emergence of the auditory space representation

    OpenAIRE

    Wiktor Mlynarski

    2013-01-01

    To date a number of studies have shown that receptive field shapes of early sensory neurons can be reproduced by optimizing coding efficiency of natural stimulus ensembles. A still unresolved question is whether the efficient-coding hypothesis explains the formation of neurons which explicitly represent environmental features of different functional importance. This paper proposes that the spatial selectivity of higher auditory neurons emerges as a direct consequence of learning efficient codes fo...

  18. Moving Forward: A Feminist Analysis of Mobile Music Streaming

    Directory of Open Access Journals (Sweden)

    Ann Werner

    2015-06-01

    Full Text Available The importance of understanding gender, space and mobility as co-constructed in public space has been emphasized by feminist researchers (Massey 2005, Hanson 2010). Within feminist theory, materiality, affect and emotions have been described as central to experienced subjectivity (Ahmed 2012). Music listening while moving through public space has previously been studied as a way of creating a private auditory bubble for the individual (Bull 2000, Cahir and Werner 2013), and in this article feminist theory on emotion (Ahmed 2010) and space (Massey 2005) is employed in order to understand mobile music streaming. More specifically, it discusses what can happen when mobile media technology is used to listen to music in public space and investigates the interconnectedness of bodies, music, technology and space. The article is based on autoethnographic material of mobile music streaming in public and concludes that a forward movement shaped by happiness is a desired result of mobile music streaming. The valuing of "forward" is critically examined from the point of view of feminist theory, and failed music listening moments are also discussed in terms of emotion and space.

  19. Exploring combinations of auditory and visual stimuli for gaze-independent brain-computer interfaces.

    Directory of Open Access Journals (Sweden)

    Xingwei An

    Full Text Available For Brain-Computer Interface (BCI) systems that are designed for users with severe impairments of the oculomotor system, an appropriate mode of presenting stimuli to the user is crucial. To investigate whether multi-sensory integration can be exploited in the gaze-independent event-related potential (ERP) speller and enhance BCI performance, we designed a visual-auditory speller. We investigate the possibility of enhancing stimulus presentation by combining visual and auditory stimuli within gaze-independent spellers. In this study with N = 15 healthy users, two different ways of combining the two sensory modalities are proposed: simultaneous redundant streams (Combined-Speller) and interleaved independent streams (Parallel-Speller). Unimodal stimuli were applied as control conditions. The workload, ERP components, classification accuracy and resulting spelling speed were analyzed for each condition. The Combined-Speller showed a lower workload than unimodal paradigms, without sacrificing spelling performance. Besides, shorter latencies, lower amplitudes, as well as a shift of the temporal and spatial distribution of discriminative information were observed for the Combined-Speller; these differences motivate future studies to investigate their causes. For the more innovative and demanding Parallel-Speller, where the auditory and visual domains are independent from each other, a proof of concept was obtained: fifteen users could spell online with a mean accuracy of 87.7% (chance level <3%), showing a competitive average speed of 1.65 symbols per minute. The fact that it requires only one selection period per symbol makes it a good candidate for a fast communication channel. It brings new insight into true multisensory stimulus paradigms. Novel approaches for combining two sensory modalities were designed here, which are valuable for the development of ERP-based BCI paradigms.

  20. Attentional influences on functional mapping of speech sounds in human auditory cortex

    Directory of Open Access Journals (Sweden)

    Elbert Thomas

    2004-07-01

    Full Text Available Abstract Background The speech signal contains both information about phonological features such as place of articulation and non-phonological features such as speaker identity. These are different aspects of the 'what'-processing stream (speaker vs. speech content), and here we show that they can be further segregated as they may occur in parallel but within different neural substrates. Subjects listened to two different vowels, each spoken by two different speakers. During one block, they were asked to identify a given vowel irrespective of the speaker (phonological categorization), while during the other block the speaker had to be identified irrespective of the vowel (speaker categorization). Auditory evoked fields were recorded using 148-channel magnetoencephalography (MEG), and magnetic source imaging was obtained for 17 subjects. Results During phonological categorization, a vowel-dependent difference of N100m source location perpendicular to the main tonotopic gradient replicated previous findings. In speaker categorization, the relative mapping of vowels remained unchanged but sources were shifted towards more posterior and more superior locations. Conclusions These results imply that the N100m reflects the extraction of abstract invariants from the speech signal. This part of the processing is accomplished in auditory areas anterior to AI, which are part of the auditory 'what' system. This network seems to include spatially separable modules for identifying the phonological information and for associating it with a particular speaker, activated in synchrony but within different regions, suggesting that 'what' processing can be more adequately modeled by a stream of parallel stages. The relative activation of the parallel processing stages can be modulated by attentional or task demands.

  1. A test battery measuring auditory capabilities of listening panels

    DEFF Research Database (Denmark)

    Ghani, Jody; Ellermeier, Wolfgang; Zimmer, Karin

    2005-01-01

    a battery of tests covering a larger range of auditory capabilities in order to assess individual listeners. The format of all tests is kept as 'objective' as possible by using a three-alternative forced-choice paradigm in which the subject must choose which of the sound samples is different, thus keeping...... the instruction to the subjects simple and common for all tests. Both basic (e.g. frequency discrimination) and complex (e.g. profile analysis) psychoacoustic tests are covered in the battery and a threshold of discrimination or detection is obtained for each test. Data were collected on 24 listeners who had been...... recruited for participation in an expert listening panel for evaluating the sound quality of hi-fi audio systems. The test battery data were related to the actual performance of the listeners when judging the degradation in quality produced by audio codecs....

  2. Stream salamanders as indicators of stream quality in Maryland, USA

    Science.gov (United States)

    Southerland, M.T.; Jung, R.E.; Baxter, D.P.; Chellman, I.C.; Mercurio, G.; Volstad, J.H.

    2004-01-01

    Biological indicators are critical to the protection of small, headwater streams and the ecological values they provide. Maryland and other state monitoring programs have determined that fish indicators are ineffective in small streams, where stream salamanders may replace fish as top predators. Because of their life history, physiology, abundance, and ubiquity, stream salamanders are likely representative of biological integrity in these streams. The goal of this study was to determine whether stream salamanders are effective indicators of ecological conditions across biogeographic regions and gradients of human disturbance. During the summers of 2001 and 2002, we intensively surveyed for stream salamanders at 76 stream sites located west of the Maryland Coastal Plain, sites also monitored by the Maryland Biological Stream Survey (MBSS) and City of Gaithersburg. We found 1,584 stream salamanders, including all eight species known in Maryland, using two 15 × 2 m transects and two 4 m² quadrats that spanned both stream bank and channel. We performed removal sampling on transects to estimate salamander species detection probabilities, which ranged from 0.67-0.85. Stepwise regressions identified 15 of 52 non-salamander variables, representing water quality, physical habitat, land use, and biological conditions, which best predicted salamander metrics. Indicator development involved (1) identifying reference (non-degraded) and degraded sites (using percent forest, shading, riparian buffer width, aesthetic rating, and benthic macroinvertebrate and fish indices of biotic integrity); (2) testing 12 candidate salamander metrics (representing species richness and composition, abundance, species tolerance, and reproductive function) for their ability to distinguish reference from degraded sites; and (3) combining metrics into an index that effectively discriminated sites according to known stream conditions. Final indices for Highlands, Piedmont, and Non-Coastal Plain
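Detection probabilities like the 0.67-0.85 range quoted above are conventionally derived from removal sampling with the standard two-pass (Zippin/Seber-Le Cren) estimator. A minimal sketch with hypothetical counts (the abstract does not give the raw pass counts):

```python
def removal_estimates(c1, c2):
    """Two-pass removal estimator: return (abundance estimate, per-pass
    detection probability) from first- and second-pass removal counts.
    Requires declining catches (c1 > c2)."""
    if c1 <= c2:
        raise ValueError("removal estimator requires c1 > c2")
    p_hat = (c1 - c2) / c1        # per-pass capture/detection probability
    n_hat = c1 ** 2 / (c1 - c2)   # estimated abundance at the transect
    return n_hat, p_hat

# Hypothetical transect: 20 salamanders removed on pass 1, 5 on pass 2.
n_hat, p_hat = removal_estimates(20, 5)   # N ≈ 26.7, p = 0.75
```

The intuition: if detection probability is p, pass 2 samples the (1-p) fraction missed on pass 1, so the ratio of catches pins down p.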

  3. Data streams algorithms and applications

    CERN Document Server

    Muthukrishnan, S

    2014-01-01

    Data stream algorithms as an active research agenda emerged only over the past few years, even though the concept of making few passes over the data for performing computations has been around since the early days of Automata Theory. The data stream agenda now pervades many branches of Computer Science including databases, networking, knowledge discovery and data mining, and hardware systems. Industry is in synch too, with Data Stream Management Systems (DSMSs) and special hardware to deal with data speeds. Even beyond Computer Science, data stream concerns are emerging in physics, atmospheric
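A canonical example of the "few passes over the data" paradigm this record describes is the Misra-Gries heavy-hitters summary, which finds candidate frequent items in a single pass with bounded memory (this is a standard textbook algorithm, not one taken from the book itself):

```python
def misra_gries(stream, k):
    """One-pass Misra-Gries summary: returns candidate items that may occur
    more than len(stream)/k times, using at most k-1 counters."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # decrement every counter; drop any that reach zero
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters

stream = list("abababc") * 100          # 'a' and 'b' dominate the stream
candidates = misra_gries(stream, 3)     # only 'a' and 'b' survive
```

Memory stays O(k) regardless of stream length, which is the defining constraint of the data-stream model.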

  4. Auditory and visual refractory period effects in children and adults: an ERP study.

    Science.gov (United States)

    Coch, Donna; Skendzel, Wendy; Neville, Helen J

    2005-09-01

    This developmental study was designed to investigate event-related potential (ERP) refractory period effects in the auditory and visual modalities in children and adults and to correlate these electrophysiological measures with standard behavioral measures. ERPs, accuracy, and reaction time were recorded as school-age children and adults monitored a stream of repetitive standard stimuli and detected occasional targets. Standards were presented at various interstimulus intervals (ISIs) in order to measure refractory period effects on early sensory components. As has been reported previously in adults, larger components for standards with longer ISIs were observed for an auditory N1 and the visual occipital P1 and P2 in adults. Remarkably similar effects were observed in children. However, only children showed refractory effects on the amplitude of the visual N1 and P2 measured at anterior sites. Across groups, behavioral accuracy and reaction time were correlated with latencies of auditory N1 and visual P2 across ISI conditions. The results establish a normal course of development for auditory and visual ERP refractory period effects across the 6- to 8-year-old age range and indicate similar refractoriness in the neural systems indexed by ERPs in these paradigms in typically developing children and adults. Further, the results suggest that electrophysiological measures and standard behavioral measures may at least in part index similar processing in the present paradigms. These findings provide a foundation for further investigation into atypical development, particularly in those populations for which processing time deficits have been implicated such as children with specific language impairment or dyslexia.

  5. Stable individual characteristics in the perception of multiple embedded patterns in multistable auditory stimuli

    Directory of Open Access Journals (Sweden)

    Susan Denham

    2014-02-01

    Full Text Available The ability of the auditory system to parse complex scenes into component objects in order to extract information from the environment is very robust, yet the processing principles underlying this ability are still not well understood. This study was designed to investigate the proposal that the auditory system constructs multiple interpretations of the acoustic scene in parallel, based on the finding that when listening to a long repetitive sequence listeners report switching between different perceptual organizations. Using the ‘ABA-’ auditory streaming paradigm, we trained listeners until they could reliably recognise all possible embedded patterns of length four which could in principle be extracted from the sequence, and in a series of test sessions investigated their spontaneous reports of those patterns. With the training allowing them to identify and mark a wider variety of possible patterns, participants spontaneously reported many more patterns than the ones traditionally assumed (Integrated vs. Segregated). Despite receiving consistent training and despite the apparent randomness of perceptual switching, we found individual switching patterns were idiosyncratic; i.e., the perceptual switching patterns of each participant were more similar to their own switching patterns in different sessions than to those of other participants. These individual differences were found to be preserved even between test sessions held a year after the initial experiment. Our results support the idea that the auditory system attempts to extract an exhaustive set of embedded patterns which can be used to generate expectations of future events and which, by competing for dominance, give rise to (changing) perceptual awareness, with the characteristics of pattern discovery and perceptual competition having a strong idiosyncratic component. Perceptual multistability thus provides a means for characterizing both general mechanisms and individual differences in

  6. Auditory-motor integration during fast repetition: the neuronal correlates of shadowing.

    Science.gov (United States)

    Peschke, C; Ziegler, W; Kappes, J; Baumgaertner, A

    2009-08-01

    This fMRI study examined which structures of a proposed dorsal stream system are involved in the auditory-motor integration during fast overt repetition. We used a shadowing task which requires immediate repetition of an auditory-verbal input and is supposed to elicit unconscious imitation effects of phonologically irrelevant speech parameters. Subjects' responses were recorded in the scanner. To examine automated auditory-motor mapping of speech gestures of others onto one's own speech production system we contrasted the shadowing of pseudowords produced by multiple speakers (men, women, and children) with the shadowing of pseudowords produced by a single speaker. Furthermore, we asked whether behavioral variables predicted changes in functional activation during shadowing. Shadowing multiple speakers compared to a single speaker elicited increased bilateral activation predominantly in the superior temporal sulci. These regions may mediate acoustic-phonetic speaker normalization in preparation of a translation of perceptual into motor information. Additional activation in Broca's area and the thalamus may reflect motor effects of the adaptation to multiple speaker models. Item-wise correlational analyses of response latencies with BOLD signal changes indicated that longer latencies were associated with increased activation in the left parietal operculum, suggesting that this area plays a central role in the actual transfer of auditory-verbal information to speech motor representations. A multiple regression of behavioral with imaging data showed activation in a right inferior parietal area near the temporo-parietal boundary which correlated positively with the degree of speech rate imitation and negatively with response latency. This activation may be attributable to attentional and/or paralinguistic processes.

  7. Dynamic Correlations between Intrinsic Connectivity and Extrinsic Connectivity of the Auditory Cortex in Humans

    Directory of Open Access Journals (Sweden)

    Zhuang Cui

    2017-08-01

    Full Text Available The arrival of sound signals in the auditory cortex (AC) triggers both local and inter-regional signal propagations over time, up to hundreds of milliseconds, and builds up both intrinsic functional connectivity (iFC) and extrinsic functional connectivity (eFC) of the AC. However, interactions between iFC and eFC are largely unknown. Using intracranial stereo-electroencephalographic recordings in people with drug-refractory epilepsy, this study mainly investigated the temporal dynamics of the relationships between iFC and eFC of the AC. The results showed that a Gaussian wideband-noise burst markedly elicited potentials in both the AC and numerous higher-order cortical regions outside the AC (non-auditory cortices). Granger causality analyses revealed that in the earlier time window, iFC of the AC was positively correlated with both eFC from the AC to the inferior temporal gyrus and that to the inferior parietal lobule, while in later periods, the iFC of the AC was positively correlated with eFC from the precentral gyrus to the AC and that from the insula to the AC. In conclusion, dual-directional interactions occur between iFC and eFC of the AC at different time windows following the sound stimulation and may form the foundation underlying various central auditory processes, including auditory sensory memory, object formation, and integrations between sensory, perceptional, attentional, motor, emotional, and executive processes.
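The Granger causality analyses mentioned in this record rest on a simple idea: signal y "Granger-causes" x if past values of y improve the prediction of x beyond what past values of x already provide. A minimal sketch on synthetic data (this is the generic textbook formulation, not the authors' pipeline):

```python
import numpy as np

def granger_f(x, y, p=2):
    """F statistic for 'y Granger-causes x' at lag order p: compare an AR(p)
    model of x against a model that also includes p lags of y."""
    n = len(x)
    own = np.column_stack([x[p - k - 1 : n - k - 1] for k in range(p)])
    cross = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
    target = x[p:]
    ones = np.ones((n - p, 1))
    rss = lambda X: np.sum(
        (target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r = rss(np.hstack([ones, own]))          # restricted: past of x only
    rss_f = rss(np.hstack([ones, own, cross]))   # full: past of x and y
    return ((rss_r - rss_f) / p) / (rss_f / (n - 3 * p - 1))

# Synthetic check: y drives x with a one-sample delay, so the y->x
# statistic should dwarf the x->y one.
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.4 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
```

Directionality (AC-to-region vs. region-to-AC, as in the abstract) is obtained by swapping the roles of the two signals.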

  8. Translational control of auditory imprinting and structural plasticity by eIF2α.

    Science.gov (United States)

    Batista, Gervasio; Johnson, Jennifer Leigh; Dominguez, Elena; Costa-Mattioli, Mauro; Pena, Jose L

    2016-12-23

    The formation of imprinted memories during a critical period is crucial for vital behaviors, including filial attachment. Yet, little is known about the underlying molecular mechanisms. Using a combination of behavior, pharmacology, in vivo surface sensing of translation (SUnSET) and DiOlistic labeling, we found that translational control by the eukaryotic translation initiation factor 2 alpha (eIF2α) bidirectionally regulates auditory but not visual imprinting and related changes in structural plasticity in chickens. Increasing phosphorylation of eIF2α (p-eIF2α) reduces translation rates and spine plasticity, and selectively impairs auditory imprinting. By contrast, inhibition of an eIF2α kinase or blocking the translational program controlled by p-eIF2α enhances auditory imprinting. Importantly, these manipulations are able to reopen the critical period. Thus, we have identified a translational control mechanism that selectively underlies auditory imprinting. Restoring translational control of eIF2α holds the promise to rejuvenate adult brain plasticity and restore learning and memory in a variety of cognitive disorders.

  9. Altered auditory BOLD response to conspecific birdsong in zebra finches with stuttered syllables.

    Directory of Open Access Journals (Sweden)

    Henning U Voss

    2010-12-01

    Full Text Available How well a songbird learns a song appears to depend on the formation of a robust auditory template of its tutor's song. Using functional magnetic resonance neuroimaging we examine auditory responses in two groups of zebra finches that differ in the type of song they sing after being tutored by birds producing stuttering-like syllable repetitions in their songs. We find that birds that learn to produce the stuttered syntax show attenuated blood oxygenation level-dependent (BOLD) responses to the tutor's song, and more pronounced responses to conspecific song primarily in the auditory area field L of the avian forebrain, when compared to birds that produce normal song. These findings are consistent with the presence of a sensory song template critical for song learning in auditory areas of the zebra finch forebrain. In addition, they suggest a relationship between an altered response related to familiarity and/or saliency of song stimuli and the production of variant songs with stuttered syllables.

  10. Right hemispheric contributions to fine auditory temporal discriminations: high-density electrical mapping of the duration mismatch negativity (MMN)

    Directory of Open Access Journals (Sweden)

    Pierfilippo De Sanctis

    2009-04-01

    Full Text Available That language processing is primarily a function of the left hemisphere has led to the supposition that auditory temporal discrimination is particularly well-tuned in the left hemisphere, since speech discrimination is thought to rely heavily on the registration of temporal transitions. However, physiological data have not consistently supported this view. Rather, functional imaging studies often show equally strong, if not stronger, contributions from the right hemisphere during temporal processing tasks, suggesting a more complex underlying neural substrate. The mismatch negativity (MMN) component of the human auditory evoked potential (AEP) provides a sensitive metric of duration processing in human auditory cortex, and lateralization of the MMN can be readily assayed when sufficiently dense electrode arrays are employed. Here, the sensitivity of the left and right auditory cortex for temporal processing was measured by recording the MMN to small duration deviants presented to either the left or right ear. We found that duration deviants differing by just 15% (i.e. rare 115 ms tones presented in a stream of 100 ms tones) elicited a significant MMN for tones presented to the left ear (biasing the right hemisphere). However, deviants presented to the right ear elicited no detectable MMN for this separation. Further, participants detected significantly more duration deviants and committed fewer false alarms for tones presented to the left ear during a subsequent psychophysical testing session. In contrast to the prevalent model, these results point to equivalent if not greater right hemisphere contributions to temporal processing of small duration changes.

  11. Relation between Streaming Potential and Streaming Electrification Generated by Streaming of Water through a Sandwich-type Cell

    OpenAIRE

    Maruyama, Kazunori; Nikaido, Mitsuru; Hara, Yoshinori; Tanizaki, Yoshie

    2012-01-01

    Both streaming potential and accumulated charge of water flowed out were measured simultaneously using a sandwich-type cell. The voltages generated in divided sections along flow direction satisfied additivity. The sign of streaming potential agreed with that of streaming electrification. The relation between streaming potential and streaming electrification was explained from a viewpoint of electrical double layer in glass-water interface.
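The coupling between pressure-driven flow and electrical potential in this record is conventionally described by the Helmholtz-Smoluchowski relation for the electrical double layer. The sketch below uses that standard relation with illustrative parameter values, not figures from the paper:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def streaming_potential(dP, zeta, eps_r=78.5, mu=8.9e-4, kappa=5e-4):
    """Helmholtz-Smoluchowski streaming potential dV (V) for a pressure drop
    dP (Pa), zeta potential zeta (V), relative permittivity eps_r, viscosity
    mu (Pa*s) and liquid conductivity kappa (S/m). Illustrative defaults
    roughly appropriate for water in a glass cell."""
    return eps_r * EPS0 * zeta * dP / (mu * kappa)

# A negative zeta potential (typical of glass-water interfaces) yields a
# negative streaming potential, consistent with the sign agreement with
# streaming electrification; linearity in dP mirrors the additivity of the
# voltages across divided sections along the flow direction.
dv = streaming_potential(dP=1e4, zeta=-0.05)
```

Because the relation is linear in the pressure drop, splitting the cell into sections and summing their potentials reproduces the whole-cell value, which is the additivity the abstract reports.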

  12. Analyzing indicators of stream health for Minnesota streams

    Science.gov (United States)

    Singh, U.; Kocian, M.; Wilson, B.; Bolton, A.; Nieber, J.; Vondracek, B.; Perry, J.; Magner, J.

    2005-01-01

    Recent research has emphasized the importance of using physical, chemical, and biological indicators of stream health for diagnosing impaired watersheds and their receiving water bodies. A multidisciplinary team at the University of Minnesota is carrying out research to develop a stream classification system for Total Maximum Daily Load (TMDL) assessment. Funding for this research is provided by the United States Environmental Protection Agency and the Minnesota Pollution Control Agency. One objective of the research study involves investigating the relationships between indicators of stream health and localized stream characteristics. Measured data from Minnesota streams collected by various government and non-government agencies and research institutions have been obtained for the research study. Innovative Geographic Information Systems tools developed by the Environmental Science Research Institute and the University of Texas are being utilized to combine and organize the data. Simple linear relationships between index of biological integrity (IBI) and channel slope, two-year stream flow, and drainage area are presented for the Redwood River and the Snake River Basins. Results suggest that more rigorous techniques are needed to successfully capture trends in IBI scores. Additional analyses will be done using multiple regression, principal component analysis, and clustering techniques. Uncovering key independent variables and understanding how they fit together to influence stream health are critical in the development of a stream classification for TMDL assessment.
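The "simple linear relationships between IBI and channel slope, two-year stream flow, and drainage area" described above amount to screening candidate predictors one at a time by regression fit. A minimal sketch on hypothetical site data (the variable values are invented for illustration):

```python
import numpy as np

def screen_predictors(y, candidates):
    """Rank candidate variables by the R^2 of a simple linear regression
    against the response (a crude first pass, not full stepwise selection)."""
    scores = {}
    for name, x in candidates.items():
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (intercept + slope * x)
        scores[name] = 1.0 - resid.var() / y.var()
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical site data: IBI driven by channel slope, not drainage area.
rng = np.random.default_rng(1)
channel_slope = rng.uniform(0.001, 0.05, 50)
drainage_area = rng.uniform(5.0, 500.0, 50)
ibi = 60 - 400 * channel_slope + rng.normal(0, 2, 50)
ranking = screen_predictors(ibi, {"slope": channel_slope,
                                  "area": drainage_area})
```

The more rigorous techniques the abstract calls for (multiple regression, PCA, clustering) extend this by modeling predictors jointly rather than one at a time.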

  13. Streaming movies, media, and instant access

    CERN Document Server

    Dixon, Wheeler Winston

    2013-01-01

    Film stocks are vanishing, but the iconic images of the silver screen remain -- albeit in new, sleeker formats. Today, viewers can instantly stream movies on televisions, computers, and smartphones. Gone are the days when films could only be seen in theaters or rented at video stores: movies are now accessible at the click of a button, and there are no reels, tapes, or discs to store. Any film or show worth keeping may be collected in the virtual cloud and accessed at will through services like Netflix, Hulu, and Amazon Instant. The movies have changed, and we are changing with them.

  14. Neurodynamics, tonality, and the auditory brainstem response.

    Science.gov (United States)

    Large, Edward W; Almonte, Felix V

    2012-04-01

    Tonal relationships are foundational in music, providing the basis upon which musical structures, such as melodies, are constructed and perceived. A recent dynamic theory of musical tonality predicts that networks of auditory neurons resonate nonlinearly to musical stimuli. Nonlinear resonance leads to stability and attraction relationships among neural frequencies, and these neural dynamics give rise to the perception of relationships among tones that we collectively refer to as tonal cognition. Because this model describes the dynamics of neural populations, it makes specific predictions about human auditory neurophysiology. Here, we show how predictions about the auditory brainstem response (ABR) are derived from the model. To illustrate, we derive a prediction about population responses to musical intervals that has been observed in the human brainstem. Our modeled ABR shows qualitative agreement with important features of the human ABR. This provides a source of evidence that fundamental principles of auditory neurodynamics might underlie the perception of tonal relationships, and forces reevaluation of the role of learning and enculturation in tonal cognition. © 2012 New York Academy of Sciences.

  15. Cancer of the external auditory canal

    DEFF Research Database (Denmark)

    Nyrop, Mette; Grøntved, Aksel

    2002-01-01

    OBJECTIVE: To evaluate the outcome of surgery for cancer of the external auditory canal and relate this to the Pittsburgh staging system used both on squamous cell carcinoma and non-squamous cell carcinoma. DESIGN: Retrospective case series of all patients who had surgery between 1979 and 2000. M...

  16. Diagnosing Dyslexia: The Screening of Auditory Laterality.

    Science.gov (United States)

    Johansen, Kjeld

    A study investigated whether a correlation exists between the degree and nature of left-brain laterality and specific reading and spelling difficulties. Subjects, 50 normal readers and 50 reading disabled persons native to the island of Bornholm, had their auditory laterality screened using pure-tone audiometry and dichotic listening. Results…

  17. Neural Entrainment to Auditory Imagery of Rhythms

    Directory of Open Access Journals (Sweden)

    Haruki Okawa

    2017-10-01

    Full Text Available A method of reconstructing perceived or imagined music by analyzing brain activity has not yet been established. As a first step toward developing such a method, we aimed to reconstruct the imagery of rhythm, which is one element of music. It has been reported that a periodic electroencephalogram (EEG) response is elicited while a human imagines a binary or ternary meter on a musical beat. However, it is not clear whether or not brain activity synchronizes with a fully imagined beat and meter without auditory stimuli. To investigate neural entrainment to imagined rhythm during auditory imagery of beat and meter, we recorded EEG while nine participants (eight males and one female) imagined three types of rhythm without auditory stimuli but with visual timing, and then we analyzed the amplitude spectra of the EEG. We also recorded EEG while the participants only gazed at the visual timing as a control condition to confirm the visual effect. Furthermore, we derived features of the EEG using canonical correlation analysis (CCA) and conducted an experiment to individually classify the three types of imagined rhythm from the EEG. The results showed that classification accuracies exceeded the chance level in all participants. These results suggest that auditory imagery of meter elicits a periodic EEG response that changes at the imagined beat and meter frequency, even in the fully imagined conditions. This study represents the first step toward the realization of a method for reconstructing imagined music from brain activity.
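CCA-based classification of a periodic EEG response, as used in this record, typically works by correlating the multichannel signal with sine-cosine reference sets at each candidate frequency and picking the best match. A generic sketch on synthetic data (this is the standard template-matching formulation, not the authors' exact feature pipeline):

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def classify_rhythm(eeg, fs, candidate_freqs, n_harm=2):
    """Pick the candidate beat/meter frequency whose sine-cosine reference
    set correlates best (via CCA) with the multichannel EEG."""
    t = np.arange(eeg.shape[0]) / fs
    def score(f):
        refs = np.column_stack([g(2 * np.pi * f * (h + 1) * t)
                                for h in range(n_harm)
                                for g in (np.sin, np.cos)])
        return max_canon_corr(eeg, refs)
    return max(candidate_freqs, key=score)

# Synthetic 4-channel "EEG" with a weak 2.4 Hz component buried in noise.
fs = 256
t = np.arange(fs * 4) / fs
rng = np.random.default_rng(2)
source = np.sin(2 * np.pi * 2.4 * t)
eeg = np.column_stack([0.5 * source + rng.normal(0, 1, t.size)
                       for _ in range(4)])
```

CCA pools information across channels, which is why the entrained frequency can be recovered even when each single channel is noise-dominated.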

  18. Affective priming with auditory speech stimuli

    NARCIS (Netherlands)

    Degner, J.

    2011-01-01

    Four experiments explored the applicability of auditory stimulus presentation in affective priming tasks. In Experiment 1, it was found that standard affective priming effects occur when prime and target words are presented simultaneously via headphones similar to a dichotic listening procedure. In

  19. Auditory risk estimates for youth target shooting.

    Science.gov (United States)

    Meinke, Deanna K; Murphy, William J; Finan, Donald S; Lankford, James E; Flamme, Gregory A; Stewart, Michael; Soendergaard, Jacob; Jerome, Trevor W

    2014-03-01

    To characterize the impulse noise exposure and auditory risk for youth recreational firearm users engaged in outdoor target shooting events. The youth shooting positions are typically standing or sitting at a table, which places the firearm closer to the ground or a reflective surface when compared to adult shooters. Acoustic characteristics were examined and the auditory risk estimates were evaluated using contemporary damage-risk criteria for unprotected adult listeners and the 120-dB peak limit suggested by the World Health Organization (1999) for children. Impulses were generated by 26 firearm/ammunition configurations representing rifles, shotguns, and pistols used by youth. Measurements were obtained relative to a youth shooter's left ear. All firearms generated peak levels that exceeded the 120-dB peak limit suggested by the WHO for children. In general, shooting from the seated position over a tabletop increases the peak levels and LAeq8, and reduces the unprotected maximum permissible exposures (MPEs) for both rifles and pistols. Pistols pose the greatest auditory risk when fired over a tabletop. Youth should utilize smaller caliber weapons, preferably from the standing position, and always wear hearing protection whenever engaging in shooting activities to reduce the risk of auditory damage.
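The 120-dB peak limit cited in this record is defined on the standard decibel scale for sound pressure, referenced to 20 µPa. A quick sketch of the conversion (the 150 dB example figure is illustrative, not a measurement from the study):

```python
import math

P0 = 20e-6  # reference pressure: 20 micropascals

def peak_db_spl(p_peak):
    """Peak sound pressure level (dB re 20 uPa) from peak pressure in Pa."""
    return 20 * math.log10(p_peak / P0)

def pa_from_db(level_db):
    """Peak pressure (Pa) corresponding to a given peak SPL."""
    return P0 * 10 ** (level_db / 20)

# The WHO 120 dB peak limit corresponds to a peak pressure of 20 Pa.
# An impulse of, say, 150 dB peak is about 632 Pa: every 20 dB is a
# tenfold increase in pressure, so 30 dB is a ~32x higher pressure.
limit_pa = pa_from_db(120)
```

This logarithmic scaling is why even "small" dB excesses over the limit represent large physical overpressures at the ear.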

  20. Auditory Neuropathy Spectrum Disorder: A Review

    Science.gov (United States)

    Norrix, Linda W.; Velenovsky, David S.

    2014-01-01

    Purpose: Auditory neuropathy spectrum disorder, or ANSD, can be a confusing diagnosis to physicians, clinicians, those diagnosed, and parents of children diagnosed with the condition. The purpose of this review is to provide the reader with an understanding of the disorder, the limitations in current tools to determine site(s) of lesion, and…

  1. Self-affirmation in auditory persuasion

    NARCIS (Netherlands)

    Elbert, Sarah; Dijkstra, Arie

    Persuasive health information can be presented through an auditory channel. Curiously enough, the effect of voice cues in health persuasion has hardly been studied. Research concerning visual persuasive messages showed that self-affirmation results in a more open-minded reaction to threatening

  2. Lateralization of auditory-cortex functions.

    Science.gov (United States)

    Tervaniemi, Mari; Hugdahl, Kenneth

    2003-12-01

In the present review, we summarize the most recent findings and current views about the structural and functional basis of human brain lateralization in the auditory modality. The main emphasis is on hemodynamic and electromagnetic data from healthy adult participants with regard to music- vs. speech-sound encoding. Moreover, a selective set of behavioral dichotic-listening (DL) results and clinical findings (e.g., schizophrenia, dyslexia) are included. It is shown that the human brain has a strong predisposition to process speech sounds in the left and music sounds in the right auditory cortex in the temporal lobe. To a great extent, an auditory area located at the posterior end of the temporal lobe (the planum temporale [PT]) underlies this functional asymmetry. However, the predisposition is bound not to the informational content of the sound but to the rapid temporal information that is more common in speech than in music sounds. Finally, we present evidence for the vulnerability of this functional specialization of sound processing: altered forms of lateralization may be caused by top-down and bottom-up effects, both inter- and intraindividually. In other words, relatively small changes in acoustic sound features or in their familiarity may modify the degree to which the left vs. right auditory areas contribute to sound encoding.

  3. [Auditory processing in specific language disorder].

    Science.gov (United States)

    Idiazábal-Aletxa, M A; Saperas-Rodríguez, M

    2008-01-01

Specific language impairment (SLI) is diagnosed when a child has difficulty in producing or understanding spoken language for no apparent reason. The diagnosis is made when language development is out of keeping with other aspects of development and possible explanatory causes have been excluded. In recent years, the neurosciences have turned to the study of SLI. The ability to process two or more rapidly presented, successive auditory stimuli is believed to underlie successful language acquisition, and it has been proposed that SLI is the consequence of low-level abnormalities in auditory perception. Moreover, children with SLI show a specific deficit in the automatic discrimination of syllables. Electrophysiological methods may reveal underlying immaturity or other abnormalities of auditory processing even when behavioural thresholds look normal. There is much controversy about the role of such deficits in causing these children's language problems, and it has been difficult to establish solid, replicable findings in this area because of the heterogeneity of the population and because insufficient attention has been paid to maturational aspects of auditory processing.

  4. Auditory confrontation naming in Alzheimer's disease.

    Science.gov (United States)

    Brandt, Jason; Bakker, Arnold; Maroof, David Aaron

    2010-11-01

Naming is a fundamental aspect of language and is virtually always assessed with visual confrontation tests. Tests of the ability to name objects by their characteristic sounds would be particularly useful in the assessment of visually impaired patients, and may be particularly sensitive in Alzheimer's disease (AD). We developed an auditory naming task, requiring the identification of the source of environmental sounds (i.e., animal calls, musical instruments, vehicles) and multiple-choice recognition of those not identified. In two separate studies, mild-to-moderate AD patients performed more poorly than cognitively normal elderly on the auditory naming task. This task was also more difficult than two versions of a comparable visual naming task, and correlated more highly with Mini-Mental State Exam score. Internal consistency reliability was acceptable, although ROC analysis revealed auditory naming to be slightly less successful than visual confrontation naming in discriminating AD patients from normal participants. Nonetheless, our auditory naming task may prove useful in research and clinical practice, especially with visually impaired patients.

  5. The Goldilocks Effect in Infant Auditory Attention

    Science.gov (United States)

    Kidd, Celeste; Piantadosi, Steven T.; Aslin, Richard N.

    2014-01-01

    Infants must learn about many cognitive domains (e.g., language, music) from auditory statistics, yet capacity limits on their cognitive resources restrict the quantity that they can encode. Previous research has established that infants can attend to only a subset of available acoustic input. Yet few previous studies have directly examined infant…

  6. Auditory-motor coupling affects phonetic encoding.

    Science.gov (United States)

    Schmidt-Kassow, Maren; Thöne, Katharina; Kaiser, Jochen

    2017-11-27

Recent studies have shown that moving in synchrony with auditory stimuli boosts attention allocation and verbal learning. Furthermore, rhythmic tones are processed more efficiently than temporally random tones ('timing effect'), and this effect is increased when participants actively synchronize their motor performance with the rhythm of the tones, resulting in auditory-motor synchronization. Here, we investigated whether this also applies to sequences of linguistic stimuli (syllables). We compared temporally irregular syllable sequences with two temporally regular conditions in which either the interval between syllable onsets (stimulus onset asynchrony, SOA) or the interval between the syllables' vowel onsets was kept constant. Entrainment to the stimulus presentation frequency (1 Hz) and event-related potentials were assessed in 24 adults who were instructed to detect pre-defined deviant syllables while they either pedaled or sat still on a stationary exercise bike. We found larger 1 Hz entrainment and P300 amplitudes for the SOA presentation during motor activity. Furthermore, the magnitude of the P300 component correlated with motor variability in the SOA condition and with 1 Hz entrainment, while 1 Hz entrainment in turn correlated with auditory-motor synchronization performance. These findings demonstrate that acute auditory-motor coupling facilitates phonetic encoding. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. fMRI of the auditory system: understanding the neural basis of auditory gestalt.

    Science.gov (United States)

    Di Salle, Francesco; Esposito, Fabrizio; Scarabino, Tommaso; Formisano, Elia; Marciano, Elio; Saulino, Claudio; Cirillo, Sossio; Elefante, Raffaele; Scheffler, Klaus; Seifritz, Erich

    2003-12-01

Functional magnetic resonance imaging (fMRI) has rapidly become the most widely used imaging method for studying brain functions in humans. This is a result of its extreme flexibility of use and of the astonishingly detailed spatial and temporal information it provides. Nevertheless, until very recently, the study of the auditory system progressed at a considerably slower pace than that of other functional systems. Several factors have limited fMRI research in the auditory field, including some intrinsic features of auditory functional anatomy and some peculiar interactions between the fMRI technique and audition. A well-known difficulty arises from the high-intensity acoustic noise produced by gradient switching in echo-planar imaging (EPI), as well as in other fMRI sequences more similar to conventional MR sequences. The acoustic noise interacts in an unpredictable way with the experimental stimuli, both perceptually and in the evoked hemodynamics. To overcome this problem, different approaches have been proposed recently that generally require careful tailoring of the experimental design and the fMRI methodology to the specific requirements posed by auditory research. These novel methodological approaches can make fMRI exploration of auditory processing much easier and more reliable, and thus may permit closing the gap with other fields of neuroscience research. As a result, some fundamental neural underpinnings of audition are being clarified, and the way sound stimuli are integrated into the auditory gestalt is beginning to be understood.

  8. Biological impact of auditory expertise across the life span: musicians as a model of auditory learning

    Science.gov (United States)

    Strait, Dana L.; Kraus, Nina

    2013-01-01

    Experience-dependent characteristics of auditory function, especially with regard to speech-evoked auditory neurophysiology, have garnered increasing attention in recent years. This interest stems from both pragmatic and theoretical concerns as it bears implications for the prevention and remediation of language-based learning impairment in addition to providing insight into mechanisms engendering experience-dependent changes in human sensory function. Musicians provide an attractive model for studying the experience-dependency of auditory processing in humans due to their distinctive neural enhancements compared to nonmusicians. We have only recently begun to address whether these enhancements are observable early in life, during the initial years of music training when the auditory system is under rapid development, as well as later in life, after the onset of the aging process. Here we review neural enhancements in musically trained individuals across the life span in the context of cellular mechanisms that underlie learning, identified in animal models. Musicians’ subcortical physiologic enhancements are interpreted according to a cognitive framework for auditory learning, providing a model by which to study mechanisms of experience-dependent changes in auditory function in humans. PMID:23988583

  9. Auditory pathology in cri-du-chat (5p-) syndrome: phenotypic evidence for auditory neuropathy.

    Science.gov (United States)

    Swanepoel, D

    2007-10-01

5p- (cri-du-chat syndrome) is a well-defined clinical entity presenting with phenotypic and cytogenetic variability. Despite recognition that abnormalities of audition are common, limited reports on auditory functioning in affected individuals are available. The current study presents a case illustrating auditory functioning in a 22-month-old patient diagnosed with 5p- syndrome, karyotype 46,XX,del(5)(p13). Auditory neuropathy was diagnosed based on abnormal auditory evoked potentials, with neural components suggesting severe to profound hearing loss in the presence of cochlear microphonic responses and behavioral reactions to sound at mild to moderate hearing levels. The current case and a review of available reports indicate that auditory neuropathy, or neural dys-synchrony, may be another phenotype of the condition, possibly related to abnormal expression of the protein beta-catenin mapped to 5p. Implications include routine and diagnosis-specific assessment of auditory functioning and the employment of non-verbal communication methods in early intervention.

  10. Sound stream segregation: a neuromorphic approach to solve the "cocktail party problem" in real-time.

    Science.gov (United States)

    Thakur, Chetan Singh; Wang, Runchun M; Afshar, Saeed; Hamilton, Tara J; Tapson, Jonathan C; Shamma, Shihab A; van Schaik, André

    2015-01-01

The human auditory system has the ability to segregate complex auditory scenes into a foreground component and a background, allowing us to listen to specific speech sounds from a mixture of sounds. Selective attention plays a crucial role in this process, colloquially known as the "cocktail party effect." It has not been possible to build a machine that can emulate this human ability in real-time. Here, we have developed a framework for the implementation of a neuromorphic sound segregation algorithm in a Field Programmable Gate Array (FPGA). This algorithm is based on the principles of temporal coherence and uses an attention signal to separate a target sound stream from background noise. Temporal coherence implies that auditory features belonging to the same sound source are coherently modulated and evoke highly correlated neural response patterns. The basis for this form of sound segregation is that responses from pairs of channels that are strongly positively correlated belong to the same stream, while channels that are uncorrelated or anti-correlated belong to different streams. In our framework, we have used a neuromorphic cochlea as a frontend sound analyser to extract spatial information of the sound input, which then passes through band pass filters that extract the sound envelope at various modulation rates. Further stages include feature extraction and mask generation, which is finally used to reconstruct the targeted sound. Using sample tonal and speech mixtures, we show that our FPGA architecture is able to segregate sound sources in real-time. The accuracy of segregation is indicated by the high signal-to-noise ratio (SNR) of the segregated stream (90, 77, and 55 dB for simple tone, complex tone, and speech, respectively) as compared to the SNR of the mixture waveform (0 dB). This system may be easily extended for the segregation of complex speech signals, and may thus find various applications in electronic devices such as for sound segregation and
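The grouping rule described in this abstract — channels whose envelopes are strongly positively correlated belong to the attended stream — can be sketched in a few lines of Python. This is an illustration of the principle, not the authors' FPGA implementation; the attended channel index and the correlation threshold are assumptions:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length envelope sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def segregate(envelopes, attended, threshold=0.5):
    """Return indices of channels assigned to the attended stream:
    those whose envelope correlates positively (above threshold)
    with the attended channel's envelope."""
    anchor = envelopes[attended]
    return [i for i, env in enumerate(envelopes)
            if pearson(env, anchor) > threshold]
```

With three channel envelopes where the third is anti-correlated with the first, `segregate(envs, attended=0)` keeps only the two coherently modulated channels.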

  11. Content congruency and its interplay with temporal synchrony modulate integration between rhythmic audiovisual streams

    Directory of Open Access Journals (Sweden)

Yi-Huang Su

    2014-12-01

Full Text Available Both lower-level stimulus factors (e.g., temporal proximity) and higher-level cognitive factors (e.g., content congruency) are known to influence multisensory integration. The former can direct attention in a converging manner, and the latter can indicate whether information from the two modalities belongs together. The present research investigated whether and how these two factors interacted in the perception of rhythmic, audiovisual streams derived from a human movement scenario. Congruency here was based on sensorimotor correspondence pertaining to rhythm perception. Participants attended to bimodal stimuli consisting of a humanlike figure moving regularly to a sequence of auditory beats, and detected a possible auditory temporal deviant. The figure moved either downwards (congruently) or upwards (incongruently) to the downbeat, while in both situations the movement was either synchronous with the beat or lagging behind it. Greater cross-modal binding was expected to hinder deviant detection. Results revealed poorer detection for congruent than for incongruent streams, suggesting stronger integration in the former. False alarms increased in asynchronous stimuli only for congruent streams, indicating a greater tendency to report deviants due to visual capture of asynchronous auditory events. In addition, a greater increase in perceived synchrony was associated with a greater reduction in false alarms for congruent streams, while the pattern was reversed for incongruent ones. These results demonstrate that content congruency as a top-down factor not only promotes integration, but also modulates bottom-up effects of synchrony. Results are also discussed regarding how theories of integration and attentional entrainment may be combined in the context of rhythmic multisensory stimuli.

  12. Functional properties of human auditory cortical fields

    Directory of Open Access Journals (Sweden)

    David L Woods

    2010-12-01

Full Text Available While auditory cortex in non-human primates has been subdivided into multiple functionally-specialized auditory cortical fields (ACFs), the boundaries and functional specialization of human ACFs have not been defined. In the current study, we evaluated whether a widely accepted primate model of auditory cortex could explain regional tuning properties of fMRI activations on the cortical surface to attended and nonattended tones of different frequency, location, and intensity. The limits of auditory cortex were defined by voxels that showed significant activations to nonattended sounds. Three centrally-located fields with mirror-symmetric tonotopic organization were identified and assigned to the three core fields of the primate model, while surrounding activations were assigned to belt fields following procedures similar to those used in macaque fMRI studies. The functional properties of core, medial belt, and lateral belt field groups were then analyzed. Field groups were distinguished by tonotopic organization, frequency selectivity, intensity sensitivity, contralaterality, binaural enhancement, attentional modulation, and hemispheric asymmetry. In general, core fields showed greater sensitivity to sound properties than did belt fields, while belt fields showed greater attentional modulation than core fields. Significant distinctions in intensity sensitivity and contralaterality were seen between adjacent core fields A1 and R, while multiple differences in tuning properties were evident at boundaries between adjacent core and belt fields. The reliable differences in functional properties between fields and field groups suggest that the basic primate pattern of auditory cortex organization is preserved in humans. A comparison of the sizes of functionally-defined ACFs in humans and macaques reveals a significant relative expansion in human lateral belt fields implicated in the processing of speech.

  13. Articulatory movements modulate auditory responses to speech.

    Science.gov (United States)

    Agnew, Z K; McGettigan, C; Banks, B; Scott, S K

    2013-06-01

Production of actions is highly dependent on concurrent sensory information. In speech production, for example, movement of the articulators is guided by both auditory and somatosensory input. It has been demonstrated in non-human primates that self-produced vocalizations and those of others are differentially processed in the temporal cortex. The aim of the current study was to investigate how auditory and motor responses differ for self-produced and externally produced speech. Using functional neuroimaging, subjects were asked to produce sentences aloud, to silently mouth while listening to a different speaker producing the same sentence, to passively listen to sentences being read aloud, or to read sentences silently. We show that separate regions of the superior temporal cortex display distinct response profiles to speaking aloud, mouthing while listening, and passive listening. Responses in anterior superior temporal cortices in both hemispheres are greater for passive listening than for both mouthing while listening and speaking aloud. This is the first demonstration that articulation, whether or not it has auditory consequences, modulates responses of the dorsolateral temporal cortex. In contrast, posterior regions of the superior temporal cortex are recruited during both articulation conditions. In dorsal regions of the posterior superior temporal gyrus, responses to mouthing and reading aloud were equivalent, and in the more ventral posterior superior temporal sulcus, responses were greater for reading aloud than for mouthing while listening. These data demonstrate an anterior-posterior division of superior temporal regions in which anterior fields are suppressed during motor output, potentially for the purpose of enhanced detection of the speech of others. We suggest posterior fields are engaged in auditory processing for the guidance of articulation by auditory information. Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Behaviour of streams in angle and frequency spaces in different potentials

    NARCIS (Netherlands)

    Buist, Hans J. T.; Helmi, Amina

    We have studied the behaviour of stellar streams in the Aquarius fully cosmological N-body simulations of the formation of Milky Way haloes. In particular, we have characterised the streams in angle and frequency spaces derived using an approximate but generally well-fitting spherical potential. We

  15. Characterization of auditory synaptic inputs to gerbil perirhinal cortex

    Directory of Open Access Journals (Sweden)

    Vibhakar C Kotak

    2015-08-01

Full Text Available The representation of acoustic cues involves regions downstream from the auditory cortex (ACx). One such area, the perirhinal cortex (PRh), processes sensory signals containing mnemonic information. Therefore, our goal was to assess whether PRh receives auditory inputs from the auditory thalamus (MG) and ACx in an auditory thalamocortical brain slice preparation, and to characterize these afferent-driven synaptic properties. When the MG or ACx was electrically stimulated, synaptic responses were recorded from PRh neurons. Blockade of GABA-A receptors dramatically increased the amplitude of evoked excitatory potentials. Stimulation of the MG or ACx also evoked calcium transients in most PRh neurons. Separately, when Fluoro-Ruby was injected into the ACx in vivo, anterogradely labeled axons and terminals were observed in the PRh. Collectively, these data show that the PRh integrates auditory information from the MG and ACx, and that auditory-driven inhibition dominates the postsynaptic responses in this non-sensory cortical region downstream from the auditory cortex.

  16. Multisensory Interactions between Auditory and Haptic Object Recognition

    DEFF Research Database (Denmark)

Kassuba, Tanja; Menz, Mareike M; Röder, Brigitte

    2013-01-01

    they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory...... and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG......, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent...

  17. What Can Hierarchies Do for Data Streams?

    DEFF Research Database (Denmark)

    Yin, Xuepeng; Pedersen, Torben Bach

    Much effort has been put into building data streams management systems for querying data streams. Here, data streams have been viewed as a flow of low-level data items, e.g., sensor readings or IP packet data. Stream query languages have mostly been SQL-based, with the STREAM and Telegraph...

  18. We All Stream for Video

    Science.gov (United States)

    Technology & Learning, 2008

    2008-01-01

    More than ever, teachers are using digital video to enhance their lessons. In fact, the number of schools using video streaming increased from 30 percent to 45 percent between 2004 and 2006, according to Market Data Retrieval. Why the popularity? For starters, video-streaming products are easy to use. They allow teachers to punctuate lessons with…

  19. Save Our Streams and Waterways.

    Science.gov (United States)

    Indiana State Dept. of Education, Indianapolis. Center for School Improvement and Performance.

Protection of existing water supplies is critical to ensuring good health for people and animals alike. This program is aligned with the Izaak Walton League of America's Save Our Streams program, which is based on the concept that students can greatly improve the quality of a nearby stream, pond, or river by regular visits and monitoring. The…

  20. Pilot-Streaming: Design Considerations for a Stream Processing Framework for High-Performance Computing

    OpenAIRE

    Andre Luckow; Peter Kasson; Shantenu Jha

    2016-01-01

    This White Paper (submitted to STREAM 2016) identifies an approach to integrate streaming data with HPC resources. The paper outlines the design of Pilot-Streaming, which extends the concept of Pilot-abstraction to streaming real-time data.

  1. STREAM: A First Programming Process

    DEFF Research Database (Denmark)

    Caspersen, Michael Edelgaard; Kölling, Michael

    2009-01-01

    to derive a programming process, STREAM, designed specifically for novices. STREAM is a carefully down-scaled version of a full and rich agile software engineering process particularly suited for novices learning object-oriented programming. In using it we hope to achieve two things: to help novice...... programmers learn faster and better while at the same time laying the foundation for a more thorough treatment of more advanced aspects of software engineering. In this article, two examples demonstrate the application of STREAM. The STREAM process has been taught in the introductory programming courses...... at our universities for the past three years and the results are very encouraging. We report on a small, preliminary study evaluating the learning outcome of teaching STREAM. The study indicates a positive effect on the development of students’ process competences....

  2. Auditory Working Memory Load Impairs Visual Ventral Stream Processing: Toward a Unified Model of Attentional Load

    Science.gov (United States)

    Klemen, Jane; Buchel, Christian; Buhler, Mira; Menz, Mareike M.; Rose, Michael

    2010-01-01

    Attentional interference between tasks performed in parallel is known to have strong and often undesired effects. As yet, however, the mechanisms by which interference operates remain elusive. A better knowledge of these processes may facilitate our understanding of the effects of attention on human performance and the debilitating consequences…

  3. The Effect of Tactile Cues on Auditory Stream Segregation Ability of Musicians and Nonmusicians

    DEFF Research Database (Denmark)

    Slater, Kyle D.; Marozeau, Jeremy

    2016-01-01

    , we test whether tactile cues can be used to segregate 2 interleaved melodies. Twelve musicians and 12 nonmusicians were asked to detect changes in a 4-note repeated melody interleaved with a random melody. In order to perform this task, the listener must be able to segregate the target melody from...... the random melody. Tactile cues were applied to the listener’s fingers on half of the blocks. Results showed that tactile cues can significantly improve the melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance...

  4. Stream-profile analysis and stream-gradient index

    Science.gov (United States)

    Hack, John T.

    1973-01-01

The generally regular three-dimensional geometry of drainage networks is the basis for a simple method of terrain analysis providing clues to bedrock conditions and other factors that determine topographic forms. On a reach of any stream, a gradient-index value can be obtained which allows meaningful comparisons of channel slope on streams of different sizes. The index is believed to reflect stream power or competence and is simply the product of the channel slope at a point and the channel length measured along the longest stream above the point where the calculation is made. In an adjusted topography, changes in gradient-index values along a stream generally correspond to differences in bedrock or introduced load. In any landscape the gradient index of a stream is related to total relief and stream regimen. Thus, climate, tectonic events, and geomorphic history must be considered in using the gradient index. Gradient-index values can be obtained quickly by simple measurements on topographic maps, or they can be obtained by more sophisticated photogrammetric measurements that involve simple computer calculations from x, y, z coordinates.
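The gradient-index computation described in this abstract — channel slope at a point times channel length above that point — is simple enough to sketch directly. A minimal illustration in Python (the parameter names and the example figures are assumptions, not values from the paper):

```python
def gradient_index(elev_drop_m, reach_length_m, channel_length_m):
    """Stream-gradient (SL) index: the slope of a reach
    (elevation drop over reach length) multiplied by the channel
    length measured along the longest stream above the point."""
    slope = elev_drop_m / reach_length_m  # dimensionless gradient
    return slope * channel_length_m       # index in meters

# A reach dropping 10 m over 1000 m, located 5000 m down the
# longest channel, yields an SL index of 50.
sl = gradient_index(10.0, 1000.0, 5000.0)
```

Comparing SL values along a profile then flags reaches whose slope is anomalously steep or gentle for their position in the network, hinting at bedrock changes or tectonic effects.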

  5. Bifurcation of learning and structure formation in neuronal maps

    DEFF Research Database (Denmark)

    Marschler, Christian; Faust-Ellsässer, Carmen; Starke, Jens

    2014-01-01

    to map formation in the laminar nucleus of the barn owl's auditory system. Using equation-free methods, we perform a bifurcation analysis of spatio-temporal structure formation in the associated synaptic-weight matrix. This enables us to analyze learning as a bifurcation process and follow the unstable...

  6. Influence of Gully Erosion Control on Amphibian and Reptile Communities within Riparian Zones of Channelized Streams

    Science.gov (United States)

    Riparian zones of streams in northwestern Mississippi have been impacted by agriculture, channelization, channel incision, and gully erosion. Riparian gully formation has resulted in the fragmentation of remnant riparian zones within agricultural watersheds. One widely used conservation practice for...

  7. The relation between working memory capacity and auditory lateralization in children with auditory processing disorders.

    Science.gov (United States)

Moossavi, Abdollah; Mehrkian, Saiedeh; Lotfi, Yones; Faghihzadeh, Soghrat; Sajedi, Hamed

    2014-11-01

Auditory processing disorder (APD) describes a complex and heterogeneous disorder characterized by poor speech perception, especially in noisy environments. APD may be responsible for a range of sensory processing deficits associated with learning difficulties. There is no general consensus about the nature of APD or how the disorder should be assessed and managed. This study assessed the effect of cognitive abilities (working memory capacity) on sound lateralization in children with auditory processing disorders, in order to determine how "auditory cognition" interacts with APD. The participants in this cross-sectional comparative study were 20 typically developing children and 17 children with a diagnosed auditory processing disorder (9-11 years old). Sound lateralization abilities were investigated using inter-aural time differences (ITDs) and inter-aural intensity differences (IIDs) with two stimuli (high-pass and low-pass noise) in nine perceived positions. Working memory capacity was evaluated using non-word repetition and forward and backward digit span tasks. Linear regression was employed to measure the degree of association between working memory capacity and lateralization performance in the two groups. Children in the APD group had consistently lower scores than typically developing subjects on lateralization and working memory capacity measures. The results showed that working memory capacity had a significantly negative correlation with ITD errors, especially with the high-pass noise stimulus, but not with IID errors in APD children. The study highlights the impact of working memory capacity on auditory lateralization. The findings indicate that the extent to which working memory influences auditory processing depends on the type of auditory processing and the nature of the stimulus/listening situation. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
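The ITD cue probed in this lateralization study is conventionally estimated as the lag that maximizes the cross-correlation between the two ear signals. A minimal sketch of that estimator (Python; an illustration of the cue itself, not the test battery used in the study):

```python
def estimate_itd(left, right, fs, max_lag):
    """Estimate the interaural time difference in seconds as the lag
    (in samples) that maximizes the cross-correlation of the left
    and right ear signals; a positive result means the right ear
    signal lags the left by that much."""
    def corr(k):
        # correlation of left with right shifted by k samples
        return sum(left[i] * right[i + k]
                   for i in range(len(left))
                   if 0 <= i + k < len(right))
    best = max(range(-max_lag, max_lag + 1), key=corr)
    return best / fs

# A click at sample 2 on the left and sample 4 on the right
# corresponds to a 2-sample (0.25 ms at 8 kHz) interaural delay.
left = [0.0] * 8;  left[2] = 1.0
right = [0.0] * 8; right[4] = 1.0
itd = estimate_itd(left, right, fs=8000, max_lag=3)
```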

  8. Stochastic undersampling steepens auditory threshold/duration functions: Implications for understanding auditory deafferentation and aging

    Directory of Open Access Journals (Sweden)

    Frederic eMarmel

    2015-05-01

    Full Text Available It has long been known that some listeners experience hearing difficulties out of proportion with their audiometric losses. Notably, some older adults as well as auditory neuropathy patients have temporal-processing and speech-in-noise intelligibility deficits not accountable for by elevated audiometric thresholds. The study of these hearing deficits has been revitalized by recent studies showing that auditory deafferentation comes with aging and can occur even in the absence of an audiometric loss. The present study builds on the stochastic undersampling principle proposed by Lopez-Poveda and Barrios (2013) to account for the perceptual effects of auditory deafferentation. Auditory threshold/duration functions were measured for broadband noises that were stochastically undersampled to various degrees. Stimuli with and without undersampling were equated for overall energy in order to focus on the changes that undersampling elicited in the stimulus waveforms, and not on its effects on the overall stimulus energy. Stochastic undersampling impaired the detection of short sounds, while the detection of longer sounds (≥50 ms) did not change or improved, depending on the degree of undersampling. The results for short sounds show that stochastic undersampling, and hence presumably deafferentation, can account for the steeper threshold/duration functions observed in auditory neuropathy patients and older adults with (near-)normal audiometry. This suggests that deafferentation might be diagnosed using pure-tone audiometry with short tones. It further suggests that the auditory system of audiometrically normal older listeners might not be ‘slower than normal’, as is commonly thought, but simply less well afferented. Finally, the results for both short and long sounds support the probabilistic theories of detectability that challenge the idea that auditory threshold occurs by integration of sound energy over time.
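
    The stochastic-undersampling manipulation (randomly dropping waveform samples, then equating overall energy) can be sketched as follows. The keep-probability and the noise stand-in are illustrative assumptions, not the study's parameters:

```python
import random

def stochastic_undersample(x, keep_prob, seed=0):
    """Randomly zero samples of x with probability (1 - keep_prob),
    then rescale so overall energy matches the original waveform."""
    rng = random.Random(seed)
    y = [xi if rng.random() < keep_prob else 0.0 for xi in x]
    e_x = sum(xi * xi for xi in x)
    e_y = sum(yi * yi for yi in y)
    if e_y > 0:
        g = (e_x / e_y) ** 0.5  # gain that equates overall energy
        y = [g * yi for yi in y]
    return y

# Broadband-noise stand-in: uniform white noise
rng = random.Random(1)
noise = [rng.uniform(-1, 1) for _ in range(1000)]
sparse = stochastic_undersample(noise, keep_prob=0.2)

energy = lambda s: sum(v * v for v in s)
print(round(energy(sparse) / energy(noise), 3))  # ~1.0 after equating energy
```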

  9. A computer-based auditory sequential pattern test for school-aged children.

    Science.gov (United States)

    Rickard, Natalie A; Smales, Caroline J; Rickard, Kurt L

    2013-05-01

    One type of test commonly used to assess auditory processing disorders (APD) is the Frequency Pattern Test, in which triads of pure tones of two different frequencies are presented, and participants are required to accurately report the sequence of tones, typically using a verbal response. The test is widely used clinically, but in its current format, is an under-exploited means of addressing some candidate processes, such as temporal ordering and frequency discrimination, which might be affected in APD. Here we describe a computer-based version of an auditory pattern perception test, the BirdSong Game, which was designed to be an engaging research tool for use with school-aged children. In this study, 128 children aged 6-10 with normal peripheral hearing were tested. The BirdSong Game application was used to administer auditory sequential pattern tests, via a touch-screen presentation and response interface. A conditioning step was included prior to testing, in order to ensure that participants were able to adequately discriminate between the test tones, and reliably describe the difference using their own vocabulary. Responses were collected either verbally or manually, by having participants press cartoon images on the touch-screen in the appropriate sequence. The data was examined for age, gender and response mode differences. Findings on the auditory tests indicated a significant maturational effect across the age range studied, with no difference between response modes or gender. The BirdSong Game is sensitive to maturational changes in auditory sequencing ability, and the computer-based design of the test has several advantages which make it a potentially useful clinical and research tool. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  10. Major Kansas Perennial Streams : 1961 and 2009

    Data.gov (United States)

    US Fish and Wildlife Service, Department of the Interior — Map of major perennial streams in Kansas for the years 1961 and 2009. The map shows a decrease in streams regarded as perennial in 1961, compared to stream regarded...

  11. Stream Habitat Reach Summary - NCWAP [ds158

    Data.gov (United States)

    California Department of Resources — The Stream Habitat - NCWAP - Reach Summary [ds158] shapefile contains in-stream habitat survey data summarized to the stream reach level. It is a derivative of the...

  12. Electronic Eye: Streaming Video On-Demand.

    Science.gov (United States)

    Meulen, Kathleen

    2002-01-01

    Discusses the use of on-demand streaming video in school libraries. Explains how streaming works, considers advantages and technical issues, and describes products from three companies that are pioneering streaming in the educational video market. (LRW)

  13. Percent Forest Adjacent to Streams (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  14. Percent Agriculture Adjacent to Streams (Future)

    Data.gov (United States)

    U.S. Environmental Protection Agency — The type of vegetation along a stream influences the water quality in the stream. Intact buffer strips of natural vegetation along streams tend to intercept...

  15. Streaming Velocities and the Baryon Acoustic Oscillation Scale.

    Science.gov (United States)

    Blazek, Jonathan A; McEwen, Joseph E; Hirata, Christopher M

    2016-03-25

    At the epoch of decoupling, cosmic baryons had supersonic velocities relative to the dark matter that were coherent on large scales. These velocities subsequently slow the growth of small-scale structure and, via feedback processes, can influence the formation of larger galaxies. We examine the effect of streaming velocities on the galaxy correlation function, including all leading-order contributions for the first time. We find that the impact on the baryon acoustic oscillation (BAO) peak is dramatically enhanced (by a factor of ∼5) over the results of previous investigations, with the primary new effect due to advection: if a galaxy retains memory of the primordial streaming velocity, it does so at its Lagrangian, rather than Eulerian, position. Since correlations in the streaming velocity change rapidly at the BAO scale, this advection term can cause a significant shift in the observed BAO position. If streaming velocities impact tracer density at the 1% level, compared to the linear bias, the recovered BAO scale is shifted by approximately 0.5%. This new effect, which is required to preserve Galilean invariance, greatly increases the importance of including streaming velocities in the analysis of upcoming BAO measurements and opens a new window to the astrophysics of galaxy formation.

  16. How functional coupling between the auditory cortex and the amygdala induces musical emotion: a single case study.

    Science.gov (United States)

    Liégeois-Chauvel, Catherine; Bénar, Christian; Krieg, Julien; Delbé, Charles; Chauvel, Patrick; Giusiano, Bernard; Bigand, Emmanuel

    2014-11-01

    Music is a sound structure of remarkable acoustical and temporal complexity. Although it cannot denote specific meaning, it is one of the most potent and universal stimuli for inducing mood. How the auditory and limbic systems interact, and whether this interaction is lateralized when feeling emotions related to music, remains unclear. We studied the functional correlation between the auditory cortex (AC) and amygdala (AMY) through intracerebral recordings from both hemispheres in a single patient while she listened attentively to musical excerpts, which we compared to passive listening of a sequence of pure tones. While the left primary and secondary auditory cortices (PAC and SAC) showed larger increases in gamma-band responses than the right side, only the right side showed emotion-modulated gamma oscillatory activity. An intra- and inter-hemisphere correlation was observed between the auditory areas and AMY during the delivery of a sequence of pure tones. In contrast, a strikingly right-lateralized functional network between the AC and the AMY was observed to be related to the musical excerpts the patient experienced as happy, sad and peaceful. Interestingly, excerpts experienced as angry, which the patient disliked, were associated with widespread de-correlation between all the structures. These results suggest that the right auditory-limbic interactions result from the formation of oscillatory networks that bind the activities of the network nodes into coherence patterns, resulting in the emergence of a feeling. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Context-dependent modulation of auditory processing by serotonin

    Science.gov (United States)

    Hurley, L.M.; Hall, I.C.

    2011-01-01

    Context-dependent plasticity in auditory processing is achieved in part by physiological mechanisms that link behavioral state to neural responses to sound. The neuromodulator serotonin has many characteristics suitable for such a role. Serotonergic neurons are extrinsic to the auditory system but send projections to most auditory regions. These projections release serotonin during particular behavioral contexts. Heightened levels of behavioral arousal and specific extrinsic events, including stressful or social events, increase serotonin availability in the auditory system. Although the release of serotonin is likely to be relatively diffuse, highly specific effects of serotonin on auditory neural circuitry are achieved through the localization of serotonergic projections, and through a large array of receptor types that are expressed by specific subsets of auditory neurons. Through this array, serotonin enacts plasticity in auditory processing in multiple ways. Serotonin changes the responses of auditory neurons to input through the alteration of intrinsic and synaptic properties, and alters both short- and long-term forms of plasticity. The infrastructure of the serotonergic system itself is also plastic, responding to age and cochlear trauma. These diverse findings support a view of serotonin as a widespread mechanism for behaviorally relevant plasticity in the regulation of auditory processing. This view also accommodates models of how the same regulatory mechanism can have pathological consequences for auditory processing. PMID:21187135

  18. A Survey of auditory display in image-guided interventions.

    Science.gov (United States)

    Black, David; Hansen, Christian; Nabavi, Arya; Kikinis, Ron; Hahn, Horst

    2017-03-08

    This article investigates the current state of the art of the use of auditory display in image-guided medical interventions. Auditory display is a means of conveying information using sound, and we review the use of this approach to support navigated interventions. We discuss the benefits and drawbacks of published systems and outline directions for future investigation. We undertook a review of scientific articles on the topic of auditory rendering in image-guided intervention. This includes methods for avoidance of risk structures and instrument placement and manipulation. The review did not include auditory display for status monitoring, for instance in anesthesia. We identified 15 publications in the course of the search. Most of the literature (60%) investigates the use of auditory display to convey distance of a tracked instrument to an object using proximity or safety margins. The remainder discuss continuous guidance for navigated instrument placement. Four of the articles present clinical evaluations, 11 present laboratory evaluations, and 3 present informal evaluation (2 present both laboratory and clinical evaluations). Auditory display is a growing field that has been largely neglected in research in image-guided intervention. Despite benefits of auditory displays reported in both the reviewed literature and non-medical fields, adoption in medicine has been slow. Future challenges include increasing interdisciplinary cooperation with auditory display investigators to develop more meaningful auditory display designs and comprehensive evaluations which target the benefits and drawbacks of auditory display in image guidance.

  19. Streaming patterns in Faraday waves

    CERN Document Server

    Périnet, Nicolas; Urra, Héctor; Mujica, Nicolás; Gordillo, Leonardo

    2016-01-01

    Wave patterns in the Faraday instability have been studied for decades. Besides the rich dynamics that can be observed in the waves at the interface, Faraday waves hide beneath them an elusive range of flow patterns --or streaming patterns-- which have not been studied in detail until now. The streaming patterns are responsible for a net circulation in the flow that is reminiscent of convection cells. In this article, we analyse these streaming flows by conducting experiments in a Faraday-wave setup. To visualize the flows, tracers are used both to generate trajectory maps and to probe the streaming velocity field via Particle Image Velocimetry (PIV). We identify three types of patterns and experimentally show that identical Faraday waves can mask streaming patterns that are qualitatively very different. Next we propose a three-dimensional model that explains streaming flows in quasi-inviscid fluids. We show that the streaming inside the fluid arises from a complex coupling between the bulk and the boundar...

  20. Spring 5 & reactive streams

    CERN Document Server

    CERN. Geneva; Clozel, Brian

    2017-01-01

    Spring is a framework widely used by the world-wide Java community, and it is also extensively used at CERN. The accelerator control system is constituted of 10 million lines of Java code, spread across more than 1000 projects (jars) developed by 160 software engineers. Around half of this (all server-side Java code) is based on the Spring framework. Warning: the speakers will assume that people attending the seminar are familiar with Java and Spring’s basic concepts. Spring 5.0 and Spring Boot 2.0 updates (45 min) This talk will cover the big ticket items in the 5.0 release of Spring (including Kotlin support, @Nullable and JDK9) and provide an update on Spring Boot 2.0, which is scheduled for the end of the year. Reactive Spring (1h) Spring Framework 5.0 has been released - and it now supports reactive applications in the Spring ecosystem. During this presentation, we'll talk about the reactive foundations of Spring Framework with the Reactor project and the reactive streams specification. We'll al...

  1. Evolution of a stream ecosystem in recently deglaciated terrain.

    Science.gov (United States)

    Milner, Alexander M; Robertson, Anne L; Brown, Lee E; Sønderland, Svein Harald; McDermott, Michael; Veal, Amanda J

    2011-10-01

    Climate change and associated glacial recession create new stream habitat that leads to the assembly of new riverine communities through primary succession. However, there are still very few studies of the patterns and processes of community assembly during primary succession for stream ecosystems. We illustrate the rapidity with which biotic communities can colonize and establish in recently formed streams by examining Stonefly Creek in Glacier Bay, Alaska (USA), which began to emerge from a remnant glacial ice mass between 1976 and 1979. By 2002, 57 macroinvertebrate and 27 microcrustacea species had become established. Within 10 years of the stream's formation, pink salmon and Dolly Varden charr colonized, followed by other fish species, including juvenile red and silver salmon, Coast Range sculpin, and sticklebacks. Stable-isotope analyses indicate that marine-derived nitrogen from the decay of salmon carcasses was substantially assimilated within the aquatic food web by 2004. The findings from Stonefly Creek are compared with those from a long-term study of a similarly formed but older stream (12 km to the northeast) to examine possible similarities in macroinvertebrate community and biological trait composition between streams at similar stages of development. Macroinvertebrate community assembly appears to have been initially strongly deterministic owing to low water temperature associated with remnant ice masses. In contrast, microcrustacean community assembly appears to have been more stochastic. However, as stream age and water temperature increased, macroinvertebrate colonization was also more stochastic, and taxonomic similarity between Stonefly Creek and a stream at the same stage of development was <50%. However the most abundant taxa were similar, and functional diversity of the two communities was almost identical. Tolerance is suggested as the major mechanism of community assembly. 
The rapidity with which salmonids and invertebrate communities have

  2. Shaping the aging brain: Role of auditory input patterns in the emergence of auditory cortical impairments

    Directory of Open Access Journals (Sweden)

    Brishna Soraya Kamal

    2013-09-01

    Full Text Available Age-related impairments in the primary auditory cortex (A1) include poor tuning selectivity, neural desynchronization and degraded responses to low-probability sounds. These changes have been largely attributed to reduced inhibition in the aged brain, and are thought to contribute to substantial hearing impairment in both humans and animals. Since many of these changes can be partially reversed with auditory training, it has been speculated that they might not be purely degenerative, but might rather represent negative plastic adjustments to noisy or distorted auditory signals reaching the brain. To test this hypothesis, we examined the impact of exposing young adult rats to 8 weeks of low-grade broadband noise on several aspects of A1 function and structure. We then characterized the same A1 elements in aging rats for comparison. We found that the impact of noise exposure on A1 tuning selectivity, temporal processing of auditory signals and responses to oddball tones was almost indistinguishable from the effect of natural aging. Moreover, noise exposure resulted in a reduction in the population of parvalbumin inhibitory interneurons and cortical myelin, as previously documented in the aged group. Most of these changes reversed after returning the rats to a quiet environment. These results support the hypothesis that age-related changes in A1 have a strong activity-dependent component and indicate that the presence or absence of clear auditory input patterns might be a key factor in sustaining adult A1 function.

  3. Longitudinal auditory learning facilitates auditory cognition as revealed by microstate analysis.

    Science.gov (United States)

    Giroud, Nathalie; Lemke, Ulrike; Reich, Philip; Matthes, Katarina L; Meyer, Martin

    2017-02-01

    The current study investigates cognitive processes as reflected in late auditory-evoked potentials as a function of longitudinal auditory learning. A normal-hearing adult sample (n=15) performed an active oddball task at three consecutive time points (TPs) arranged at two-week intervals, during which EEG was recorded. The stimuli comprised syllables consisting of a natural fricative (/sh/, /s/, /f/) embedded between two /a/ sounds, as well as morphed transitions of the two syllables that served as deviants. Perceptual and cognitive modulations, as reflected in the onset and the mean global field power (GFP) of N2b- and P3b-related microstates across four weeks, were investigated. We found that the onset of P3b-like microstates, but not N2b-like microstates, decreased across TPs, more strongly for difficult deviants, leading to similar onsets for difficult and easy stimuli after repeated exposure. The mean GFP of all N2b-like and P3b-like microstates increased more for spectrally strong deviants than for weak deviants, leading to a distinctive activation for each stimulus after learning. Our results indicate that longitudinal training of auditory-related cognitive mechanisms such as stimulus categorization, attention and memory updating is an indispensable part of successful auditory learning. This suggests that future studies should focus on the potential benefits of cognitive processes in auditory training. Copyright © 2016 Elsevier B.V. All rights reserved.
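
    Global field power (GFP), the measure tracked above, is the spatial standard deviation of the voltage across all electrodes at a given time point. A minimal sketch with hypothetical electrode values:

```python
import math

def gfp(voltages):
    """Global field power at one time point: the spatial standard
    deviation of the voltage across all electrodes."""
    n = len(voltages)
    mean_v = sum(voltages) / n
    return math.sqrt(sum((v - mean_v) ** 2 for v in voltages) / n)

# Hypothetical 5-electrode sample (microvolts); real EEG uses many more channels
sample = [2.0, -1.0, 0.5, -0.5, -1.0]
print(round(gfp(sample), 3))
```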

  4. Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex

    Science.gov (United States)

    Kostopoulos, Penelope; Petrides, Michael

    2016-01-01

    There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top–down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience. PMID:26831102

  5. Pure word deafness. (Auditory verbal agnosia).

    Science.gov (United States)

    Shoumaker, R D; Ajax, E T; Schenkenberg, T

    1977-04-01

    The selective inability to comprehend the spoken word, in the absence of aphasia or defective hearing, is defined as pure word deafness (auditory verbal agnosia). Reported cases of this rare disorder have suggested the site of involvement to be strategically placed, interrupting fibers from the left and right primary auditory receptive areas that project to Wernicke's area in the dominant hemisphere. Our patient is a 44-year-old male who suffered an uncertain illness complicated by fever, jaundice and generalized seizures seven years previously. Following an apparent convulsion, the patient was noted to be unable to understand spoken language, without loss of the ability to recognize and respond to sounds or marked impairment of speech or reading. The evidence suggested bilateral cerebral hemisphere disease, more marked on the right. The abrupt onset without progression is consistent with a vascular or ischemic etiology. Conclusions about the nature of the lesion and areas involved must await further studies and ultimately tissue examination.

  6. Simulating Auditory Hallucinations in a Video Game

    DEFF Research Database (Denmark)

    Weinel, Jonathan; Cunningham, Stuart

    2017-01-01

    In previous work the authors have proposed the concept of 'ASC Simulations': including audio-visual installations and experiences, as well as interactive video game systems, which simulate altered states of consciousness (ASCs) such as dreams and hallucinations. Building on the discussion...... of the authors' previous paper, where a large-scale qualitative study explored the changes to auditory perception that users of various intoxicating substances report, here the authors present three prototype audio mechanisms for simulating hallucinations in a video game. These were designed in the Unity video...... game engine as an early proof-of-concept. The first mechanism simulates 'selective auditory attention' to different sound sources, by attenuating the amplitude of unattended sources. The second simulates 'enhanced sounds', by adjusting perceived brightness through filtering. The third simulates...

  7. Implicit temporal expectation attenuates auditory attentional blink.

    Directory of Open Access Journals (Sweden)

    Dawei Shen

    Full Text Available Attentional blink (AB) describes a phenomenon whereby correct identification of a first target impairs the processing of a second target (i.e., probe) nearby in time. Evidence suggests that explicit attention orienting in the time domain can attenuate the AB. Here, we used scalp-recorded event-related potentials to examine whether auditory AB is also sensitive to implicit temporal attention orienting. Expectations were set up implicitly by varying the probability (i.e., 80% or 20%) that the probe would occur at the +2 or +8 position following target presentation. Participants showed a significant AB, which was reduced with increased probe probability at the +2 position. The probe probability effect was paralleled by an increase in P3b amplitude elicited by the probe. The results suggest that implicit temporal attention orienting can facilitate short-term consolidation of the probe and attenuate auditory AB.

  8. An Auditory Model with Hearing Loss

    DEFF Research Database (Denmark)

    Nielsen, Lars Bramsløw

    An auditory model based on the psychophysics of hearing has been developed and tested. The model simulates the normal ear or an impaired ear with a given hearing loss. Based on reviews of the current literature, the frequency selectivity and loudness growth as functions of threshold and stimulus...... level have been found and implemented in the model. The auditory model was verified against selected results from the literature, and it was confirmed that the normal spread of masking and loudness growth could be simulated in the model. The effects of hearing loss on these parameters was also...... in qualitative agreement with recent findings. The temporal properties of the ear have currently not been included in the model. As an example of a real-world application of the model, loudness spectrograms for a speech utterance were presented. By introducing hearing loss, the speech sounds became less audible...

  9. Binaural processing by the gecko auditory periphery

    DEFF Research Database (Denmark)

    Christensen-Dalsgaard, Jakob; Tang, Ye Zhong; Carr, Catherine E

    2011-01-01

    in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (around 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted...... from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al., 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions......Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion...

  10. Auditory environmental context affects visual distance perception.

    Science.gov (United States)

    Etchemendy, Pablo E; Abregú, Ezequiel; Calcagno, Esteban R; Eguia, Manuel C; Vechiatti, Nilda; Iasi, Federico; Vergara, Ramiro O

    2017-08-03

    In this article, we show that visual distance perception (VDP) is influenced by the auditory environmental context through reverberation-related cues. We performed two VDP experiments in two dark rooms with extremely different reverberation times: an anechoic chamber and a reverberant room. Subjects assigned to the reverberant room perceived the targets farther than subjects assigned to the anechoic chamber. Also, we found a positive correlation between the maximum perceived distance and the auditorily perceived room size. We next performed a second experiment in which the same subjects of Experiment 1 were interchanged between rooms. We found that subjects preserved the responses from the previous experiment provided they were compatible with the present perception of the environment; if not, perceived distance was biased towards the auditorily perceived boundaries of the room. Results of both experiments show that the auditory environment can influence VDP, presumably through reverberation cues related to the perception of room size.

  11. Anatomy and Physiology of the Auditory Tracts

    Directory of Open Access Journals (Sweden)

    Mohammad hosein Hekmat Ara

    1999-03-01

    Full Text Available Hearing is one of the excel sense of human being. Sound waves travel through the medium of air and enter the ear canal and then hit the tympanic membrane. Middle ear transfer almost 60-80% of this mechanical energy to the inner ear by means of “impedance matching”. Then, the sound energy changes to traveling wave and is transferred based on its specific frequency and stimulates organ of corti. Receptors in this organ and their synapses transform mechanical waves to the neural waves and transfer them to the brain. The central nervous system tract of conducting the auditory signals in the auditory cortex will be explained here briefly.

  12. Knowledge discovery from data streams

    CERN Document Server

    Gama, Joao

    2010-01-01

    Since the beginning of the Internet age and the increased use of ubiquitous computing devices, the large volume and continuous flow of distributed data have imposed new constraints on the design of learning algorithms. Exploring how to extract knowledge structures from evolving and time-changing data, Knowledge Discovery from Data Streams presents a coherent overview of state-of-the-art research in learning from data streams. The book covers the fundamentals that are imperative to understanding data streams and describes important applications, such as TCP/IP traffic, GPS data, sensor networks,
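
    A core constraint in learning from data streams is single-pass, constant-memory processing. Welford's running mean/variance is a minimal illustrative example of this style of algorithm (an illustration of the field, not an example from the book):

```python
class StreamingStats:
    """One-pass (Welford) running mean/variance: each element is seen once
    and memory use is constant, a core constraint in data-stream learning."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    @property
    def variance(self):
        """Population variance of the elements seen so far."""
        return self.m2 / self.n if self.n else 0.0

s = StreamingStats()
for x in [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]:
    s.update(x)
print(s.mean, s.variance)  # mean ~5.0, population variance ~4.0
```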

  13. Physical fitness modulates incidental but not intentional statistical learning of simultaneous auditory sequences during concurrent physical exercise.

    Science.gov (United States)

    Daikoku, Tatsuya; Takahashi, Yuji; Futagami, Hiroko; Tarumoto, Nagayoshi; Yasuda, Hideki

    2017-02-01

    In real-world auditory environments, humans are exposed to overlapping auditory information, such as that produced by human voices and musical instruments, even during routine physical activities such as walking and cycling. The present study investigated how concurrent physical exercise affects incidental and intentional learning of overlapping auditory streams, and whether physical fitness modulates learning performance. Participants were divided into two groups of 11 with lower and higher fitness, based on their VO2max values. They were presented with simultaneous auditory sequences, each with a distinct statistical regularity (i.e. statistical learning), while pedaling on a bike or sitting on the bike at rest. In Experiment 1, they were instructed to attend to one of the two sequences and ignore the other. In Experiment 2, they were instructed to attend to both sequences. After exposure to the sequences, learning effects were evaluated with a familiarity test. In Experiment 1, statistical learning of the ignored sequences during concurrent pedaling was better in participants with high than with low physical fitness, whereas for the attended sequence there was no significant difference in learning performance between high- and low-fitness participants. Furthermore, there was no significant effect of physical fitness on learning while resting. In Experiment 2, participants with both high and low physical fitness could perform intentional statistical learning of the two simultaneous sequences in both the exercise and rest sessions. Improved physical fitness might thus facilitate incidental, but not intentional, statistical learning of simultaneous auditory sequences during concurrent physical exercise.
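
    Sequences with a "distinct statistical regularity" of the kind used in auditory statistical-learning studies can be sketched as first-order Markov chains over tone labels. The transition probabilities below are hypothetical, not the study's materials:

```python
import random

def generate_stream(transitions, start, length, rng):
    """Generate a tone-label sequence whose regularity is a first-order
    Markov chain: the next tone depends only on the current one."""
    seq = [start]
    for _ in range(length - 1):
        choices, probs = zip(*transitions[seq[-1]].items())
        seq.append(rng.choices(choices, weights=probs, k=1)[0])
    return seq

# One of two interleavable streams, each with its own transition table
stream_a = {
    "A": {"B": 0.9, "C": 0.1},
    "B": {"C": 0.9, "A": 0.1},
    "C": {"A": 0.9, "B": 0.1},
}
rng = random.Random(0)
tones = generate_stream(stream_a, "A", 12, rng)
print("".join(tones))
```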

  14. [Non-auditory effects of noise].

    Science.gov (United States)

    Albera, Roberto; Bin, Ilaria; Cena, Manuele; Dagna, Federico; Giordano, Pamela; Sammartano, Azia

    2011-01-01

Non-auditory effects of noise involve several systems and functions, most importantly the cardiovascular, vestibular and psychic. Although several studies have correlated noise exposure with pathologies such as hypertension and anxiety disorders, and recent analyses carried out on guinea pigs have explained part of their pathophysiology, their multiple causes and the variability of individual reactions remain important obstacles to their classification.

  15. Sonic morphology: Aesthetic dimensional auditory spatial awareness

    Science.gov (United States)

    Whitehouse, Martha M.

The sound and ceramic sculpture installation, "Skirting the Edge: Experiences in Sound & Form," is an integration of art and science demonstrating the concept of sonic morphology. "Sonic morphology" is herein defined as aesthetic three-dimensional auditory spatial awareness. The exhibition explicates my empirical phenomenal observations that sound has a three-dimensional form. Composed of ceramic sculptures that allude to different social and physical situations, coupled with sound compositions that enhance and create a three-dimensional auditory and visual aesthetic experience (see accompanying DVD), the exhibition supports the research question, "What is the relationship between sound and form?" Precisely how people aurally experience three-dimensional space involves an integration of spatial properties, auditory perception, individual history, and cultural mores. People also utilize environmental sound events as a guide in social situations and in remembering their personal history, as well as a guide in moving through space. Aesthetically, sound affects the fascination, meaning, and attention one has within a particular space. Sonic morphology brings art forms such as a movie, video, sound composition, and musical performance into the cognitive scope by generating meaning from the link between the visual and auditory senses. This research examined sonic morphology as an extension of musique concrète, sound as object, originating in Pierre Schaeffer's work in the 1940s. Pointing, as John Cage did, to the corporeal three-dimensional experience of "all sound," I composed works that took their total form only through the perceiver-participant's participation in the exhibition. While contemporary artist Alvin Lucier creates artworks that draw attention to making sound visible, "Skirting the Edge" engages the perceiver-participant visually and aurally, leading to recognition of sonic morphology.

  16. Lesions in the external auditory canal

    Directory of Open Access Journals (Sweden)

    Priyank S Chatra

    2011-01-01

Full Text Available The external auditory canal (EAC) is an S-shaped osseo-cartilaginous structure that extends from the auricle to the tympanic membrane. Congenital, inflammatory, neoplastic, and traumatic lesions can affect the EAC. High-resolution CT is well suited for the evaluation of the temporal bone, which has a complex anatomy with multiple small structures. In this study, we describe the various lesions affecting the EAC.

  17. Evidence of Reliability and Validity for a Children’s Auditory Continuous Performance Test

    Directory of Open Access Journals (Sweden)

    Michael J. Lasee

    2013-11-01

Full Text Available Continuous Performance Tests (CPTs) are commonly utilized clinical measures of attention and response inhibition. While there have been many studies of CPTs that utilize a visual format, there is considerably less research employing auditory CPTs. The current study provides initial reliability and validity evidence for the Auditory Vigilance Screening Measure (AVSM), a newly developed CPT. Participants included 105 five- to nine-year-old children selected from two rural Midwestern school districts. Reliability data for the AVSM was collected through retesting of 42 participants. Validity was evaluated through correlation of AVSM scales with subscales from the ADHD Rating Scale–IV. Test–retest reliability coefficients ranged from .62 to .74 for AVSM subscales. A significant correlation (r = .31) was obtained between the AVSM Impulsivity Scale and teacher ratings of inattention. Limitations and implications for future study are discussed.

  18. VLT observations of NGC 1097's ``dog-leg'' tidal stream. Dwarf spheroidals and tidal streams

    Science.gov (United States)

    Galianni, P.; Patat, F.; Higdon, J. L.; Mieske, S.; Kroupa, P.

    2010-10-01

Aims: We investigate the structure and stellar population of two large stellar condensations (knots A & B) along one of the faint optical “jet-like” tidal streams associated with the spiral NGC 1097, with the goal of establishing their physical association with the galaxy and their origin. Methods: We use the VLT/FORS2 to get deep V-band imaging and low-resolution optical spectra of two knots along NGC 1097's northeast “dog-leg” tidal stream. With these data, we explore their morphology and stellar populations. Results: Spectra were obtained for eleven sources in the field surrounding the tidal stream. The great majority of them turned out to be background or foreground sources, but the redshift of knot A (and perhaps of knot B) is consistent with that of NGC 1097. Using the V-band image of the “dog-leg” tidal feature we find that the two knots match the photometric scaling relations of canonical dwarf spheroidal galaxies (dSph) very well. Spectral analysis shows that knot A is mainly composed of stars near G-type, with no signs of ongoing star formation. Comparing its spectrum with a library of high resolution spectra of galactic globular clusters (GCs), we find that the stellar population of this dSph-like object is most similar to intermediate to metal rich galactic GCs. Moreover, we find that the tidal stream shows an “S” shaped inflection as well as a pronounced stellar overdensity at knot A's position. This suggests that knot A is being tidally stripped, and populating the stellar stream with its stars. Conclusions: We have discovered that two knots along NGC 1097's northeast tidal stream share most of their spectral and photometric properties with ordinary dwarf spheroidal galaxies (dSph). Moreover, we find strong indications that the “dog-leg” tidal stream arises from the tidal disruption of knot A. Since it has been demonstrated that tidally stripped dSph galaxies need to lose most of their dark matter before starting to lose stars

  19. Simulation of dust streaming in toroidal traps: Stationary flows

    Energy Technology Data Exchange (ETDEWEB)

    Reichstein, Torben; Piel, Alexander [IEAP, Christian-Albrechts-Universitaet, D-24098 Kiel (Germany)

    2011-08-15

Molecular-dynamic simulations were performed to study dust motion in a toroidal trap under the influence of the ion drag force driven by a Hall motion of the ions in the E × B direction, gravity, inter-particle forces, and friction with the neutral gas. This article is focused on the inhomogeneous stationary streaming motion. Depending on the strength of friction, the spontaneous formation of a stationary shock or a spatial bifurcation into a fast flow and a slow vortex flow is observed. In the quiescent streaming region, the particle flow features a shell structure which undergoes a structural phase transition along the flow direction.
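The abstract describes a balance of a driving ion-drag force against neutral-gas friction. Purely as a hypothetical sketch of that competition (omitting the toroidal geometry, gravity, and inter-particle forces; all parameter values are illustrative assumptions, not taken from the paper), the approach to a stationary streaming velocity can be shown with a one-line Euler integration:

```python
def integrate(a_drive=1.0, gamma=0.5, v0=0.0, dt=0.01, steps=5000):
    """Euler integration of dv/dt = a_drive - gamma * v.

    a_drive : acceleration from the (assumed constant) ion drag force
    gamma   : friction coefficient with the neutral gas
    The flow relaxes to the terminal velocity v_t = a_drive / gamma.
    """
    v = v0
    for _ in range(steps):
        v += (a_drive - gamma * v) * dt
    return v

v_final = integrate()
print(v_final)  # converges to a_drive / gamma = 2.0
```

In the paper the drag and friction vary in space, which is what allows shocks and bifurcations; this sketch only illustrates the homogeneous limit.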

  20. Stroke caused auditory attention deficits in children

    Directory of Open Access Journals (Sweden)

    Karla Maria Ibraim da Freiria Elias

    2013-01-01

Full Text Available OBJECTIVE: To verify auditory selective attention in children with stroke. METHODS: Dichotic tests of binaural separation (non-verbal and consonant-vowel) and binaural integration (digits and the Staggered Spondaic Words Test - SSW) were applied to 13 children (7 boys, aged 7 to 16 years) with unilateral stroke confirmed by neurological examination and neuroimaging. RESULTS: Attention performance showed significant differences from the control group in both kinds of tests. In the non-verbal test, identification of stimuli presented to the ear opposite the lesion was diminished in the free recall stage and, in the following stages, a difficulty in directing attention was detected. In the consonant-vowel test, a modification in perceptual asymmetry and a difficulty in focusing during the attended stages were found. In the digits and SSW tests, ipsilateral, contralateral and bilateral deficits were detected, depending on the characteristics of the lesions and the demands of the task. CONCLUSION: Stroke caused auditory attention deficits when dealing with simultaneous sources of auditory information.

  1. BALDEY: A database of auditory lexical decisions.

    Science.gov (United States)

    Ernestus, Mirjam; Cutler, Anne

    2015-01-01

    In an auditory lexical decision experiment, 5541 spoken content words and pseudowords were presented to 20 native speakers of Dutch. The words vary in phonological make-up and in number of syllables and stress pattern, and are further representative of the native Dutch vocabulary in that most are morphologically complex, comprising two stems or one stem plus derivational and inflectional suffixes, with inflections representing both regular and irregular paradigms; the pseudowords were matched in these respects to the real words. The BALDEY ("biggest auditory lexical decision experiment yet") data file includes response times and accuracy rates, with for each item morphological information plus phonological and acoustic information derived from automatic phonemic segmentation of the stimuli. Two initial analyses illustrate how this data set can be used. First, we discuss several measures of the point at which a word has no further neighbours and compare the degree to which each measure predicts our lexical decision response outcomes. Second, we investigate how well four different measures of frequency of occurrence (from written corpora, spoken corpora, subtitles, and frequency ratings by 75 participants) predict the same outcomes. These analyses motivate general conclusions about the auditory lexical decision task. The (publicly available) BALDEY database lends itself to many further analyses.
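The BALDEY data file itself is publicly available; purely as an illustration of the kind of predictor comparison the authors describe (which frequency measure best predicts lexical decision outcomes), the sketch below uses synthetic stand-in data, since none of the real values, column names, or effect sizes are reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)  # synthetic "true" word familiarity
# Simulated response times (ms): more familiar words answered faster.
rt = 700.0 - 30.0 * latent + rng.normal(scale=20.0, size=n)

# Four hypothetical log-frequency measures that track the latent
# familiarity with differing amounts of noise (names only mirror the
# sources listed in the abstract; the noise levels are arbitrary).
measures = {
    "written":   latent + rng.normal(scale=1.5, size=n),
    "spoken":    latent + rng.normal(scale=1.0, size=n),
    "subtitles": latent + rng.normal(scale=0.4, size=n),
    "ratings":   latent + rng.normal(scale=0.9, size=n),
}

# Rank measures by how strongly they (negatively) correlate with RT.
r = {name: np.corrcoef(x, rt)[0, 1] for name, x in measures.items()}
best = min(r, key=r.get)  # most negative correlation = best predictor
print(best, {k: round(v, 2) for k, v in r.items()})
```

With the real database one would substitute the actual frequency columns and response times for the synthetic arrays.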

  2. Mechanisms of auditory verbal hallucination in schizophrenia

    Directory of Open Access Journals (Sweden)

Raymond Cho

    2013-11-01

Full Text Available Recent work on the mechanisms underlying auditory verbal hallucination (AVH) has been heavily informed by self-monitoring accounts that postulate defects in an internal monitoring mechanism as the basis of AVH. A more neglected alternative is an account focusing on defects in auditory processing, namely a spontaneous activation account of auditory activity underlying AVH. Science is often aided by putting theories in competition. Accordingly, a discussion that systematically contrasts the two models of AVH can generate sharper questions that will lead to new avenues of investigation. In this paper, we provide such a theoretical discussion of the two models, drawing strong contrasts between them. We identify a set of challenges for the self-monitoring account and argue that the spontaneous activation account has much in favor of it and should be the default account. Our theoretical overview leads to new questions and issues regarding the explanation of AVH as a subjective phenomenon and its neural basis. Accordingly, we suggest a set of experimental strategies to dissect the underlying mechanisms of AVH in light of the two competing models.

  3. Hierarchical processing of auditory objects in humans.

    Directory of Open Access Journals (Sweden)

    Sukhbinder Kumar

    2007-06-01

Full Text Available This work examines the computational architecture used by the brain during the analysis of the spectral envelope of sounds, an important acoustic feature for defining auditory objects. Dynamic causal modelling and Bayesian model selection were used to evaluate a family of 16 network models explaining functional magnetic resonance imaging responses in the right temporal lobe during spectral envelope analysis. The models encode different hypotheses about the effective connectivity between Heschl's Gyrus (HG), containing the primary auditory cortex, planum temporale (PT), and superior temporal sulcus (STS), and the modulation of that coupling during spectral envelope analysis. In particular, we aimed to determine whether information processing during spectral envelope analysis takes place in a serial or parallel fashion. The analysis provides strong support for a serial architecture with connections from HG to PT and from PT to STS and an increase of the HG to PT connection during spectral envelope analysis. The work supports a computational model of auditory object processing, based on the abstraction of spectro-temporal "templates" in the PT before further analysis of the abstracted form in anterior temporal lobe areas.

  4. Binaural processing by the gecko auditory periphery.

    Science.gov (United States)

    Christensen-Dalsgaard, Jakob; Tang, Yezhong; Carr, Catherine E

    2011-05-01

    Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼ 1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200-500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional.

  5. Sensorimotor Learning Enhances Expectations During Auditory Perception.

    Science.gov (United States)

    Mathias, Brian; Palmer, Caroline; Perrin, Fabien; Tillmann, Barbara

    2015-08-01

    Sounds that have been produced with one's own motor system tend to be remembered better than sounds that have only been perceived, suggesting a role of motor information in memory for auditory stimuli. To address potential contributions of the motor network to the recognition of previously produced sounds, we used event-related potential, electric current density, and behavioral measures to investigate memory for produced and perceived melodies. Musicians performed or listened to novel melodies, and then heard the melodies either in their original version or with single pitch alterations. Production learning enhanced subsequent recognition accuracy and increased amplitudes of N200, P300, and N400 responses to pitch alterations. Premotor and supplementary motor regions showed greater current density during the initial detection of alterations in previously produced melodies than in previously perceived melodies, associated with the N200. Primary motor cortex was more strongly engaged by alterations in previously produced melodies within the P300 and N400 timeframes. Motor memory traces may therefore interface with auditory pitch percepts in premotor regions as early as 200 ms following perceived pitch onsets. Outcomes suggest that auditory-motor interactions contribute to memory benefits conferred by production experience, and support a role of motor prediction mechanisms in the production effect. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  6. Auditory Discrimination Learning: Role of Working Memory.

    Directory of Open Access Journals (Sweden)

    Yu-Xuan Zhang

Full Text Available Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience.

  7. Central auditory masking by an illusory tone.

    Science.gov (United States)

    Plack, Christopher J; Oxenham, Andrew J; Kreft, Heather A; Carlyon, Robert P

    2013-01-01

    Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.

  8. Central auditory masking by an illusory tone.

    Directory of Open Access Journals (Sweden)

    Christopher J Plack

Full Text Available Many natural sounds fluctuate over time. The detectability of sounds in a sequence can be reduced by prior stimulation in a process known as forward masking. Forward masking is thought to reflect neural adaptation or neural persistence in the auditory nervous system, but it has been unclear where in the auditory pathway this processing occurs. To address this issue, the present study used a "Huggins pitch" stimulus, the perceptual effects of which depend on central auditory processing. Huggins pitch is an illusory tonal sensation produced when the same noise is presented to the two ears except for a narrow frequency band that is different (decorrelated) between the ears. The pitch sensation depends on the combination of the inputs to the two ears, a process that first occurs at the level of the superior olivary complex in the brainstem. Here it is shown that a Huggins pitch stimulus produces more forward masking in the frequency region of the decorrelation than a noise stimulus identical to the Huggins-pitch stimulus except with perfect correlation between the ears. This stimulus has a peripheral neural representation that is identical to that of the Huggins-pitch stimulus. The results show that processing in, or central to, the superior olivary complex can contribute to forward masking in human listeners.

  9. Binaural processing by the gecko auditory periphery

    Science.gov (United States)

    Christensen-Dalsgaard, Jakob; Tang, Yezhong

    2011-01-01

    Lizards have highly directional ears, owing to strong acoustical coupling of the eardrums and almost perfect sound transmission from the contralateral ear. To investigate the neural processing of this remarkable tympanic directionality, we combined biophysical measurements of eardrum motion in the Tokay gecko with neurophysiological recordings from the auditory nerve. Laser vibrometry shows that their ear is a two-input system with approximately unity interaural transmission gain at the peak frequency (∼1.6 kHz). Median interaural delays are 260 μs, almost three times larger than predicted from gecko head size, suggesting interaural transmission may be boosted by resonances in the large, open mouth cavity (Vossen et al. 2010). Auditory nerve recordings are sensitive to both interaural time differences (ITD) and interaural level differences (ILD), reflecting the acoustical interactions of direct and indirect sound components at the eardrum. Best ITD and click delays match interaural transmission delays, with a range of 200–500 μs. Inserting a mold in the mouth cavity blocks ITD and ILD sensitivity. Thus the neural response accurately reflects tympanic directionality, and most neurons in the auditory pathway should be directional. PMID:21325679
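The mismatch the authors point out (a 260 μs median delay, roughly three times what head size alone predicts) follows from the standard acoustic travel-time estimate t = d / c. The head width used below is an assumed value chosen for illustration, not a figure taken from the paper:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C

def acoustic_itd_us(head_width_m: float) -> float:
    """Maximum interaural time difference, in microseconds, if sound
    simply travels the external distance across the head."""
    return head_width_m / SPEED_OF_SOUND * 1e6

predicted = acoustic_itd_us(0.03)  # assumed ~3 cm gecko head width
measured = 260.0                   # median internal delay reported above (μs)
print(predicted, measured / predicted)  # ~87.5 μs, ratio ~3
```

The factor-of-three excess over this simple estimate is what motivates the authors' suggestion that internal transmission through the mouth cavity boosts the delay.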

  10. Sleep and rest facilitate auditory learning.

    Science.gov (United States)

    Gottselig, J M; Hofer-Tinguely, G; Borbély, A A; Regel, S J; Landolt, H-P; Rétey, J V; Achermann, P

    2004-01-01

    Sleep is superior to waking for promoting performance improvements between sessions of visual perceptual and motor learning tasks. Few studies have investigated possible effects of sleep on auditory learning. A key issue is whether sleep specifically promotes learning, or whether restful waking yields similar benefits. According to the "interference hypothesis," sleep facilitates learning because it prevents interference from ongoing sensory input, learning and other cognitive activities that normally occur during waking. We tested this hypothesis by comparing effects of sleep, busy waking (watching a film) and restful waking (lying in the dark) on auditory tone sequence learning. Consistent with recent findings for human language learning, we found that compared with busy waking, sleep between sessions of auditory tone sequence learning enhanced performance improvements. Restful waking provided similar benefits, as predicted based on the interference hypothesis. These findings indicate that physiological, behavioral and environmental conditions that accompany restful waking are sufficient to facilitate learning and may contribute to the facilitation of learning that occurs during sleep.

  11. Auditory perception of a human walker.

    Science.gov (United States)

    Cottrell, David; Campbell, Megan E J

    2014-01-01

When one hears footsteps in the hall, one is able to instantly recognise them as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First, a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  12. Re-Meandering of Lowland Streams

    DEFF Research Database (Denmark)

    Pedersen, Morten Lauge; Kristensen, Klaus Kevin; Friberg, Nikolai

    2014-01-01

We evaluated the restoration of physical habitats and its influence on macroinvertebrate community structure in 18 Danish lowland streams comprising six restored streams, six streams with little physical alteration and six channelized streams. We hypothesized that physical habitats and macroinvertebrate communities of restored streams would resemble those of natural streams, while those of the channelized streams would differ from both restored and near-natural streams. Physical habitats were surveyed for substrate composition, depth, width and current velocity. Macroinvertebrates were sampled along 100 m reaches in each stream, in edge habitats and in riffle/run habitats located in the center of the stream. Restoration significantly altered the physical conditions and affected the interactions between stream habitat heterogeneity and macroinvertebrate diversity. The substrate in the restored

  13. Landau-Kleffner syndrome: epileptic activity in the auditory cortex.

    Science.gov (United States)

    Paetau, R; Kajola, M; Korkman, M; Hämäläinen, M; Granström, M L; Hari, R

    1991-04-01

    The Landau-Kleffner syndrome (LKS) is characterized by electroencephalographic spike discharges and verbal auditory agnosia in previously healthy children. We recorded magnetoencephalographic (MEG) spikes in a patient with LKS, and compared their sources with anatomical information from magnetic resonance imaging. All spikes originated close to the left auditory cortex. The evoked responses were contaminated by spikes in the left auditory area and suppressed in the right--the latter responses recovered when the spikes disappeared. We suggest that unilateral discharges at or near the auditory cortex disrupt auditory discrimination in the affected hemisphere, and lead to suppression of auditory information from the opposite hemisphere, thereby accounting for the two main criteria of LKS.

  14. Acute auditory agnosia as the presenting hearing disorder in MELAS.

    Science.gov (United States)

    Miceli, Gabriele; Conti, Guido; Cianfoni, Alessandro; Di Giacopo, Raffaella; Zampetti, Patrizia; Servidei, Serenella

    2008-12-01

    MELAS is commonly associated with peripheral hearing loss. Auditory agnosia is a rare cortical auditory impairment, usually due to bilateral temporal damage. We document, for the first time, auditory agnosia as the presenting hearing disorder in MELAS. A young woman with MELAS (A3243G mtDNA mutation) suffered from acute cortical hearing damage following a single stroke-like episode, in the absence of previous hearing deficits. Audiometric testing showed marked central hearing impairment and very mild sensorineural hearing loss. MRI documented bilateral, acute lesions to superior temporal regions. Neuropsychological tests demonstrated auditory agnosia without aphasia. Our data and a review of published reports show that cortical auditory disorders are relatively frequent in MELAS, probably due to the strikingly high incidence of bilateral and symmetric damage following stroke-like episodes. Acute auditory agnosia can be the presenting hearing deficit in MELAS and, conversely, MELAS should be suspected in young adults with sudden hearing loss.

  15. [Verbal auditory agnosia: SPECT study of the brain].

    Science.gov (United States)

    Carmona, C; Casado, I; Fernández-Rojas, J; Garín, J; Rayo, J I

    1995-01-01

Verbal auditory agnosia is rare in clinical practice. Clinically, it is characterized by impaired comprehension and repetition of speech, while reading, writing, and spontaneous speech are preserved. It is thus distinguished from generalized auditory agnosia by the preserved ability to recognize non-verbal sounds. We present the clinical picture of a forty-year-old, right-handed woman who developed verbal auditory agnosia after bilateral temporal ischemic infarcts due to atrial fibrillation secondary to dilated cardiomyopathy. Neurophysiological studies (pure-tone threshold audiometry, brainstem auditory evoked potentials, and cortical auditory evoked potentials) showed sparing of peripheral hearing and an intact auditory pathway in the brainstem, but impaired cortical responses. Cranial CT scan revealed two large hypodense areas involving both cortico-subcortical temporal lobes. Cerebral SPECT using 99mTc-HMPAO as radiotracer showed hypoperfusion in the posterior part of both frontal lobes next to Roland's fissure, and in both temporal lobes just anterior to the Sylvian fissure.

  16. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.

  17. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

  18. Congruent Visual Speech Enhances Cortical Entrainment to Continuous Auditory Speech in Noise-Free Conditions.

    Science.gov (United States)

    Crosse, Michael J; Butler, John S; Lalor, Edmund C

    2015-10-21

    Congruent audiovisual speech enhances our ability to comprehend a speaker, even in noise-free conditions. When incongruent auditory and visual information is presented concurrently, it can hinder a listener's perception and even cause him or her to perceive information that was not presented in either modality. Efforts to investigate the neural basis of these effects have often focused on the special case of discrete audiovisual syllables that are spatially and temporally congruent, with less work done on the case of natural, continuous speech. Recent electrophysiological studies have demonstrated that cortical response measures to continuous auditory speech can be easily obtained using multivariate analysis methods. Here, we apply such methods to the case of audiovisual speech and, importantly, present a novel framework for indexing multisensory integration in the context of continuous speech. Specifically, we examine how the temporal and contextual congruency of ongoing audiovisual speech affects the cortical encoding of the speech envelope in humans using electroencephalography. We demonstrate that the cortical representation of the speech envelope is enhanced by the presentation of congruent audiovisual speech in noise-free conditions. Furthermore, we show that this is likely attributable to the contribution of neural generators that are not particularly active during unimodal stimulation and that it is most prominent at the temporal scale corresponding to syllabic rate (2-6 Hz). Finally, our data suggest that neural entrainment to the speech envelope is inhibited when the auditory and visual streams are incongruent both temporally and contextually. Seeing a speaker's face as he or she talks can greatly help in understanding what the speaker is saying. This is because the speaker's facial movements relay information not only about what the speaker is saying but also, importantly, about when the speaker is saying it. Studying how the brain uses this timing relationship to
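    The multivariate analysis the abstract refers to is commonly implemented as a forward temporal response function (TRF): a ridge-regularized linear mapping from time-lagged copies of the speech envelope to the recorded neural response. A minimal sketch in Python with synthetic data follows; the sampling rate, lag range, and ridge parameter here are illustrative assumptions, not values from the study.

    ```python
    import numpy as np

    def lagged_matrix(stim, lags):
        """Design matrix whose columns are time-shifted copies of the stimulus."""
        n = len(stim)
        X = np.zeros((n, len(lags)))
        for j, lag in enumerate(lags):
            X[lag:, j] = stim[:n - lag]
        return X

    def fit_trf(stim, response, lags, ridge=1.0):
        """Ridge-regularized forward model: stimulus envelope -> neural response."""
        X = lagged_matrix(stim, lags)
        # Closed-form ridge solution: (X'X + lambda*I)^-1 X'y
        w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ response)
        return w

    # Synthetic example: the "EEG" is the envelope convolved with a known
    # kernel plus noise, so the fitted TRF should recover that kernel.
    rng = np.random.default_rng(0)
    fs = 64                                    # assumed sampling rate (Hz)
    envelope = np.abs(rng.standard_normal(fs * 60))
    kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.3, 0.0, 0.0])
    eeg = np.convolve(envelope, kernel)[:len(envelope)] \
        + 0.1 * rng.standard_normal(len(envelope))

    lags = np.arange(8)                        # 0 to ~110 ms at 64 Hz
    w = fit_trf(envelope, eeg, lags, ridge=0.1)
    pred = lagged_matrix(envelope, lags) @ w
    r = np.corrcoef(pred, eeg)[0, 1]           # envelope-tracking accuracy
    print(round(float(r), 2))
    ```

    The correlation between predicted and recorded responses is the kind of envelope-tracking measure that can then be compared across congruent, incongruent, and unimodal conditions.
    
    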

  19. Auditory cortical processing in real-world listening: the auditory system going real.

    Science.gov (United States)

    Nelken, Israel; Bizley, Jennifer; Shamma, Shihab A; Wang, Xiaoqin

    2014-11-12

    The auditory sense of humans transforms intrinsically senseless pressure waveforms into spectacularly rich perceptual phenomena: the music of Bach or the Beatles, the poetry of Li Bai or Omar Khayyam, or more prosaically the sense of the world filled with objects emitting sounds that is so important for those of us lucky enough to have hearing. Whereas the early representations of sounds in the auditory system are based on their physical structure, higher auditory centers are thought to represent sounds in terms of their perceptual attributes. In this symposium, we will illustrate the current research into this process, using four case studies. We will illustrate how the spectral and temporal properties of sounds are used to bind together, segregate, categorize, and interpret sound patterns on their way to acquire meaning, with important lessons to other sensory systems as well. Copyright © 2014 the authors.

  20. Cognitive factors shape brain networks for auditory skills: spotlight on auditory working memory.

    Science.gov (United States)

    Kraus, Nina; Strait, Dana L; Parbery-Clark, Alexandra

    2012-04-01

    Musicians benefit from real-life advantages, such as a greater ability to hear speech in noise and to remember sounds, although the biological mechanisms driving such advantages remain undetermined. Furthermore, the extent to which these advantages are a consequence of musical training or innate characteristics that predispose a given individual to pursue music training is often debated. Here, we examine biological underpinnings of musicians' auditory advantages and the mediating role of auditory working memory. Results from our laboratory are presented within a framework that emphasizes auditory working memory as a major factor in the neural processing of sound. Within this framework, we provide evidence for music training as a contributing source of these abilities. © 2012 New York Academy of Sciences.