WorldWideScience

Sample records for sound localization cue

  1. Development of the sound localization cues in cats

    Science.gov (United States)

    Tollin, Daniel J.

    2004-05-01

    Cats are a common model for developmental studies of the psychophysical and physiological mechanisms of sound localization. Yet, there are few studies on the development of the acoustical cues to location in cats. The magnitudes of the three main cues, interaural differences in time (ITDs) and level (ILDs) and monaural spectral-shape cues, vary with location in adults. However, the increasing interaural distance associated with a growing head and pinnae during development will result in cues that change continuously until maturation is complete. Here, we report measurements, in cats aged 1 week to adulthood, of the physical dimensions of the head and pinnae and of the localization cues, computed from measurements of directional transfer functions. At 1 week, ILD depended little on azimuth at low frequencies; with growth, the prominent spectral notches (>10 dB) shift to lower frequencies, and the maximum ITD increases to nearly 370 μs. Changes in the cues are correlated with the increasing size of the head and pinnae. [Work supported by NIDCD DC05122.]
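The link between head growth and the maximum ITD reported above can be illustrated with the classic Woodworth spherical-head approximation. This is not the paper's model; the head radius below is simply back-computed from the reported ~370 μs maximum and is illustrative only.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m, c=343.0):
    """Woodworth spherical-head ITD approximation: ITD = (r/c)(theta + sin theta)."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# An adult-cat maximum ITD near 370 us implies an effective radius of roughly
# 370e-6 * 343 / (pi/2 + 1) ~ 0.049 m (assumed value for illustration).
itd_90 = woodworth_itd(90.0, 0.049)
print(round(itd_90 * 1e6))  # maximum ITD in microseconds, at 90 deg azimuth
```

As the model makes explicit, the maximum ITD scales linearly with head radius, which is why it grows continuously until the head reaches adult size.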

  2. Cue Reliability Represented in the Shape of Tuning Curves in the Owl's Sound Localization System.

    Science.gov (United States)

    Cazettes, Fanny; Fischer, Brian J; Peña, Jose L

    2016-02-17

    Optimal use of sensory information requires that the brain estimates the reliability of sensory cues, but the neural correlate of cue reliability relevant for behavior is not well defined. Here, we addressed this issue by examining how the reliability of spatial cue influences neuronal responses and behavior in the owl's auditory system. We show that the firing rate and spatial selectivity changed with cue reliability due to the mechanisms generating the tuning to the sound localization cue. We found that the correlated variability among neurons strongly depended on the shape of the tuning curves. Finally, we demonstrated that the change in the neurons' selectivity was necessary and sufficient for a network of stochastic neurons to predict behavior when sensory cues were corrupted with noise. This study demonstrates that the shape of tuning curves can stand alone as a coding dimension of environmental statistics. In natural environments, sensory cues are often corrupted by noise and are therefore unreliable. To make the best decisions, the brain must estimate the degree to which a cue can be trusted. The behaviorally relevant neural correlates of cue reliability are debated. In this study, we used the barn owl's sound localization system to address this question. We demonstrated that the mechanisms that account for spatial selectivity also explained how neural responses changed with degraded signals. This allowed for the neurons' selectivity to capture cue reliability, influencing the population readout commanding the owl's sound-orienting behavior. Copyright © 2016 the authors.

  3. Do you hear where I hear?: Isolating the individualized sound localization cues.

    Directory of Open Access Journals (Sweden)

    Griffin David Romigh

    2014-12-01

    It is widely acknowledged that individualized head-related transfer function (HRTF) measurements are needed to adequately capture all of the 3D spatial hearing cues. However, many perceptual studies have shown that localization accuracy in the lateral dimension is only minimally decreased by the use of non-individualized head-related transfer functions. This evidence supports the idea that the individualized components of an HRTF could be isolated from those that are more general in nature. In the present study we decomposed the HRTF at each location into average, lateral and intraconic spectral components, along with an ITD, in an effort to isolate the sound localization cues that are responsible for the inter-individual differences in localization performance. HRTFs for a given listener were then reconstructed systematically with components that were both individualized and non-individualized in nature, and the effect of each modification was analyzed via a virtual localization test where brief 250-ms noise bursts were rendered with the modified HRTFs. Results indicate that the cues important for individualization of HRTFs are contained almost exclusively in the intraconic portion of the HRTF spectra and that localization is only minimally affected by introducing non-individualized cues into the other HRTF components. These results provide new insights into which specific inter-individual differences in head-related acoustical features are most relevant to sound localization, and provide a framework for how future human-machine interfaces might be more effectively generalized and/or individualized.
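A decomposition of the kind described (average, lateral, and intraconic spectral components that sum back to the HRTF) can be sketched in simplified form. The random data, the grouping of directions by lateral angle, and the plain per-cone averaging below are assumptions for illustration, not the authors' actual procedure:

```python
import numpy as np

# Hypothetical HRTF magnitude data: shape (n_directions, n_freqs), in dB.
# Directions sharing a lateral angle lie on the same "cone of confusion".
rng = np.random.default_rng(0)
n_dir, n_freq = 12, 64
hrtf_db = rng.normal(0.0, 6.0, (n_dir, n_freq))
lateral_angle = np.repeat([-45, 0, 45], 4)       # 3 cones, 4 directions each

average = hrtf_db.mean(axis=0)                   # direction-independent average
residual = hrtf_db - average
lateral = np.zeros_like(hrtf_db)
for ang in np.unique(lateral_angle):
    mask = lateral_angle == ang
    lateral[mask] = residual[mask].mean(axis=0)  # per-cone (lateral) component
intraconic = residual - lateral                  # what varies within a cone

# The three components sum back to the original HRTF.
recon = average + lateral + intraconic
print(np.allclose(recon, hrtf_db))  # True
```

The point of such a split is that the intraconic residual can be swapped between listeners independently of the average and lateral parts, which is what allows testing where the individualized cues live.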

  4. Sound localization in common vampire bats: Acuity and use of the binaural time cue by a small mammal

    Science.gov (United States)

    Heffner, Rickye S.; Koay, Gimseong; Heffner, Henry E.

    2015-01-01

    Passive sound-localization acuity and the ability to use binaural time and intensity cues were determined for the common vampire bat (Desmodus rotundus). The bats were tested using a conditioned suppression/avoidance procedure in which they drank defibrinated blood from a spout in the presence of sounds from their right, but stopped drinking (i.e., broke contact with the spout) whenever a sound came from their left, thereby avoiding a mild shock. The mean minimum audible angle for three bats for a 100-ms noise burst was 13.1°—within the range of thresholds for other bats and near the mean for mammals. Common vampire bats readily localized pure tones of 20 kHz and higher, indicating they could use interaural intensity-differences. They could also localize pure tones of 5 kHz and lower, thereby demonstrating the use of interaural time-differences, despite their very small maximum interaural distance of 60 μs. A comparison of the use of locus cues among mammals suggests several implications for the evolution of sound localization and its underlying anatomical and physiological mechanisms. PMID:25618037

  5. Contribution of monaural and binaural cues to sound localization in listeners with acquired unilateral conductive hearing loss: improved directional hearing with a bone-conduction device.

    Science.gov (United States)

    Agterberg, Martijn J H; Snik, Ad F M; Hol, Myrthe K S; Van Wanrooij, Marc M; Van Opstal, A John

    2012-04-01

    Sound localization in the horizontal (azimuth) plane relies mainly on interaural time differences (ITDs) and interaural level differences (ILDs). Both are distorted in listeners with acquired unilateral conductive hearing loss (UCHL), reducing their ability to localize sound. Several studies demonstrated that UCHL listeners had some ability to localize sound in azimuth. To test whether listeners with acquired UCHL use strongly perturbed binaural difference cues, we measured localization while they listened with a sound-attenuating earmuff over their impaired ear. We also tested the potential use of monaural pinna-induced spectral-shape cues for localization in azimuth and elevation, by filling the cavities of the pinna of their better-hearing ear with a mould. These conditions were tested while a bone-conduction device (BCD), fitted to all UCHL listeners in order to provide hearing from the impaired side, was turned off. We varied stimulus presentation levels to investigate whether UCHL listeners were using sound level as an azimuth cue. Furthermore, we examined whether horizontal sound-localization abilities improved when listeners used their BCD. Ten control listeners without hearing loss demonstrated a significant decrease in their localization abilities when they listened with a monaural plug and muff. In 4/13 UCHL listeners we observed good horizontal localization of 65 dB SPL broadband noises with their BCD turned off. Localization was strongly impaired when the impaired ear was covered with the muff. The mould in the good ear of listeners with UCHL deteriorated the localization of broadband sounds presented at 45 dB SPL. This demonstrates that they used pinna cues to localize sounds presented at low levels. Our data demonstrate that UCHL listeners have learned to adapt their localization strategies under a wide variety of hearing conditions and that sound-localization abilities improved with their BCD turned on.

  6. Cues for localization in the horizontal plane

    DEFF Research Database (Denmark)

    Jeppesen, Jakob; Møller, Henrik

    2005-01-01

    Spatial localization of sound is often described as unconscious evaluation of cues given by the interaural time difference (ITD) and the spectral information of the sound that reaches the two ears. Our present knowledge suggests the hypothesis that the ITD roughly determines the cone of the perce...... independently in HRTFs used for binaural synthesis. The ITD seems to be dominant for localization in the horizontal plane even when the spectral information is severely degraded....

  7. Cues for localization in the horizontal plane

    DEFF Research Database (Denmark)

    Jeppesen, Jakob; Møller, Henrik

    2005-01-01

    manipulated in HRTFs used for binaural synthesis of sound in the horizontal plane. The manipulation of cues resulted in HRTFs with cues ranging from correct combinations of spectral information and ITDs to combinations with severely conflicting cues. Both the ITD and the spectral information seem...

  8. Using ILD or ITD Cues for Sound Source Localization and Speech Understanding in a Complex Listening Environment by Listeners with Bilateral and with Hearing-Preservation Cochlear Implants

    Science.gov (United States)

    Loiselle, Louise H.; Dorman, Michael F.; Yost, William A.; Cook, Sarah J.; Gifford, Rene H.

    2016-01-01

    Purpose: To assess the role of interaural time differences and interaural level differences in (a) sound-source localization, and (b) speech understanding in a cocktail party listening environment for listeners with bilateral cochlear implants (CIs) and for listeners with hearing-preservation CIs. Methods: Eleven bilateral listeners with MED-EL…

  9. Sound localization and occupational noise

    Directory of Open Access Journals (Sweden)

    Pedro de Lemos Menezes

    2014-02-01

    OBJECTIVE: The aim of this study was to determine the effects of occupational noise on sound localization in different spatial planes and frequencies among normal-hearing firefighters. METHOD: A total of 29 adults with pure-tone hearing thresholds below 25 dB took part in the study. The participants were divided into a group of 19 firefighters exposed to occupational noise and a control group of 10 adults who were not exposed to such noise. All subjects were assigned a sound localization task involving 117 stimuli from 13 sound sources that were spatially distributed in horizontal, vertical, midsagittal and transverse planes. The three stimuli, which were square waves with fundamental frequencies of 500, 2,000 and 4,000 Hz, were presented at a sound level of 70 dB and were randomly repeated three times from each sound source. The angle between the speakers' axes in the same plane was 45°, and the distance to the subject was 1 m. RESULT: The results demonstrate that the sound localization ability of the firefighters was significantly lower (p<0.01) than that of the control group. CONCLUSION: Exposure to occupational noise, even when not resulting in hearing loss, may lead to a diminished ability to locate a sound source.

  10. Do top predators cue on sound production by mesopelagic prey?

    Science.gov (United States)

    Baumann-Pickering, S.; Checkley, D. M., Jr.; Demer, D. A.

    2016-02-01

    Deep-scattering layer (DSL) organisms, comprising a variety of mesopelagic fishes, and squids, siphonophores, crustaceans, and other invertebrates, are preferred prey for numerous large marine predators, e.g. cetaceans, seabirds, and fishes. Some of the DSL species migrate from depth during daylight to feed near the surface at night, transitioning during dusk and dawn. We investigated if any DSL organisms create sound, particularly during the crepuscular periods. Over several nights in summer 2015, underwater sound was recorded in the San Diego Trough using a high-frequency acoustic recording package (HARP, 10 Hz to 100 kHz), suspended from a drifting surface float. Acoustic backscatter from the DSL was monitored nearby using a calibrated multiple-frequency (38, 70, 120, and 200 kHz) split-beam echosounder (Simrad EK60) on a small boat. DSL organisms produced sound, between 300 and 1000 Hz, and the received levels were highest when the animals migrated past the recorder during ascent and descent. The DSL are globally present, so the observed acoustic phenomenon, if also ubiquitous, has wide-reaching implications. Sound travels farther than light or chemicals and thus can be sensed at greater distances by predators, prey, and mates. If sound is a characteristic feature of pelagic ecosystems, it likely plays a role in predator-prey relationships and overall ecosystem dynamics. Our new finding inspires numerous questions such as: Which, how, and why have DSL organisms evolved to create sound, for what do they use it and under what circumstances? Is sound production by DSL organisms truly ubiquitous, or does it depend on the local environment and species composition? How may sound production and perception be adapted to a changing environment? Do predators react to changes in sound? Can sound be used to quantify the composition of mixed-species assemblages, component densities and abundances, and hence be used in stock assessment or predictive modeling?

  11. Numerical value biases sound localization.

    Science.gov (United States)

    Golob, Edward J; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R

    2017-12-08

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perceptual judgments of sound location as a function of digit magnitude (1-9). The main finding was that for stimuli presented near the median plane there was a linear left-to-right bias for localizing smaller-to-larger numbers. At lateral locations there was a central-eccentric location bias in the pointing task, and either a bias restricted to the smaller numbers (left side) or no significant number bias (right side). Prior number location also biased subsequent number judgments towards the opposite side. Findings support a lexical influence on auditory spatial perception, with a linear mapping near midline and more complex relations at lateral locations. Results may reflect coding of dedicated spatial channels, with two representing lateral positions in each hemispace, and the midline area represented by either their overlap or a separate third channel.

  12. Material sound source localization through headphones

    Science.gov (United States)

    Dunai, Larisa; Peris-Fajarnes, Guillermo; Lengua, Ismael Lengua; Montaña, Ignacio Tortajada

    2012-09-01

    In the present paper a study of sound localization is carried out, considering two different sounds emitted from different hit materials (wood and bongo) as well as a Delta sound. The motivation of this research is to study how humans localize sounds coming from different materials, with the purpose of a future implementation of the acoustic sounds with better localization features in navigation aid systems or training audio-games suited for blind people. Wood and bongo sounds are recorded after hitting two objects made of these materials. Afterwards, they are analysed and processed. The Delta sound (click), in contrast, is generated using the Adobe Audition software at a sampling rate of 44.1 kHz. All sounds are analysed and convolved with previously measured non-individual Head-Related Transfer Functions, both for an anechoic environment and for an environment with reverberation. The First Choice method is used in this experiment. Subjects are asked to localize the source position of the sound listened to through the headphones, by using a graphic user interface. The analyses of the recorded data reveal that no significant differences are obtained either when considering the nature of the sounds (wood, bongo, Delta) or their environmental context (with or without reverberation). The localization accuracies for the anechoic sounds are: wood 90.19%, bongo 92.96% and Delta sound 89.59%, whereas for the sounds with reverberation the results are: wood 90.59%, bongo 92.63% and Delta sound 90.91%. According to these data, we can conclude that even when considering the reverberation effect, the localization accuracy does not significantly increase.
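Presenting a sound "through headphones" at a virtual position, as in this study, amounts to convolving the source signal with a left/right head-related impulse response (HRIR) pair. A minimal sketch follows; the pure-delay "HRIRs" are stand-ins for measured responses, used only to show the mechanics:

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with an HRIR pair to get a 2-channel signal."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Toy "HRIRs": pure delays mimicking a source on the left
# (right ear delayed by 5 samples relative to the left).
hrir_l = np.zeros(16); hrir_l[0] = 1.0
hrir_r = np.zeros(16); hrir_r[5] = 1.0

click = np.zeros(32); click[0] = 1.0   # a Delta sound (click)
stereo = render_binaural(click, hrir_l, hrir_r)
itd_samples = np.argmax(stereo[1]) - np.argmax(stereo[0])
print(itd_samples)  # 5
```

Real HRIRs also encode level and spectral differences; adding a room impulse response to the convolution chain is what distinguishes the reverberant condition from the anechoic one.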

  13. Horizontal sound localization in cochlear implant users with a contralateral hearing aid.

    Science.gov (United States)

    Veugen, Lidwien C E; Hendrikse, Maartje M E; van Wanrooij, Marc M; Agterberg, Martijn J H; Chalupper, Josef; Mens, Lucas H M; Snik, Ad F M; John van Opstal, A

    2016-06-01

    Interaural differences in sound arrival time (ITD) and in level (ILD) enable us to localize sounds in the horizontal plane, and can support source segregation and speech understanding in noisy environments. It is uncertain whether these cues are also available to hearing-impaired listeners who are bimodally fitted, i.e. with a cochlear implant (CI) and a contralateral hearing aid (HA). Here, we assessed sound localization behavior of fourteen bimodal listeners, all using the same Phonak HA and an Advanced Bionics CI processor, matched with respect to loudness growth. We aimed to determine the availability and contribution of binaural (ILDs, temporal fine structure and envelope ITDs) and monaural (loudness, spectral) cues to horizontal sound localization in bimodal listeners, by systematically varying the frequency band, level and envelope of the stimuli. The sound bandwidth had a strong effect on the localization bias of bimodal listeners, although localization performance was typically poor for all conditions. Responses could be systematically changed by adjusting the frequency range of the stimulus, or by simply switching the HA and CI on and off. Localization responses were largely biased to one side, typically the CI side for broadband and high-pass filtered sounds, and occasionally to the HA side for low-pass filtered sounds. HA-aided thresholds better than 45 dB HL in the frequency range of the stimulus appeared to be a prerequisite, but not a guarantee, for the ability to indicate sound source direction. We argue that bimodal sound localization is likely based on ILD cues, even at frequencies below 1500 Hz for which the natural ILDs are small. These cues are typically perturbed in bimodal listeners, leading to a biased localization percept of sounds. The high accuracy of some listeners could result from a combination of sufficient spectral overlap and loudness balance in bimodal hearing. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Localization Performance of Multiple Vibrotactile Cues on Both Arms.

    Science.gov (United States)

    Wang, Dangxiao; Peng, Cong; Afzal, Naqash; Li, Weiang; Wu, Dong; Zhang, Yuru

    2018-01-01

    To present information using vibrotactile stimuli in wearable devices, it is fundamental to understand human performance in localizing vibrotactile cues across the skin surface. In this paper, we studied human ability to identify locations of multiple vibrotactile cues activated simultaneously on both arms. Two haptic bands were mounted in proximity to the elbow and shoulder joints on each arm, and two vibrotactile motors were mounted on each band to provide vibration cues to the dorsal and palmar side of the arm. The localization performance under four conditions was compared, with the number of simultaneously activated cues varying from one to four in each condition. Experimental results illustrate that the rate of correct localization decreases linearly with the increase in the number of activated cues. It was 27.8 percent for three activated cues, and became even lower for four activated cues. An analysis of the correct rate and error patterns shows that the layout of vibrotactile cues can have significant effects on the localization performance of multiple vibrotactile cues. These findings might provide guidelines for using vibrotactile cues to guide the simultaneous motion of multiple joints on both arms.

  15. The Influence of Visual Cues on Sound Externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Background: The externalization of virtual sounds reproduced via binaural headphone-based auralization systems has been reported to be less robust when the listening environment differs from the room in which binaural room impulse responses (BRIRs) were recorded. It has been debated whether… Methods: Eighteen naïve listeners rated the externalization of virtual stimuli in terms of perceived distance, azimuthal localization, and compactness in three rooms: 1) a standard IEC listening room, 2) a small reverberant room, and 3) a large dry room. Before testing, individual BRIRs were recorded in room 1 while listeners wore both earplugs and blindfolds. Half of the listeners were then blindfolded during testing but were provided auditory awareness of the room via a controlled noise source (condition A). The other half could see the room but were shielded from room-related acoustic input and tested…

  16. Numerical value biases sound localization

    OpenAIRE

    Golob, Edward J.; Lewald, Jörg; Getzmann, Stephan; Mock, Jeffrey R.

    2017-01-01

    Speech recognition starts with representations of basic acoustic perceptual features and ends by categorizing the sound based on long-term memory for word meaning. However, little is known about whether the reverse pattern of lexical influences on basic perception can occur. We tested for a lexical influence on auditory spatial perception by having subjects make spatial judgments of number stimuli. Four experiments used pointing or left/right 2-alternative forced choice tasks to examine perce...

  17. Hearing in alpacas (Vicugna pacos): audiogram, localization acuity, and use of binaural locus cues.

    Science.gov (United States)

    Heffner, Rickye S; Koay, Gimseong; Heffner, Henry E

    2014-02-01

    Behavioral audiograms and sound localization abilities were determined for three alpacas (Vicugna pacos). Their hearing at a level of 60 dB sound pressure level (SPL) (re 20 μPa) extended from 40 Hz to 32.8 kHz, a range of 9.7 octaves. They were most sensitive at 8 kHz, with an average threshold of -0.5 dB SPL. The minimum audible angle around the midline for 100-ms broadband noise was 23°, indicating relatively poor localization acuity and potentially supporting the finding that animals with broad areas of best vision have poorer sound localization acuity. The alpacas were able to localize low-frequency pure tones, indicating that they can use the binaural phase cue, but they were unable to localize pure tones above the frequency of phase ambiguity, thus indicating complete inability to use the binaural intensity-difference cue. In contrast, the alpacas relied on their high-frequency hearing for pinna cues; they could discriminate front-back sound sources using 3-kHz high-pass noise, but not 3-kHz low-pass noise. These results are compared to those of other hoofed mammals and to mammals more generally.
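The "frequency of phase ambiguity" mentioned above is the point at which a tone's period becomes shorter than twice the maximum interaural delay, so an interaural phase difference no longer maps to a unique side. A back-of-the-envelope helper; the 250 μs delay used in the example is a hypothetical value, not the alpacas' measured interaural travel time:

```python
def phase_ambiguity_frequency(max_itd_s):
    """Frequency above which the interaural phase cue becomes ambiguous:
    half a period fits inside the maximum interaural delay."""
    return 1.0 / (2.0 * max_itd_s)

# For a hypothetical maximum interaural travel time of 250 microseconds:
print(phase_ambiguity_frequency(250e-6))  # 2000.0 Hz
```

Because this cutoff scales inversely with head size, small-headed species lose the phase cue at much higher frequencies than large-headed ones.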

  18. Experience with speech sounds is not necessary for cue trading by budgerigars (Melopsittacus undulatus).

    Directory of Open Access Journals (Sweden)

    Mary Flaherty

    The influence of experience with human speech sounds on speech perception in budgerigars, vocal mimics whose speech exposure can be tightly controlled in a laboratory setting, was measured. Budgerigars were divided into groups that differed in auditory exposure and then tested on a cue-trading identification paradigm with synthetic speech. Phonetic cue trading is a perceptual phenomenon observed when changes on one cue dimension are offset by changes in another cue dimension while still maintaining the same phonetic percept. The current study examined whether budgerigars would trade the cues of voice onset time (VOT) and the first formant onset frequency when identifying syllable-initial stop consonants, and whether this would be influenced by exposure to speech sounds. There were a total of four different exposure groups: No speech exposure (completely isolated), Passive speech exposure (regular exposure to human speech), and two Speech-trained groups. After the exposure period, all budgerigars were tested for phonetic cue trading using operant conditioning procedures. Birds were trained to peck keys in response to different synthetic speech sounds that began with "d" or "t" and varied in VOT and frequency of the first formant at voicing onset. Once training performance criteria were met, budgerigars were presented with the entire intermediate series, including ambiguous sounds. Responses on these trials were used to determine which speech cues were used, whether a trading relation between VOT and the onset frequency of the first formant was present, and whether speech exposure had an influence on perception. Cue trading was found in all birds and these results were largely similar to those of a group of humans. Results indicated that prior speech experience was not a requirement for cue trading by budgerigars. The results are consistent with theories that explain phonetic cue trading in terms of a rich auditory encoding of the speech signal.

  19. Discrimination and streaming of speech sounds based on differences in interaural and spectral cues.

    Science.gov (United States)

    David, Marion; Lavandier, Mathieu; Grimault, Nicolas; Oxenham, Andrew J

    2017-09-01

    Differences in spatial cues, including interaural time differences (ITDs), interaural level differences (ILDs) and spectral cues, can lead to stream segregation of alternating noise bursts. It is unknown how effective such cues are for streaming sounds with realistic spectro-temporal variations. In particular, it is not known whether the high-frequency spectral cues associated with elevation remain sufficiently robust under such conditions. To answer these questions, sequences of consonant-vowel tokens were generated and filtered by non-individualized head-related transfer functions to simulate the cues associated with different positions in the horizontal and median planes. A discrimination task showed that listeners could discriminate changes in interaural cues both when the stimulus remained constant and when it varied between presentations. However, discrimination of changes in spectral cues was much poorer in the presence of stimulus variability. A streaming task, based on the detection of repeated syllables in the presence of interfering syllables, revealed that listeners can use both interaural and spectral cues to segregate alternating syllable sequences, despite the large spectro-temporal differences between stimuli. However, only the full complement of spatial cues (ILDs, ITDs, and spectral cues) resulted in obligatory streaming in a task that encouraged listeners to integrate the tokens into a single stream.

  20. Magpies can use local cues to retrieve their food caches.

    Science.gov (United States)

    Feenders, Gesa; Smulders, Tom V

    2011-03-01

    Much importance has been placed on the use of spatial cues by food-hoarding birds in the retrieval of their caches. In this study, we investigate whether food-hoarding birds can be trained to use local cues ("beacons") in their cache retrieval. We test magpies (Pica pica) in an active hoarding-retrieval paradigm, where local cues are always reliable, while spatial cues are not. Our results show that the birds use the local cues to retrieve their caches, even when occasionally contradicting spatial information is available. The design of our study does not allow us to test rigorously whether the birds prefer using local over spatial cues, nor to investigate the process through which they learn to use local cues. We furthermore provide evidence that magpies develop landmark preferences, which improve their retrieval accuracy. Our findings support the hypothesis that birds are flexible in their use of memory information, using a combination of the most reliable or salient information to retrieve their caches. © Springer-Verlag 2010

  1. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds

    Science.gov (United States)

    Dietz, Mathias; Marquardt, Torsten; Salminen, Nelli H.; McAlpine, David

    2013-01-01

    The ability to locate the direction of a target sound in a background of competing sources is critical to the survival of many species and important for human communication. Nevertheless, brain mechanisms that provide for such accurate localization abilities remain poorly understood. In particular, it remains unclear how the auditory brain is able to extract reliable spatial information directly from the source when competing sounds and reflections dominate all but the earliest moments of the sound wave reaching each ear. We developed a stimulus mimicking the mutual relationship of sound amplitude and binaural cues characteristic of reverberant speech. This stimulus, named the amplitude-modulated binaural beat, allows for a parametric and isolated change of modulation frequency and phase relations. Employing magnetoencephalography and psychoacoustics, it is demonstrated that the auditory brain uses binaural information in the stimulus fine structure only during the rising portion of each modulation cycle, rendering spatial information recoverable in an otherwise unlocalizable sound. The data suggest that amplitude modulation provides a means of “glimpsing” low-frequency spatial cues in a manner that benefits listening in noisy or reverberant environments. PMID:23980161
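A stimulus of the amplitude-modulated binaural beat kind can be sketched as follows: the two ears receive carriers offset by the modulation rate, so the interaural phase difference sweeps through a full cycle once per modulation period while both ears share the same envelope. All parameter values below are illustrative, not the ones used in the study:

```python
import numpy as np

fs = 44100                      # sample rate (Hz)
dur = 1.0                       # duration (s)
t = np.arange(int(fs * dur)) / fs
fc = 500.0                      # carrier frequency, left ear (Hz)
fm = 8.0                        # modulation rate (Hz); the right-ear carrier is
                                # offset by fm, so the interaural phase difference
                                # cycles once per modulation period

env = 0.5 * (1 - np.cos(2 * np.pi * fm * t))    # raised-cosine AM, 100% depth
left = env * np.sin(2 * np.pi * fc * t)
right = env * np.sin(2 * np.pi * (fc + fm) * t)
stimulus = np.stack([left, right])
print(stimulus.shape)  # (2, 44100)
```

Because a given interaural phase always occurs at the same envelope phase, the design lets one test which part of the modulation cycle (e.g. the rising slope) the binaural system actually samples.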

  2. Sound localization under perturbed binaural hearing.

    NARCIS (Netherlands)

    Wanrooij, M.M. van; Opstal, A.J. van

    2007-01-01

    This paper reports on the acute effects of a monaural plug on directional hearing in the horizontal (azimuth) and vertical (elevation) planes of human listeners. Sound localization behavior was tested with rapid head-orienting responses toward brief high-pass filtered (>3 kHz; HP) and broadband

  3. Effects of incongruent auditory and visual room-related cues on sound externalization

    DEFF Research Database (Denmark)

    Carvajal, Juan Camilo Gil; Santurette, Sébastien; Cubick, Jens

    Sounds presented via headphones are typically perceived inside the head. However, the illusion of a sound source located out in space away from the listener’s head can be generated with binaural headphone-based auralization systems by convolving anechoic sound signals with a binaural room impulse response (BRIR) measured with miniature microphones placed in the listener’s ear canals. Sound externalization of such virtual sounds can be very convincing and robust, but there have been reports that the illusion might break down when the listening environment differs from the room in which the BRIRs were recorded [1,2,3]. This may be due to incongruent auditory cues between the recording and playback room during sound reproduction [2]. Alternatively, an expectation effect caused by the visual impression of the room may affect the position of the perceived auditory image [3]. Here, we systematically…

  4. Interactive jewellery as memory cue : designing a sound locket for individual reminiscence

    NARCIS (Netherlands)

    Niemantsverdriet, K.; Versteeg, M.F.

    2016-01-01

    In this paper we describe the design of Memento: an interactive sound locket for individual reminiscence that triggers a similar sense of intimacy and values as its non-technological predecessor. Jewellery often forms a cue for autobiographical memory. In this work we investigate the role that

  5. Local sleep spindle modulations in relation to specific memory cues

    NARCIS (Netherlands)

    Cox, R.; Hofman, W.F.; de Boer, M.; Talamini, L.M.

    2014-01-01

    Sleep spindles have been connected to memory processes in various ways. In addition, spindles appear to be modulated at the local cortical network level. We investigated whether cueing specific memories during sleep leads to localized spindle modulations in humans. During learning of word-location

  6. Sound Localization Strategies in Three Predators

    DEFF Research Database (Denmark)

    Carr, Catherine E; Christensen-Dalsgaard, Jakob

    2015-01-01

    In this paper, we compare some of the neural strategies for sound localization and encoding interaural time differences (ITDs) in three predatory species of Reptilia: alligators, barn owls and geckos. Birds and crocodilians are sister groups among the extant archosaurs, while geckos are lepidosaurs… Despite the similar organization of their auditory systems, archosaurs and lizards use different strategies for encoding the ITDs that underlie localization of sound in azimuth. Barn owls encode ITD information using a place map, which is composed of neurons serving as labeled lines tuned for preferred spatial locations, while geckos may use a meter strategy or population code composed of broadly sensitive neurons that represent ITD via changes in the firing rate.

  7. Spectral and temporal cues for perception of material and action categories in impacted sound sources

    DEFF Research Database (Denmark)

    Hjortkjær, Jens; McAdams, Stephen

    2016-01-01

    In two experiments, similarity ratings and categorization performance were examined with recorded impact sounds representing three material categories (wood, metal, glass) manipulated by three different categories of action (drop, strike, rattle). Previous research focusing on single impact… correlated with the pattern of confusion in categorization judgments. Listeners tended to confuse materials with similar spectral centroids, and actions with similar temporal centroids and onset densities. To confirm the influence of these different features, spectral cues were removed by applying…

  8. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs.

    Science.gov (United States)

    Donohue, Kelly C; Spencer, Rebecca M C

    2011-06-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests an academic strategy. Such effects have only been demonstrated with spatial learning and cue presentation was isolated to slow wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening and recall was probed following an interval with overnight sleep. Participants encoded the pairs with the sound of "the ocean" from a sound machine. The first group slept with this sound; the second group slept with a different sound ("rain"); and the third group slept with no sound. Sleeping with sound had no impact on subsequent recall. Although a null result, this work provides an important test of the implications of context effects on sleep-dependent memory consolidation.

  9. Continuous Re-Exposure to Environmental Sound Cues During Sleep Does Not Improve Memory for Semantically Unrelated Word Pairs

    OpenAIRE

    Donohue, Kelly C.; Spencer, Rebecca M. C.

    2011-01-01

    Two recent studies illustrated that cues present during encoding can enhance recall if re-presented during sleep. This suggests an academic strategy. Such effects have only been demonstrated with spatial learning and cue presentation was isolated to slow wave sleep (SWS). The goal of this study was to examine whether sounds enhance sleep-dependent consolidation of a semantic task if the sounds are re-presented continuously during sleep. Participants encoded a list of word pairs in the evening...

  10. Food approach conditioning and discrimination learning using sound cues in benthic sharks.

    Science.gov (United States)

    Vila Pouca, Catarina; Brown, Culum

    2018-07-01

    The marine environment is filled with biotic and abiotic sounds. Some of these sounds predict important events that influence fitness while others are unimportant. Individuals can learn specific sound cues and 'soundscapes' and use them for vital activities such as foraging, predator avoidance, communication and orientation. Most research with sounds in elasmobranchs has focused on hearing thresholds and attractiveness to sound sources, but very little is known about their abilities to learn about sounds, especially in benthic species. Here we investigated if juvenile Port Jackson sharks could learn to associate a musical stimulus with a food reward, discriminate between two distinct musical stimuli, and whether individual personality traits were linked to cognitive performance. Five out of eight sharks were successfully conditioned to associate a jazz song with a food reward delivered in a specific corner of the tank. We observed repeatable individual differences in activity and boldness in all eight sharks, but these personality traits were not linked to the learning performance assays we examined. These sharks were later trained in a discrimination task, where they had to distinguish between the same jazz song and a novel classical music song, and swim to opposite corners of the tank according to the stimulus played. The sharks' performance to the jazz stimulus declined to chance levels in the discrimination task. Interestingly, some sharks developed a strong side bias to the right, which in some cases was not the correct side for the jazz stimulus.

  11. Second Sound for Heat Source Localization

    CERN Document Server

    Vennekate, Hannes; Uhrmacher, Michael; Quadt, Arnulf; Grosse-Knetter, Joern

    2011-01-01

    Defects on the surface of superconducting cavities can limit their accelerating gradient by localized heating, which results in a phase transition to the normal conducting state, a quench. A new application, involving Oscillating Superleak Transducers (OSTs) to locate such quench-inducing heat spots on the surface of the cavities, was developed by D. Hartill et al. at Cornell University in 2008. The OSTs enable the detection of heat transfer via second sound in superfluid helium. This thesis presents new results on the analysis of their signal. Its behavior has been studied under different conditions in setups at the University of Göttingen and at CERN. New approaches for automated signal processing have been developed. Furthermore, a first test setup for a single-cell Superconducting Proton Linac (SPL) cavity has been prepared. Recommendations for better signal retrieval during its operation are presented.
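
    Locating a quench with OSTs is a time-of-flight problem: each sensor's second-sound arrival time corresponds to a distance from the heat spot, and the spot is the point where those distances agree. A hedged 2D sketch; the sensor layout, grid resolution, and the roughly 20 m/s second-sound velocity in He II near 1.9 K are assumptions for illustration:

```python
import math

V_SECOND_SOUND = 20.0  # m/s, approximate second-sound velocity in He II near 1.9 K

def locate_heat_spot(sensors, arrival_times, grid=0.002, extent=0.2):
    """Brute-force 2D search for the source point whose predicted
    times of flight best match the measured OST arrival times."""
    best, best_err = None, float("inf")
    steps = int(extent / grid)
    for ix in range(-steps, steps + 1):
        for iy in range(-steps, steps + 1):
            x, y = ix * grid, iy * grid
            err = 0.0
            for (sx, sy), t in zip(sensors, arrival_times):
                predicted = math.hypot(x - sx, y - sy) / V_SECOND_SOUND
                err += (predicted - t) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best
```

    In practice a least-squares fit over three or more OSTs would replace the brute-force grid search; the sketch just makes the geometry explicit.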

  12. Contralateral routing of signals disrupts monaural level and spectral cues to sound localisation on the horizontal plane.

    Science.gov (United States)

    Pedley, Adam J; Kitterick, Pádraig T

    2017-09-01

    Contra-lateral routing of signals (CROS) devices re-route sound between the deaf and hearing ears of unilaterally-deaf individuals. This rerouting would be expected to disrupt access to monaural level cues that can support monaural localisation in the horizontal plane. However, such a detrimental effect has not been confirmed by clinical studies of CROS use. The present study aimed to exercise strict experimental control over the availability of monaural cues to localisation in the horizontal plane and the fitting of the CROS device to assess whether signal routing can impair the ability to locate sources of sound and, if so, whether CROS selectively disrupts monaural level or spectral cues to horizontal location, or both. Unilateral deafness and CROS device use were simulated in twelve normal hearing participants. Monaural recordings of broadband white noise presented from three spatial locations (-60°, 0°, and +60°) were made in the ear canal of a model listener using a probe microphone with and without a CROS device. The recordings were presented to participants via an insert earphone placed in their right ear. The recordings were processed to disrupt either monaural level or spectral cues to horizontal sound location by roving presentation level or the energy across adjacent frequency bands, respectively. Localisation ability was assessed using a three-alternative forced-choice spatial discrimination task. Participants localised above chance levels in all conditions. Spatial discrimination accuracy was poorer when participants only had access to monaural spectral cues compared to when monaural level cues were available. CROS use impaired localisation significantly regardless of whether level or spectral cues were available. For both cues, signal re-routing had a detrimental effect on the ability to localise sounds originating from the side of the deaf ear (-60°). CROS use also impaired the ability to use level cues to localise sounds originating from

  13. Sound source localization and segregation with internally coupled ears

    DEFF Research Database (Denmark)

    Bee, Mark A; Christensen-Dalsgaard, Jakob

    2016-01-01

    …to their correct sources (sound source segregation). Here, we review anatomical, biophysical, neurophysiological, and behavioral studies aimed at identifying how the internally coupled ears of frogs contribute to sound source localization and segregation. Our review focuses on treefrogs in the genus Hyla, as they are the most thoroughly studied frogs in terms of sound source localization and segregation. They also represent promising model systems for future work aimed at understanding better how internally coupled ears contribute to sound source localization and segregation. We conclude our review by enumerating…

  14. Looking at the ventriloquist: visual outcome of eye movements calibrates sound localization.

    Directory of Open Access Journals (Sweden)

    Daniel S Pages

    Full Text Available A general problem in learning is how the brain determines what lesson to learn (and what lessons not to learn. For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound. This is an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback that is obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6 degree visual-auditory mismatch. In contrast when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony/simultaneity.

  15. Sound localization in the presence of one or two distracters

    NARCIS (Netherlands)

    Langendijk, E.H.A.; Kistler, D.J.; Wightman, F.L

    2001-01-01

    Localizing a target sound can be a challenge when one or more distracter sounds are present at the same time. This study measured the effect of distracter position on target localization for one distracter (17 positions) and two distracters (21 combinations of 17 positions). Listeners were

  16. Reef Sound as an Orientation Cue for Shoreward Migration by Pueruli of the Rock Lobster, Jasus edwardsii.

    Science.gov (United States)

    Hinojosa, Ivan A; Green, Bridget S; Gardner, Caleb; Hesse, Jan; Stanley, Jenni A; Jeffs, Andrew G

    2016-01-01

    The post-larval or puerulus stage of spiny, or rock, lobsters (Palinuridae) swims many kilometres from open oceans into coastal waters where it subsequently settles. The orientation cues used by the puerulus for this migration are unclear, but are presumed to be critical to finding a place to settle. Understanding this process may help explain the biological processes of dispersal and settlement, and be useful for developing realistic dispersal models. In this study, we examined the use of reef sound as an orientation cue by the puerulus stage of the southern rock lobster, Jasus edwardsii. Experiments were conducted using in situ binary choice chambers together with replayed recordings of underwater reef sound. The experiment was conducted in a sandy lagoon under varying wind conditions. A significant proportion of pueruli (69%) swam towards the reef sound in calm wind conditions. However, in windy conditions (>25 m s-1) the orientation behaviour appeared to be less consistent, and the inclusion of these results reduced the overall proportion of pueruli that swam towards the reef sound (59.3%). These results resolve previous speculation that underwater reef sound is used as an orientation cue in the shoreward migration of the puerulus of spiny lobsters, and suggest that sea surface winds may moderate the ability of migrating pueruli to use this cue to locate coastal reef habitat in which to settle. Underwater sound may increase the chance of successful settlement and survival of this valuable species.

  17. Local figure-ground cues are valid for natural images.

    Science.gov (United States)

    Fowlkes, Charless C; Martin, David R; Malik, Jitendra

    2007-06-08

    Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context is limited.

  18. Sound localization with head movement: implications for 3-d audio displays.

    Directory of Open Access Journals (Sweden)

    Ken Ian McAnally

    2014-08-01

    Full Text Available Previous studies have shown that the accuracy of sound localization is improved if listeners are allowed to move their heads during signal presentation. This study describes the function relating localization accuracy to the extent of head movement in azimuth. Sounds that are difficult to localize were presented in the free field from sources at a wide range of azimuths and elevations. Sounds remained active until the participants’ heads had rotated through windows 2°, 4°, 8°, 16°, 32°, or 64° of azimuth in width. Error in determining sound-source elevation and the rate of front/back confusion were found to decrease with increases in azimuth window width. Error in determining sound-source lateral angle was not found to vary with azimuth window width. Implications for 3-d audio displays: The utility of a 3-d audio display for imparting spatial information is likely to be improved if operators are able to move their heads during signal presentation. Head movement may compensate in part for a paucity of spectral cues to sound-source location resulting from limitations in either the audio signals presented or the directional filters (i.e., head-related transfer functions) used to generate a display. However, head movements of a moderate size (i.e., through around 32° of azimuth) may be required to ensure that spatial information is conveyed with high accuracy.

  19. Sound Localization in Patients With Congenital Unilateral Conductive Hearing Loss With a Transcutaneous Bone Conduction Implant.

    Science.gov (United States)

    Vyskocil, Erich; Liepins, Rudolfs; Kaider, Alexandra; Blineder, Michaela; Hamzavi, Sasan

    2017-03-01

    There is no consensus regarding the benefit of implantable hearing aids in congenital unilateral conductive hearing loss (UCHL). This study aimed to measure sound source localization performance in patients with congenital UCHL and contralateral normal hearing who received a new bone conduction implant. Evaluation of within-subject performance differences for sound source localization in a horizontal plane. Tertiary referral center. Five patients with atresia of the external auditory canal and contralateral normal hearing implanted with a transcutaneous bone conduction implant at the Medical University of Vienna were tested. Activated/deactivated implant. Sound source localization test; localization performance quantified using the root mean square (RMS) error. Sound source localization ability was highly variable among individual subjects, with RMS errors ranging from 21 to 40 degrees. Horizontal plane localization performance in aided conditions showed statistically significant improvement compared with the unaided conditions, with RMS errors ranging from 17 to 27 degrees. The mean RMS error decreased by a factor of 0.71 (p …) with the bone conduction implant. Some patients with congenital UCHL might be capable of developing improved horizontal plane localization abilities with the binaural cues provided by this device.
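
    The root mean square (RMS) error quoted in this record condenses a block of localization trials into one number: the square root of the mean squared deviation between response and target angle. A minimal sketch:

```python
import math

def rms_error(targets_deg, responses_deg):
    """RMS localization error in degrees across a list of trials."""
    errors = [r - t for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(e * e for e in errors) / len(errors))
```

    This signed-difference form is adequate for frontal-plane testing; a full 360° response range would additionally need angular wrap-around handling.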

  20. Spherical loudspeaker array for local active control of sound.

    Science.gov (United States)

    Rafaely, Boaz

    2009-05-01

    Active control of sound has been employed to reduce noise levels around a listener's head using destructive interference from noise-canceling sound sources. Recently, spherical loudspeaker arrays have been studied as multiple-channel sound sources, capable of generating sound fields with high complexity. In this paper, the potential use of a spherical loudspeaker array for local active control of sound is investigated. A theoretical analysis of the primary and secondary sound fields around a spherical sound source reveals that the natural quiet zones for the spherical source have a shell shape. Using numerical optimization, quiet zones with other shapes are designed, showing potential for quiet zones with extents that are significantly larger than the well-known limit of a tenth of a wavelength for monopole sources. The paper presents several simulation examples showing quiet zones in various configurations.

  1. Learning to Localize Sound with a Lizard Ear Model

    DEFF Research Database (Denmark)

    Shaikh, Danish; Hallam, John; Christensen-Dalsgaard, Jakob

    The peripheral auditory system of a lizard is strongly directional in the azimuth plane due to the acoustical coupling of the animal's two eardrums. This feature by itself is insufficient to accurately localize sound as the extracted directional information cannot be directly mapped to the sound...

  2. Ambient Sound-Based Collaborative Localization of Indeterministic Devices

    NARCIS (Netherlands)

    Kamminga, Jacob Wilhelm; Le Viet Duc, L Duc; Havinga, Paul J.M.

    2016-01-01

    Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and

  3. Local spectral anisotropy is a valid cue for figure–ground organization in natural scenes

    OpenAIRE

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-01-01

    An important step in understanding visual scenes is their organization into different perceptual objects, which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (further away from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which…

  4. Development of Sound Localization Strategies in Children with Bilateral Cochlear Implants.

    Directory of Open Access Journals (Sweden)

    Yi Zheng

    Full Text Available Localizing sounds in our environment is one of the fundamental perceptual abilities that enable humans to communicate, and to remain safe. Because the acoustic cues necessary for computing source locations consist of differences between the two ears in signal intensity and arrival time, sound localization is fairly poor when a single ear is available. In adults who become deaf and are fitted with cochlear implants (CIs), sound localization is known to improve when bilateral CIs (BiCIs) are used compared to when a single CI is used. The aim of the present study was to investigate the emergence of spatial hearing sensitivity in children who use BiCIs, with a particular focus on the development of behavioral localization patterns when stimuli are presented in free-field horizontal acoustic space. A new analysis was implemented to quantify patterns observed in children for mapping acoustic space to a spatially relevant perceptual representation. Children with normal hearing were found to distribute their responses in a manner that demonstrated high spatial sensitivity. In contrast, children with BiCIs tended to classify sound source locations to the left and right; with increased bilateral hearing experience, they developed a perceptual map of space that was better aligned with the acoustic space. The results indicate experience-dependent refinement of spatial hearing skills in children with CIs. Localization strategies appear to undergo transitions from sound source categorization strategies to more fine-grained location identification strategies. This may provide evidence for neural plasticity, with implications for training of spatial hearing ability in CI users.

  5. Emotional pictures and sounds: A review of multimodal interactions of emotion cues in multiple domains

    Directory of Open Access Journals (Sweden)

    Antje B M Gerdes

    2014-12-01

    Full Text Available In everyday life, multiple sensory channels jointly trigger emotional experiences and one channel may alter processing in another channel. For example, seeing an emotional facial expression and hearing the voice’s emotional tone will jointly create the emotional experience. This example, where auditory and visual input is related to social communication, has gained considerable attention by researchers. However, interactions of visual and auditory emotional information are not limited to social communication but can extend to much broader contexts including human, animal, and environmental cues. In this article, we review current research on audiovisual emotion processing beyond face-voice stimuli to develop a broader perspective on multimodal interactions in emotion processing. We argue that current concepts of multimodality should be extended in considering an ecologically valid variety of stimuli in audiovisual emotion processing. Therefore, we provide an overview of studies in which emotional sounds and interactions with complex pictures of scenes were investigated. In addition to behavioral studies, we focus on neuroimaging, electro- and peripheral-physiological findings. Furthermore, we integrate these findings and identify similarities or differences. We conclude with suggestions for future research.

  6. A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early-blind individuals.

    Directory of Open Access Journals (Sweden)

    Frédéric Gougoux

    2005-02-01

    Full Text Available Blind individuals often demonstrate enhanced nonvisual perceptual abilities. However, the neural substrate that underlies this improved performance remains to be fully understood. An earlier behavioral study demonstrated that some early-blind people localize sounds more accurately than sighted controls using monaural cues. In order to investigate the neural basis of these behavioral differences in humans, we carried out functional imaging studies using positron emission tomography and a speaker array that permitted pseudo-free-field presentations within the scanner. During binaural sound localization, a sighted control group showed decreased cerebral blood flow in the occipital lobe, which was not seen in early-blind individuals. During monaural sound localization (one ear plugged), the subgroup of early-blind subjects who were behaviorally superior at sound localization displayed two activation foci in the occipital cortex. This effect was not seen in blind persons who did not have superior monaural sound localization abilities, nor in sighted individuals. The degree of activation of one of these foci was strongly correlated with sound localization accuracy across the entire group of blind subjects. The results show that those blind persons who perform better than sighted persons recruit occipital areas to carry out auditory localization under monaural conditions. We therefore conclude that computations carried out in the occipital cortex specifically underlie the enhanced capacity to use monaural cues. Our findings shed light not only on intermodal compensatory mechanisms, but also on individual differences in these mechanisms and on inhibitory patterns that differ between sighted individuals and those deprived of vision early in life.

  7. A dominance hierarchy of auditory spatial cues in barn owls.

    Directory of Open Access Journals (Sweden)

    Ilana B Witten

    2010-04-01

    Full Text Available Barn owls integrate spatial information across frequency channels to localize sounds in space. We presented barn owls with synchronous sounds that contained different bands of frequencies (3-5 kHz and 7-9 kHz) from different locations in space. When the owls were confronted with the conflicting localization cues from two synchronous sounds of equal level, their orienting responses were dominated by one of the sounds: they oriented toward the location of the low frequency sound when the sources were separated in azimuth; in contrast, they oriented toward the location of the high frequency sound when the sources were separated in elevation. We identified neural correlates of this behavioral effect in the optic tectum (OT, superior colliculus in mammals), which contains a map of auditory space and is involved in generating orienting movements to sounds. We found that low frequency cues dominate the representation of sound azimuth in the OT space map, whereas high frequency cues dominate the representation of sound elevation. We argue that the dominance hierarchy of localization cues reflects several factors: (1) the relative amplitude of the sound providing the cue, (2) the resolution with which the auditory system measures the value of a cue, and (3) the spatial ambiguity in interpreting the cue. These same factors may contribute to the relative weighting of sound localization cues in other species, including humans.

  8. The natural history of sound localization in mammals – a story of neuronal inhibition

    Directory of Open Access Journals (Sweden)

    Benedikt eGrothe

    2014-10-01

    Full Text Available Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  9. The natural history of sound localization in mammals--a story of neuronal inhibition.

    Science.gov (United States)

    Grothe, Benedikt; Pecka, Michael

    2014-01-01

    Our concepts of sound localization in the vertebrate brain are widely based on the general assumption that both the ability to detect air-borne sounds and the neuronal processing are homologous in archosaurs (present day crocodiles and birds) and mammals. Yet studies repeatedly report conflicting results on the neuronal circuits and mechanisms, in particular the role of inhibition, as well as the coding strategies between avian and mammalian model systems. Here we argue that mammalian and avian phylogeny of spatial hearing is characterized by a convergent evolution of hearing air-borne sounds rather than by homology. In particular, the different evolutionary origins of tympanic ears and the different availability of binaural cues in early mammals and archosaurs imposed distinct constraints on the respective binaural processing mechanisms. The role of synaptic inhibition in generating binaural spatial sensitivity in mammals is highlighted, as it reveals a unifying principle of mammalian circuit design for encoding sound position. Together, we combine evolutionary, anatomical and physiological arguments for making a clear distinction between mammalian processing mechanisms and coding strategies and those of archosaurs. We emphasize that a consideration of the convergent nature of neuronal mechanisms will significantly increase the explanatory power of studies of spatial processing in both mammals and birds.

  10. The Effect of Microphone Placement on Interaural Level Differences and Sound Localization Across the Horizontal Plane in Bilateral Cochlear Implant Users.

    Science.gov (United States)

    Jones, Heath G; Kan, Alan; Litovsky, Ruth Y

    2016-01-01

    This study examined the effect of microphone placement on the interaural level differences (ILDs) available to bilateral cochlear implant (BiCI) users, and the subsequent effects on horizontal-plane sound localization. Virtual acoustic stimuli for sound localization testing were created individually for eight BiCI users by making acoustic transfer function measurements for microphones placed in the ear (ITE), behind the ear (BTE), and on the shoulders (SHD). The ILDs across source locations were calculated for each placement to analyze their effect on sound localization performance. Sound localization was tested using a repeated-measures, within-participant design for the three microphone placements. The ITE microphone placement provided significantly larger ILDs compared to BTE and SHD placements, which correlated with overall localization errors. However, differences in localization errors across the microphone conditions were small. The BTE microphones worn by many BiCI users in everyday life do not capture the full range of acoustic ILDs available, and also reduce the change in cue magnitudes for sound sources across the horizontal plane. Acute testing with an ITE placement reduced sound localization errors along the horizontal plane compared to the other placements in some patients. Larger improvements may be observed if patients had more experience with the new ILD cues provided by an ITE placement.

  11. How to generate a sound-localization map in fish

    Science.gov (United States)

    van Hemmen, J. Leo

    2015-03-01

    How sound localization is represented in the fish brain is a research field largely unbiased by theoretical analysis and computational modeling. Yet, there is experimental evidence that the axes of particle acceleration due to underwater sound are represented through a map in the midbrain of fish, e.g., in the torus semicircularis of the rainbow trout (Wubbels et al. 1997). How does such a map arise? Fish perceive pressure gradients by their three otolithic organs, each of which comprises a dense, calcareous stone that is bathed in endolymph and attached to a sensory epithelium. In rainbow trout, the sensory epithelia of the left and right utricle lie in the horizontal plane and consist of hair cells with equally distributed preferred orientations. We model the neuronal response of this system on the basis of Schuijf's vector detection hypothesis (Schuijf et al. 1975) and introduce a temporal spike code of sound direction, where optimality of hair cell orientation θj with respect to the acceleration direction θs is mapped onto spike phases via a von-Mises distribution. By learning to tune in to the earliest synchronized activity, nerve cells in the midbrain generate a map under the supervision of a locally excitatory, yet globally inhibitory visual teacher. Work done in collaboration with Daniel Begovic. Partially supported by BCCN - Munich.
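    The temporal spike code in this record can be sketched as follows: the better a hair cell's preferred orientation θj matches the acceleration direction θs, the earlier it fires, with the match scored by a von Mises tuning profile. The concentration κ and the linear phase mapping below are illustrative assumptions, not the paper's parameters:

```python
import math

def von_mises(theta, mu, kappa):
    """Unnormalized von Mises density exp(kappa * cos(theta - mu))."""
    return math.exp(kappa * math.cos(theta - mu))

def spike_phase(theta_j, theta_s, kappa=4.0, max_phase=math.pi):
    """Map orientation match onto a spike phase: a perfect match
    (theta_j == theta_s) fires at phase 0, poor matches fire later."""
    tuning = von_mises(theta_j, theta_s, kappa) / von_mises(0.0, 0.0, kappa)
    return max_phase * (1.0 - tuning)

theta_s = math.radians(45)                      # acceleration direction
orientations = [math.radians(d) for d in range(0, 180, 20)]
phases = [spike_phase(t, theta_s) for t in orientations]
best = orientations[phases.index(min(phases))]
print(f"earliest-spiking orientation: {math.degrees(best):.0f} deg")
```

    A downstream neuron that learns to "tune in to the earliest synchronized activity," as the abstract puts it, would thereby read out the best-matching orientation and hence the sound direction.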

  12. Assessing implicit odor localization in humans using a cross-modal spatial cueing paradigm.

    Science.gov (United States)

    Moessnang, Carolin; Finkelmeyer, Andreas; Vossen, Alexandra; Schneider, Frank; Habel, Ute

    2011-01-01

    Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal. A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment. No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research. The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans. © 2011 Moessnang et al.

  13. Assessing implicit odor localization in humans using a cross-modal spatial cueing paradigm.

    Directory of Open Access Journals (Sweden)

    Carolin Moessnang

    Full Text Available Navigation based on chemosensory information is one of the most important skills in the animal kingdom. Studies on odor localization suggest that humans have lost this ability. However, the experimental approaches used so far were limited to explicit judgements, which might ignore a residual ability for directional smelling on an implicit level without conscious appraisal. A novel cueing paradigm was developed in order to determine whether an implicit ability for directional smelling exists. Participants performed a visual two-alternative forced choice task in which the target was preceded either by a side-congruent or a side-incongruent olfactory spatial cue. An explicit odor localization task was implemented in a second experiment. No effect of cue congruency on mean reaction times could be found. However, a time by condition interaction emerged, with significantly slower responses to congruently compared to incongruently cued targets at the beginning of the experiment. This cueing effect gradually disappeared throughout the course of the experiment. In addition, participants performed at chance level in the explicit odor localization task, thus confirming the results of previous research. The implicit cueing task suggests the existence of spatial information processing in the olfactory system. Response slowing after a side-congruent olfactory cue is interpreted as a cross-modal attentional interference effect. In addition, habituation might have led to a gradual disappearance of the cueing effect. It is concluded that under immobile conditions with passive monorhinal stimulation, humans are unable to explicitly determine the location of a pure odorant. Implicitly, however, odor localization seems to exert an influence on human behaviour. To our knowledge, these data are the first to show implicit effects of odor localization on overt human behaviour and thus support the hypothesis of residual directional smelling in humans.

  14. Sound arithmetic: auditory cues in the rehabilitation of impaired fact retrieval.

    Science.gov (United States)

    Domahs, Frank; Zamarian, Laura; Delazer, Margarete

    2008-04-01

    The present single case study describes the rehabilitation of an acquired impairment of multiplication fact retrieval. In addition to a conventional drill approach, one half of the trained problems was preceded by auditory cues while the other half was not. After extensive repetition, non-specific improvements could be observed for all trained problems (e.g., 3 * 7) as well as for their non-trained complementary problems (e.g., 7 * 3). Beyond this general improvement, specific therapy effects were found for problems trained with auditory cues. These specific effects were attributed to an involvement of implicit memory systems and/or attentional processes during training. Thus, the present results demonstrate that cues in the training of arithmetic facts do not have to be visual to be effective.

  15. Gay- and Lesbian-Sounding Auditory Cues Elicit Stereotyping and Discrimination.

    Science.gov (United States)

    Fasoli, Fabio; Maass, Anne; Paladino, Maria Paola; Sulpizio, Simone

    2017-07-01

    The growing body of literature on the recognition of sexual orientation from voice ("auditory gaydar") is silent on the cognitive and social consequences of having a gay-/lesbian- versus heterosexual-sounding voice. We investigated this issue in four studies (overall N = 276), conducted in Italian, in which heterosexual listeners were exposed to single-sentence voice samples of gay/lesbian and heterosexual speakers. In all four studies, listeners were found to make gender-typical inferences about traits and preferences of heterosexual speakers, but gender-atypical inferences about those of gay or lesbian speakers. Behavioral intention measures showed that listeners considered lesbian and gay speakers as less suitable for a leadership position, and male (but not female) listeners took distance from gay speakers. Together, this research demonstrates that having a gay-/lesbian- rather than heterosexual-sounding voice has tangible consequences for stereotyping and discrimination.

  16. Effects of Active and Passive Hearing Protection Devices on Sound Source Localization, Speech Recognition, and Tone Detection.

    Directory of Open Access Journals (Sweden)

    Andrew D Brown

    Full Text Available Hearing protection devices (HPDs) such as earplugs offer to mitigate noise exposure and reduce the incidence of hearing loss among persons frequently exposed to intense sound. However, distortions of spatial acoustic information and reduced audibility of low-intensity sounds caused by many existing HPDs can make their use untenable in high-risk (e.g., military or law enforcement) environments where auditory situational awareness is imperative. Here we assessed (1) sound source localization accuracy using a head-turning paradigm, (2) speech-in-noise recognition using a modified version of the QuickSIN test, and (3) tone detection thresholds using a two-alternative forced-choice task. Subjects were 10 young normal-hearing males. Four different HPDs were tested (two active, two passive), including two new and previously untested devices. Relative to unoccluded (control) performance, all tested HPDs significantly degraded performance across tasks, although one active HPD slightly improved high-frequency tone detection thresholds and did not degrade speech recognition. Behavioral data were examined with respect to head-related transfer functions measured using a binaural manikin with and without tested HPDs in place. Data reinforce previous reports that HPDs significantly compromise a variety of auditory perceptual facilities, particularly sound localization due to distortions of high-frequency spectral cues that are important for the avoidance of front-back confusions.

  17. Spatial resolution limits for the localization of noise sources using direct sound mapping

    DEFF Research Database (Denmark)

    Comesana, D. Fernandez; Holland, K. R.; Fernandez Grande, Efren

    2016-01-01

    One of the main challenges arising from noise and vibration problems is how to identify the areas of a device, machine or structure that produce significant acoustic excitation, i.e. the localization of main noise sources. The direct visualization of sound, in particular sound intensity, has extensively been used for many years to locate sound sources. However, it is not yet well defined when two sources should be regarded as resolved by means of direct sound mapping. This paper derives the limits of the direct representation of sound pressure, particle velocity and sound intensity by exploring the relationship between spatial resolution, noise level and geometry. The proposed expressions are validated via simulations and experiments. It is shown that particle velocity mapping yields better results for identifying closely spaced sound sources than sound pressure or sound intensity, especially ...

  18. Heart Sound Localization and Reduction in Tracheal Sounds by Gabor Time-Frequency Masking

    OpenAIRE

    SAATCI, Esra; Akan, Aydın

    2018-01-01

    Background and aim: Respiratory sounds, i.e. tracheal and lung sounds, have been of great interest due to their diagnostic values as well as the potential of their use in the estimation of the respiratory dynamics (mainly airflow). Thus the aim of the study is to present a new method to filter the heart sound interference from the tracheal sounds. Materials and methods: Tracheal sounds and airflow signals were collected by using an accelerometer from 10 healthy subjects. Tracheal sounds were then pr...

  19. Estimating 3D tilt from local image cues in natural scenes

    OpenAIRE

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then ana...

  20. Enhanced Soundings for Local Coupling Studies Field Campaign Report

    Energy Technology Data Exchange (ETDEWEB)

    Ferguson, Craig R [University at Albany, State University of New York; Santanello, Joseph A [NASA Goddard Space Flight Center (GSFC), Greenbelt, MD (United States); Gentine, Pierre [Columbia Univ., New York, NY (United States)

    2016-04-01

    This document presents initial analyses of the enhanced radiosonde observations obtained during the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility Enhanced Soundings for Local Coupling Studies Field Campaign (ESLCS), which took place at the ARM Southern Great Plains (SGP) Central Facility (CF) from June 15 to August 31, 2015. During ESLCS, routine 4-times-daily radiosonde measurements at the ARM-SGP CF were augmented on 12 days (June 18 and 29; July 11, 14, 19, and 26; August 15, 16, 21, 25, 26, and 27) with daytime 1-hourly radiosondes and 10-minute ‘trailer’ radiosondes every 3 hours. These 12 intensive operational period (IOP) days were selected on the basis of prior-day qualitative forecasts of potential land-atmosphere coupling strength. The campaign captured 2 dry soil convection advantage days (June 29 and July 14) and 10 atmospherically controlled days. Other noteworthy IOP events include: 2 soil dry-down sequences (July 11-14-19 and August 21-25-26), a 2-day clear-sky case (August 15-16), and the passing of Tropical Storm Bill (June 18). To date, the ESLCS data set constitutes the highest-temporal-resolution sampling of the evolution of the daytime planetary boundary layer (PBL) using radiosondes at the ARM-SGP. The data set is expected to contribute to: 1) improved understanding and modeling of the diurnal evolution of the PBL, particularly with regard to the role of local soil wetness, and (2) new insights into the appropriateness of current ARM-SGP CF thermodynamic sampling strategies.

  1. Local spectral anisotropy is a valid cue for figure-ground organization in natural scenes.

    Science.gov (United States)

    Ramenahalli, Sudarshan; Mihalas, Stefan; Niebur, Ernst

    2014-10-01

    An important step in the process of understanding visual scenes is its organization in different perceptual objects which requires figure-ground segregation. The determination of which side of an occlusion boundary is figure (closer to the observer) and which is ground (further away from the observer) is made through a combination of global cues, like convexity, and local cues, like T-junctions. We here focus on a novel set of local cues in the intensity patterns along occlusion boundaries which we show to differ between figure and ground. Image patches are extracted from natural scenes from two standard image sets along the boundaries of objects and spectral analysis is performed separately on figure and ground. On the figure side, oriented spectral power orthogonal to the occlusion boundary significantly exceeds that parallel to the boundary. This "spectral anisotropy" is present only for higher spatial frequencies, and absent on the ground side. The difference in spectral anisotropy between the two sides of an occlusion border predicts which is the figure and which the background with an accuracy exceeding 60% per patch. Spectral anisotropy of close-by locations along the boundary co-varies but is largely independent over larger distances, which allows results from different image regions to be combined. Given the low cost of this strictly local computation, we propose that spectral anisotropy along occlusion boundaries is a valuable cue for figure-ground segregation. A database of images and extracted patches labeled for figure and ground is made freely available. Copyright © 2014 Elsevier Ltd. All rights reserved.
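    The record's "spectral anisotropy" compares high-spatial-frequency power orthogonal versus parallel to the occlusion boundary. As a hedged toy proxy (not the paper's Fourier-based measure), squared finite differences across versus along a vertical boundary capture the same contrast:

```python
def anisotropy(patch):
    """Toy proxy for spectral anisotropy: high-frequency power measured
    as summed squared finite differences taken orthogonal to (across
    columns) vs. parallel with (down rows) a vertical occlusion boundary."""
    rows, cols = len(patch), len(patch[0])
    ortho = sum((patch[r][c + 1] - patch[r][c]) ** 2
                for r in range(rows) for c in range(cols - 1))
    para = sum((patch[r + 1][c] - patch[r][c]) ** 2
               for r in range(rows - 1) for c in range(cols))
    return ortho / max(para, 1e-12)

# Figure-like patch: texture varies across the boundary (vertical stripes).
figure = [[(c % 2) * 1.0 for c in range(8)] for r in range(8)]
# Ground-like patch: smooth in both directions.
ground = [[0.5 for c in range(8)] for r in range(8)]

print(anisotropy(figure) > anisotropy(ground))   # -> True
```

    A ratio well above 1 on one side of a boundary but not the other would vote for that side being figure, in the spirit of the per-patch classification reported above.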

  2. Sound

    CERN Document Server

    Robertson, William C

    2003-01-01

    Muddled about what makes music? Stuck on the study of harmonics? Dumbfounded by how sound gets around? Now you no longer have to struggle to teach concepts you really don't grasp yourself. Sound takes an intentionally light touch to help out all those adults: science teachers, parents wanting to help with homework, home-schoolers seeking the necessary scientific background to teach middle school physics with confidence. The book introduces sound waves and uses that model to explain sound-related occurrences. Starting with the basics of what causes sound and how it travels, you'll learn how musical instruments work, how sound waves add and subtract, how the human ear works, and even why you can sound like a Munchkin when you inhale helium. Sound is the fourth book in the award-winning Stop Faking It! Series, published by NSTA Press. Like the other popular volumes, it is written by irreverent educator Bill Robertson, who offers this Sound recommendation: One of the coolest activities is whacking a spinning metal rod...

  3. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
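    A much simpler baseline than the synchrony-pattern model in this record, useful for intuition, is estimating the interaural time difference by cross-correlating the two monaural signals. The source below is a synthetic sum of sinusoids with an assumed 12-sample delay; a real listener faces unknown source spectra, which is precisely the harder problem the model addresses:

```python
import math

FS = 48000.0
true_itd = 12                       # interaural delay in samples (~250 us)

def src(t):
    """Deterministic broadband-ish stand-in for a sound source."""
    return sum(math.sin(2 * math.pi * f * t / FS + p)
               for f, p in [(400.0, 0.0), (900.0, 1.3), (1700.0, 2.1)])

n = 4096
left  = [src(t) for t in range(n)]
right = [src(t - true_itd) for t in range(n)]   # right ear lags

def xcorr(lag):
    """Correlation of left with right shifted by `lag` samples."""
    lo, hi = max(0, -lag), min(n, n - lag)
    return sum(left[t] * right[t + lag] for t in range(lo, hi))

est = max(range(-40, 41), key=xcorr)
print(f"estimated ITD: {est} samples ({est / FS * 1e6:.0f} us)")
```

    The cross-correlation peak recovers the delay here, but it mixes source structure with spatial structure; mapping spatialized sounds onto source-independent synchrony patterns, as the model does, is what separates the two.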

  4. Perceptual representation and effectiveness of local figure-ground cues in natural contours.

    Science.gov (United States)

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure-ground segregation. Although previous studies have reported local contour features that evoke figure-ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours, and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes, including those composed of contradictory factors that induce figure in opposite directions. This sampling from natural contours is advantageous in order to randomly pick up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that the combinations of convexity, closure, and symmetry contribute to perceptual similarity, and thus that they are perceptual quantities. Second, we examined whether the three features contribute to local figure-ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure-ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in the local figure-ground perception with natural contours when the other cues coexist with equal probability, including contradictory cases.

  5. Perceptual representation and effectiveness of local figure–ground cues in natural contours

    Science.gov (United States)

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure–ground segregation. Although previous studies have reported local contour features that evoke figure–ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours, and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes, including those composed of contradictory factors that induce figure in opposite directions. This sampling from natural contours is advantageous in order to randomly pick up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that the combinations of convexity, closure, and symmetry contribute to perceptual similarity, and thus that they are perceptual quantities. Second, we examined whether the three features contribute to local figure–ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure–ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in the local figure–ground perception with natural contours when the other cues coexist with equal probability, including contradictory cases. PMID:26579057

  6. Perceptual Representation and Effectiveness of Local Figure-Ground Cues in Natural Contours

    Directory of Open Access Journals (Sweden)

    Ko eSakai

    2015-11-01

    Full Text Available A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure-ground segregation. Although previous studies have reported local contour features that evoke figure-ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural contour shapes. We performed similarity tests between local contours, and examined the contribution of the contour features to the perceptual similarities between the contours. The local contours were sampled from natural contours so that their distribution was uniform in the space composed of the three contour features. This sampling ensured the equal appearance frequency of the factors and a wide variety of contour shapes, including those composed of contradictory factors that induce figure in opposite directions. This sampling from natural contours is advantageous in order to randomly pick up a variety of contours that satisfy a wide range of cue combinations. Multidimensional scaling analyses showed that the combinations of convexity, closure, and symmetry contribute to perceptual similarity, and thus that they are perceptual quantities. Second, we examined whether the three features contribute to local figure-ground perception. We performed psychophysical experiments to judge the direction of the figure along the local contours, and examined the contribution of the features to the figure-ground judgment. Multiple linear regression analyses showed that closure was a significant factor, but that convexity and symmetry were not. These results indicate that closure is dominant in the local figure-ground perception with natural contours when the other cues coexist with equal probability, including contradictory cases.

  7. A SOUND SOURCE LOCALIZATION TECHNIQUE TO SUPPORT SEARCH AND RESCUE IN LOUD NOISE ENVIRONMENTS

    Science.gov (United States)

    Yoshinaga, Hiroshi; Mizutani, Koichi; Wakatsuki, Naoto

    At some sites of earthquakes and other disasters, rescuers search for people buried under rubble by listening for the sounds which they make. Thus developing a technique to localize sound sources amidst loud noise will support such search and rescue operations. In this paper, we discuss an experiment performed to test an array signal processing technique which searches for unperceivable sound in loud noise environments. Two speakers simultaneously played a noise of a generator and a voice decreased by 20 dB (= 1/100 of power) from the generator noise at an outdoor space where cicadas were making noise. The sound signal was received by a horizontally set linear microphone array 1.05 m in length and consisting of 15 microphones. The direction and the distance of the voice were computed and the sound of the voice was extracted and played back as an audible sound by array signal processing.
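    The array processing in this record can be illustrated with a basic delay-and-sum beamformer scanned across candidate directions. The array geometry matches the paper's 15-microphone, 1.05 m line (0.075 m spacing); the source direction, tone frequency, and steering grid below are invented for the demo, and the real system additionally extracts the weak signal from noise:

```python
import math

C, F, FS = 343.0, 1000.0, 16000.0   # speed of sound, tone freq, sample rate
N_MICS, SPACING = 15, 0.075         # 15 mics spanning 1.05 m
mic_x = [m * SPACING for m in range(N_MICS)]
true_az = math.radians(20)          # hypothetical source azimuth
times = [k / FS for k in range(256)]

def received(m, t, az):
    """Far-field plane wave: per-mic delay depends on sin(azimuth)."""
    return math.sin(2 * math.pi * F * (t - mic_x[m] * math.sin(az) / C))

def das_power(az_cand):
    """Delay-and-sum output power when steering toward az_cand:
    advance each mic by its steering delay, sum, and average the square."""
    total = 0.0
    for t in times:
        s = sum(received(m, t + mic_x[m] * math.sin(az_cand) / C, true_az)
                for m in range(N_MICS))
        total += s * s
    return total / len(times)

grid = [math.radians(a) for a in range(-60, 61, 2)]
best = max(grid, key=das_power)
print(f"estimated azimuth: {math.degrees(best):.0f} deg")
```

    When the steering delays match the true direction, the channels add coherently and the output power peaks; off-axis directions partially cancel, which is also how the extracted voice is played back as an audible signal.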

  8. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    .... This thesis presents an integrated auditory system for a humanoid robot, currently under development, that will, among other things, learn to localize normal, everyday sounds in a realistic environment...

  9. Improvement of directionality and sound-localization by internal ear coupling in barn owls

    DEFF Research Database (Denmark)

    Wagner, Hermann; Christensen-Dalsgaard, Jakob; Kettler, Lutz

    Mark Konishi was one of the first to quantify sound-localization capabilities in barn owls. He showed that frequencies between 3 and 10 kHz underlie precise sound localization in these birds, and that they derive spatial information from processing interaural time and interaural level differences. ... However, despite intensive research during the last 40 years it is still unclear whether and how internal ear coupling contributes to sound localization in the barn owl. Here we investigated ear directionality in anesthetized birds with the help of laser vibrometry. Care was taken that anesthesia ... time difference in the low-frequency range, barn owls hesitate to approach prey or turn their heads when only low-frequency auditory information is present in a stimulus they receive. Thus, the barn-owl's sound localization system seems to be adapted to work best in frequency ranges where interaural ...

  10. Robust Sound Localization: An Application of an Auditory Perception System for a Humanoid Robot

    National Research Council Canada - National Science Library

    Irie, Robert E

    1995-01-01

    Localizing sounds with different frequency and time domain characteristics in a dynamic listening environment is a challenging task that has not been explored in the field of robotics as much as other perceptual tasks...

  11. Effects of multiple congruent cues on concurrent sound segregation during passive and active listening: an event-related potential (ERP) study.

    Science.gov (United States)

    Kocsis, Zsuzsanna; Winkler, István; Szalárdy, Orsolya; Bendixen, Alexandra

    2014-07-01

    In two experiments, we assessed the effects of combining different cues of concurrent sound segregation on the object-related negativity (ORN) and the P400 event-related potential components. Participants were presented with sequences of complex tones, half of which contained some manipulation: one or two harmonic partials were mistuned, delayed, or presented from a different location than the rest. In separate conditions, one, two, or three of these manipulations were combined. Participants watched a silent movie (passive listening) or reported after each tone whether they perceived one or two concurrent sounds (active listening). ORN was found in almost all conditions except for location difference alone during passive listening. Combining several cues or manipulating more than one partial consistently led to sub-additive effects on the ORN amplitude. These results support the view that ORN reflects a combined, feature-unspecific assessment of the auditory system regarding the contribution of two sources to the incoming sound. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. The effect of brain lesions on sound localization in complex acoustic environments.

    Science.gov (United States)

    Zündorf, Ida C; Karnath, Hans-Otto; Lewald, Jörg

    2014-05-01

    Localizing sound sources of interest in cluttered acoustic environments--as in the 'cocktail-party' situation--is one of the most demanding challenges to the human auditory system in everyday life. In this study, stroke patients' ability to localize acoustic targets in a single-source and in a multi-source setup in the free sound field were directly compared. Subsequent voxel-based lesion-behaviour mapping analyses were computed to uncover the brain areas associated with a deficit in localization in the presence of multiple distracter sound sources rather than localization of individually presented sound sources. Analyses revealed a fundamental role of the right planum temporale in this task. The results from the left hemisphere were less straightforward, but suggested an involvement of inferior frontal and pre- and postcentral areas. These areas appear to be particularly involved in the spectrotemporal analyses crucial for effective segregation of multiple sound streams from various locations, beyond the currently known network for localization of isolated sound sources in otherwise silent surroundings.

  13. Statistics of natural binaural sounds.

    Directory of Open Access Journals (Sweden)

    Wiktor Młynarski

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally, and their dependence on physical properties of an auditory scene, have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.

  14. Statistics of natural binaural sounds.

    Science.gov (United States)

    Młynarski, Wiktor; Jost, Jürgen

    2014-01-01

    Binaural sound localization is usually considered a discrimination task, where interaural phase (IPD) and level (ILD) disparities at narrowly tuned frequency channels are utilized to identify the position of a sound source. In natural conditions, however, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore, statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. Distributions of cues encountered naturally, and their dependence on physical properties of an auditory scene, have not been studied before. In the present work we analyzed statistics of naturally encountered binaural sounds. We performed binaural recordings of three auditory scenes with varying spatial configuration and analyzed empirical cue distributions from each scene. We found that certain properties, such as the spread of IPD distributions and the overall shape of ILD distributions, do not vary strongly between different auditory scenes. Moreover, we found that ILD distributions vary much less across frequency channels, and IPDs often attain much higher values than can be predicted from head filtering properties. In order to understand the complexity of the binaural hearing task in the natural environment, sound waveforms were analyzed by performing Independent Component Analysis (ICA). Properties of the learned basis functions indicate that in natural conditions the sound waves in each ear are predominantly generated by independent sources. This implies that real-world sound localization must rely on mechanisms more complex than mere cue extraction.
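    The interaural cues discussed in the two records above can be estimated directly from a binaural recording. The sketch below computes a broadband ITD (cross-correlation peak) and ILD (energy ratio in dB) for one synthetic frame; the frame length, sample rate, and 20-sample delay are illustrative assumptions, not values from the paper:

```python
import numpy as np

def binaural_cues(left, right, fs):
    """Broadband interaural cues for one frame.
    Positive ITD means the sound reached the left ear first."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)      # negative when left leads
    itd = -lag / fs                               # interaural time difference, s
    ild = 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))  # level diff, dB
    return itd, ild

# Synthetic frame: the right-ear signal is delayed by 20 samples and
# halved in amplitude, so the expected ILD is 10*log10(4), about 6 dB.
fs = 44100
rng = np.random.default_rng(0)
s = rng.standard_normal(2048)
left, right = s, 0.5 * np.roll(s, 20)
itd, ild = binaural_cues(left, right, fs)
```

    In a per-frequency-channel analysis, as in the paper, the same computation would be applied after band-pass filtering each ear signal.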

  15. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information

    Directory of Open Access Journals (Sweden)

    Fabian Draht

    2017-06-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  16. Experience-Dependency of Reliance on Local Visual and Idiothetic Cues for Spatial Representations Created in the Absence of Distal Information.

    Science.gov (United States)

    Draht, Fabian; Zhang, Sijie; Rayan, Abdelrahman; Schönfeld, Fabian; Wiskott, Laurenz; Manahan-Vaughan, Denise

    2017-01-01

    Spatial encoding in the hippocampus is based on a range of different input sources. To generate spatial representations, reliable sensory cues from the external environment are integrated with idiothetic cues, derived from self-movement, that enable path integration and directional perception. In this study, we examined to what extent idiothetic cues significantly contribute to spatial representations and navigation: we recorded place cells while rodents navigated towards two visually identical chambers in 180° orientation via two different paths in darkness and in the absence of reliable auditory or olfactory cues. Our goal was to generate a conflict between local visual and direction-specific information, and then to assess which strategy was prioritized in different learning phases. We observed that, in the absence of distal cues, place fields are initially controlled by local visual cues that override idiothetic cues, but that with multiple exposures to the paradigm, spaced at intervals of days, idiothetic cues become increasingly implemented in generating an accurate spatial representation. Taken together, these data support that, in the absence of distal cues, local visual cues are prioritized in the generation of context-specific spatial representations through place cells, whereby idiothetic cues are deemed unreliable. With cumulative exposures to the environments, the animal learns to attend to subtle idiothetic cues to resolve the conflict between visual and direction-specific information.

  17. Mutation in the Kv3.3 voltage-gated potassium channel causing spinocerebellar ataxia 13 disrupts sound-localization mechanisms.

    Directory of Open Access Journals (Sweden)

    John C Middlebrooks

    Normal sound localization requires precise comparisons of sound timing and pressure levels between the two ears. The primary localization cues are interaural time differences (ITDs) and interaural level differences (ILDs). Voltage-gated potassium channels, including Kv3.3, are highly expressed in the auditory brainstem and are thought to underlie the exquisite temporal precision and rapid spike rates that characterize brainstem binaural pathways. An autosomal dominant mutation in the gene encoding Kv3.3 has been demonstrated in a large Filipino kindred manifesting spinocerebellar ataxia type 13 (SCA13). This kindred provides a rare opportunity to test in vivo the importance of a specific channel subunit for human hearing. Here, we demonstrate psychophysically that individuals with the mutant allele exhibit profound deficits in both ITD and ILD sensitivity, despite showing no obvious impairment in pure-tone sensitivity with either ear. Surprisingly, several individuals exhibited the auditory deficits even though they were pre-symptomatic for SCA13. We would expect that impairments of binaural processing as great as those observed in this family would result in prominent deficits in localization of sound sources and in loss of the "spatial release from masking" that aids in understanding speech in the presence of competing sounds.

  18. Sound localization and speech identification in the frontal median plane with a hear-through headset

    DEFF Research Database (Denmark)

    Hoffmann, Pablo F.; Møller, Anders Kalsgaard; Christensen, Flemming

    2014-01-01

    signals can be superimposed via earphone reproduction. An important aspect of the hear-through headset is its transparency, i.e., how close to real life the electronically amplified sounds are perceived. Here we report experiments conducted to evaluate the auditory transparency of a hear-through headset...... prototype by comparing human performance in natural, hear-through, and fully occluded conditions for two spatial tasks: frontal vertical-plane sound localization and speech-on-speech spatial release from masking. Results showed that localization performance was impaired by the hear-through headset relative...... to the natural condition though not as much as in the fully occluded condition. Localization was affected the least when the sound source was in front of the listeners. Different from the vertical localization performance, results from the speech task suggest that normal speech-on-speech spatial release from

  19. Analysis, Design and Implementation of an Embedded Realtime Sound Source Localization System Based on Beamforming Theory

    Directory of Open Access Journals (Sweden)

    Arko Djajadi

    2009-12-01

    This project is intended to analyze, design and implement a realtime sound source localization system using a mobile robot as the medium. The implemented system uses two microphones as the sensors, an Arduino Duemilanove microcontroller system with an ATMega328p as the microprocessor, two permanent-magnet DC motors as the actuators for the mobile robot, a servo motor as the actuator to rotate the webcam toward the location of the sound source, and a laptop/PC as the simulation and display media. In order to achieve the objective of finding the position of a specific sound source, beamforming theory is applied to the system. Once the location of the sound source is detected and determined, either the mobile robot adjusts its position according to the direction of the sound source, or only the webcam rotates in the direction of the incoming sound, simulating the use of this system in a video conference. The integrated system has been tested and the results show the system could localize in realtime a sound source placed randomly on a half-circle area (0°-180°) with a radius of 0.3 m to 3 m, assuming the system is the center point of the circle. Due to low ADC and processor speed, the best achievable angular resolution is still limited to 25°.
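    For two microphones, the beamforming-based direction finding described above reduces to estimating a time-difference of arrival (TDOA) from the cross-correlation peak and converting it to a bearing with a far-field model. A minimal sketch, with assumed microphone spacing, sample rate, and delay (none taken from the paper's hardware):

```python
import numpy as np

C = 343.0     # speed of sound in air, m/s
D = 0.2       # assumed microphone spacing, m
FS = 48000    # assumed sample rate, Hz

def bearing_from_tdoa(mic1, mic2):
    """Bearing in degrees from broadside, via the cross-correlation peak
    and the far-field relation D * sin(theta) = C * dt."""
    corr = np.correlate(mic1, mic2, mode="full")
    lag = np.argmax(corr) - (len(mic2) - 1)   # samples; negative when mic1 leads
    sin_theta = np.clip(C * (lag / FS) / D, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Simulated off-axis source: mic2 hears the wavefront 10 samples later.
rng = np.random.default_rng(1)
s = rng.standard_normal(4096)
theta = bearing_from_tdoa(s, np.roll(s, 10))
```

    The achievable angular resolution is set by the sample rate and mic spacing, which is consistent with the abstract's remark that a low ADC rate limits resolution.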

  20. An Algorithm for the Accurate Localization of Sounds

    National Research Council Canada - National Science Library

    MacDonald, Justin A

    2005-01-01

    .... The algorithm requires no a priori knowledge of the stimuli to be localized. The accuracy of the algorithm was tested using binaural recordings from a pair of microphones mounted in the ear canals of an acoustic mannequin...

  1. Maximum likelihood approach to “informed” Sound Source Localization for Hearing Aid applications

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    Most state-of-the-art Sound Source Localization (SSL) algorithms have been proposed for applications which are "uninformed" about the target sound content; however, utilizing a wireless microphone worn by a target talker enables recent Hearing Aid Systems (HASs) to access an almost noise......-free sound signal of the target talker at the HAS via the wireless connection. Therefore, in this paper, we propose a maximum likelihood (ML) approach, which we call MLSSL, to estimate the Direction of Arrival (DoA) of the target signal given access to the target signal content. Compared with other "informed

  2. The invisible cues that guide king penguin chicks home: use of magnetic and acoustic cues during orientation and short-range navigation.

    Science.gov (United States)

    Nesterova, Anna P; Chiffard, Jules; Couchoux, Charline; Bonadonna, Francesco

    2013-04-15

    King penguins (Aptenodytes patagonicus) live in large and densely populated colonies, where navigation can be challenging because of the presence of many conspecifics that could obstruct locally available cues. Our previous experiments demonstrated that visual cues were important but not essential for king penguin chicks' homing. The main objective of this study was to investigate the importance of non-visual cues, such as magnetic and acoustic cues, for chicks' orientation and short-range navigation. In a series of experiments, the chicks were individually displaced from the colony to an experimental arena where they were released under different conditions. In the magnetic experiments, a strong magnet was attached to the chicks' heads. Trials were conducted in daylight and at night to test the relative importance of visual and magnetic cues. Our results showed that when the geomagnetic field around the chicks was modified, their orientation in the arena and the overall ability to home was not affected. In a low sound experiment we limited the acoustic cues available to the chicks by putting ear pads over their ears, and in a loud sound experiment we provided additional acoustic cues by broadcasting colony sounds on the opposite side of the arena to the real colony. In the low sound experiment, the behavior of the chicks was not affected by the limited sound input. In the loud sound experiment, the chicks reacted strongly to the colony sound. These results suggest that king penguin chicks may use the sound of the colony while orienting towards their home.

  3. Prior Visual Experience Modulates Learning of Sound Localization Among Blind Individuals.

    Science.gov (United States)

    Tao, Qian; Chan, Chetwyn C H; Luo, Yue-Jia; Li, Jian-Jun; Ting, Kin-Hung; Lu, Zhong-Lin; Whitfield-Gabrieli, Susan; Wang, Jun; Lee, Tatia M C

    2017-05-01

    Cross-modal learning requires the use of information from different sensory modalities. This study investigated how the prior visual experience of late blind individuals could modulate neural processes associated with learning of sound localization. Learning was realized by standardized training on sound localization processing, and experience was investigated by comparing brain activations elicited by a sound localization task in individuals with (late blind, LB) and without (early blind, EB) prior visual experience. After the training, EB showed decreased activation in the precuneus, which was functionally connected to a limbic-multisensory network. In contrast, LB showed increased activation of the precuneus. A subgroup of LB participants who demonstrated higher visuospatial working memory capabilities (LB-HVM) exhibited an enhanced precuneus-lingual gyrus network. This differential connectivity suggests that the visuospatial working memory capability gained through prior visual experience enhanced learning of sound localization in LB-HVM. Active visuospatial navigation processes could have occurred in LB-HVM, compared to the retrieval of previously bound information from long-term memory for EB. The precuneus appears to play a crucial role in learning of sound localization, regardless of prior visual experience. Prior visual experience, however, could enhance cross-modal learning by extending binding to the integration of unprocessed information, mediated by the cognitive functions that these experiences develop.

  4. Sound localization in noise in hearing-impaired listeners.

    Science.gov (United States)

    Lorenzi, C; Gatehouse, S; Lever, C

    1999-06-01

    The present study assesses the ability of four listeners with high-frequency, bilateral symmetrical sensorineural hearing loss to localize and detect a broadband click train in the frontal-horizontal plane, in quiet and in the presence of a white noise. The speaker array and stimuli are identical to those described by Lorenzi et al. (in press). The results show that: (1) localization performance is only slightly poorer in hearing-impaired listeners than in normal-hearing listeners when noise is at 0 deg azimuth, (2) localization performance begins to decrease at higher signal-to-noise ratios for hearing-impaired listeners than for normal-hearing listeners when noise is at +/- 90 deg azimuth, and (3) the performance of hearing-impaired listeners is less consistent when noise is at +/- 90 deg azimuth than at 0 deg azimuth. The effects of a high-frequency hearing loss were also studied by measuring the ability of normal-hearing listeners to localize the low-pass filtered version of the clicks. The data reproduce the effects of noise on three out of the four hearing-impaired listeners when noise is at 0 deg azimuth. They reproduce the effects of noise on only two out of the four hearing-impaired listeners when noise is at +/- 90 deg azimuth. The additional effects of a low-frequency hearing loss were investigated by attenuating the low-pass filtered clicks and the noise by 20 dB. The results show that attenuation does not strongly affect localization accuracy for normal-hearing listeners. Measurements of the clicks' detectability indicate that the hearing-impaired listeners who show the poorest localization accuracy also show the poorest ability to detect the clicks. The inaudibility of high frequencies, "distortions," and reduced detectability of the signal are assumed to have caused the poorer-than-normal localization accuracy for hearing-impaired listeners.

  5. Nonlinear dynamics of human locomotion: effects of rhythmic auditory cueing on local dynamic stability

    Directory of Open Access Journals (Sweden)

    Philippe eTerrier

    2013-09-01

    It has been observed that time series of gait parameters (stride length (SL), stride time (ST) and stride speed (SS)) exhibit long-term persistence and fractal-like properties. Synchronizing steps with rhythmic auditory stimuli modifies the persistent fluctuation pattern to anti-persistence. Another nonlinear method estimates the degree of resilience of gait control to small perturbations, i.e. the local dynamic stability (LDS). The method makes use of the maximal Lyapunov exponent, which estimates how fast a nonlinear system embedded in a reconstructed state space (attractor) diverges after an infinitesimal perturbation. We propose to use an instrumented treadmill to simultaneously measure basic gait parameters (time series of SL, ST and SS, from which the statistical persistence among consecutive strides can be assessed) and the trajectory of the center of pressure (from which the LDS can be estimated). In 20 healthy participants, the response to rhythmic auditory cueing (RAC) of LDS and of statistical persistence (assessed with detrended fluctuation analysis (DFA)) was compared. By analyzing the divergence curves, we observed that long-term LDS (computed as the reverse of the average logarithmic rate of divergence between the 4th and the 10th strides downstream from nearest neighbors in the reconstructed attractor) was strongly enhanced (relative change +47%). That is likely the indication of a more dampened dynamics. The change in short-term LDS (divergence over one step) was smaller (+3%). DFA results (scaling exponents) confirmed an anti-persistent pattern in ST, SL and SS. Long-term LDS (but not short-term LDS) and scaling exponents exhibited a significant correlation between them (r=0.7). Both phenomena probably result from the more conscious/voluntary gait control that is required by RAC. We suggest that LDS and statistical persistence should be used to evaluate the efficiency of cueing therapy in patients with neurological gait disorders.
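    The detrended fluctuation analysis used above to quantify statistical persistence can be sketched compactly. This is a minimal DFA-1 implementation under assumed window sizes; a white-noise input should yield a scaling exponent near 0.5 (persistent series give values above 0.5, anti-persistent series, as observed under auditory cueing, below):

```python
import numpy as np

def dfa_alpha(x, scales):
    """Detrended fluctuation analysis (DFA-1): return the scaling
    exponent alpha, the slope of log F(n) versus log n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    flucts = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))    # RMS fluctuation at scale n
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

# Uncorrelated (white) noise: alpha should come out near 0.5.
rng = np.random.default_rng(2)
alpha = dfa_alpha(rng.standard_normal(4000), scales=[8, 16, 32, 64, 128])
```

    Applied to stride-time series instead of white noise, the same routine distinguishes the persistent pattern of free walking from the anti-persistent pattern under cueing.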

  6. Hybrid local piezoelectric and conductive functions for high performance airborne sound absorption

    Science.gov (United States)

    Rahimabady, Mojtaba; Statharas, Eleftherios Christos; Yao, Kui; Sharifzadeh Mirshekarloo, Meysam; Chen, Shuting; Tay, Francis Eng Hock

    2017-12-01

    A concept of hybrid local piezoelectric and electrical conductive functions for improving airborne sound absorption is proposed and demonstrated in a composite foam made of porous polar polyvinylidene fluoride (PVDF) mixed with conductive single-walled carbon nanotubes (SWCNT). According to our hybrid material function design, the local piezoelectric effect in the PVDF matrix with the polar structure and the electrical resistive loss of the SWCNT enhance sound energy conversion to electrical energy and subsequently to thermal energy, respectively, in addition to the other known sound absorption mechanisms in a porous material. It is found that the overall energy conversion, and hence the sound absorption performance, is maximized when the concentration of the SWCNT is around the conductivity percolation threshold. For the optimal composition of PVDF/5 wt. % SWCNT, a sound reduction coefficient larger than 0.58 has been obtained, with a sound absorption coefficient higher than 50% at 600 Hz, showing great value for passive noise mitigation even at low frequencies.

  7. Physiological correlates of sound localization in a parasitoid fly, Ormia ochracea

    Science.gov (United States)

    Oshinsky, Michael Lee

    A major focus of research in the nervous system is the investigation of neural circuits. The question of how neurons connect to form functional units has driven modern neuroscience research from its inception. From the beginning, the neural circuits of the auditory system, and specifically sound localization, were used as a model system for investigating neural connectivity and computation. Sound localization lends itself to this task because there is no mapping of spatial information on a receptor sheet as in vision. With only one eye, an animal would still have positional information for objects. Since the receptor sheet in the ear is frequency oriented and not spatially oriented, positional information for a sound source does not exist with only one ear. The nervous system computes the location of a sound source based on differences in the physiology of the two ears. In this study, I investigated the neural circuits for sound localization in a fly, Ormia ochracea (Diptera, Tachinidae, Ormiini), which is a parasitoid of crickets. This fly possesses a unique mechanically coupled hearing organ. The two ears are contained in one air sac, and a cuticular bridge with a flexible spring-like structure at its center connects them. This mechanical coupling preprocesses the sound before it is detected by the nervous system and provides the fly with directional information. The subject of this study is the neural coding of the location of sound stimuli by a mechanically coupled auditory system. In chapter 1, I present the natural history of an acoustic parasitoid and I review the peripheral processing of sound by the Ormian ear. In chapter 2, I describe the anatomy and physiology of the auditory afferents. I present this physiology in the context of sound localization. In chapter 3, I describe the direction-dependent physiology of the thoracic local and ascending acoustic interneurons.
In chapter 4, I quantify the threshold and I detail the kinematics of the phonotactic

  8. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    International Nuclear Information System (INIS)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang; Kim, Dai Jin

    2011-01-01

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  9. Localization of self-generated synthetic footstep sounds on different walked-upon materials through headphones

    DEFF Research Database (Denmark)

    Turchet, Luca; Spagnol, Simone; Geronazzo, Michele

    2016-01-01

    typologies of surface materials: solid (e.g., wood) and aggregate (e.g., gravel). Different sound delivery methods (mono, stereo, binaural) as well as several surface materials, in presence or absence of concurrent contextual auditory information provided as soundscapes, were evaluated in a vertical...... localization task. Results showed that solid surfaces were localized significantly farther from the walker's feet than the aggregate ones. This effect was independent of the used rendering technique, of the presence of soundscapes, and of merely temporal or spectral attributes of sound. The effect...

  10. Audio-Visual Fusion for Sound Source Localization and Improved Attention

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Byoung Gi; Choi, Jong Suk; Yoon, Sang Suk; Choi, Mun Taek; Kim, Mun Sang [Korea Institute of Science and Technology, Daejeon (Korea, Republic of); Kim, Dai Jin [Pohang University of Science and Technology, Pohang (Korea, Republic of)

    2011-07-15

    Service robots are equipped with various sensors such as vision camera, sonar sensor, laser scanner, and microphones. Although these sensors have their own functions, some of them can be made to work together and perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one is on enhancing the performance of sound localization, and the other is on improving robot attention through sound localization and face detection.

  11. Evolution of Sound Source Localization Circuits in the Nonmammalian Vertebrate Brainstem

    DEFF Research Database (Denmark)

    Walton, Peggy L; Christensen-Dalsgaard, Jakob; Carr, Catherine E

    2017-01-01

    The earliest vertebrate ears likely subserved a gravistatic function for orientation in the aquatic environment. However, in addition to detecting acceleration created by the animal's own movements, the otolithic end organs that detect linear acceleration would have responded to particle movement...... to increased sensitivity to a broader frequency range and to modification of the preexisting circuitry for sound source localization....

  12. 3-D inversion of airborne electromagnetic data parallelized and accelerated by local mesh and adaptive soundings

    Science.gov (United States)

    Yang, Dikun; Oldenburg, Douglas W.; Haber, Eldad

    2014-03-01

    Airborne electromagnetic (AEM) methods are highly efficient tools for assessing the Earth's conductivity structures in a large area at low cost. However, the configuration of AEM measurements, which typically have widely distributed transmitter-receiver pairs, makes the rigorous modelling and interpretation extremely time-consuming in 3-D. Excessive overcomputing can occur when working on a large mesh covering the entire survey area and inverting all soundings in the data set. We propose two improvements. The first is to use a locally optimized mesh for each AEM sounding for the forward modelling and calculation of sensitivity. This dedicated local mesh is small, with fine cells near the sounding location and coarse cells far away, in accordance with EM diffusion and the geometric decay of the signals. Once the forward problem is solved on the local meshes, the sensitivity for the inversion on the global mesh is available through quick interpolation. Using local meshes for AEM forward modelling avoids unnecessary computing on fine cells on a global mesh that are far away from the sounding location. Since local meshes are highly independent, the forward modelling can be efficiently parallelized over an array of processors. The second improvement is random and dynamic down-sampling of the soundings. Each inversion iteration only uses a random subset of the soundings, and the subset is reselected for every iteration. The number of soundings in the random subset, determined by an adaptive algorithm, is tied to the degree of model regularization. This minimizes the overcomputing caused by working with redundant soundings. Our methods are compared against conventional methods and tested with a synthetic example. We also invert a field data set that was previously considered to be too large to be practically inverted in 3-D. These examples show that our methodology can dramatically reduce the processing time of 3-D inversion to a practical level without losing resolution.
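    The random, dynamic down-sampling described above can be sketched as a loop that redraws a sounding subset every iteration, with the subset growing as regularization is relaxed. The size schedule and all names below are illustrative assumptions, not the paper's adaptive algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n_soundings = 10000                               # total soundings (assumed)
beta_schedule = [100.0, 50.0, 25.0, 12.5, 6.25]   # cooling regularization weights
beta0 = beta_schedule[0]

subset_sizes = []
for beta in beta_schedule:
    # Use more soundings as regularization is relaxed (heuristic rule, assumed).
    frac = min(1.0, 0.1 * beta0 / beta)
    n_sub = max(200, int(frac * n_soundings))
    # Fresh random subset every iteration, as in the paper's scheme.
    subset = rng.choice(n_soundings, size=n_sub, replace=False)
    subset_sizes.append(len(subset))
    # ... forward-model each selected sounding on its own local mesh,
    # interpolate sensitivities to the global mesh, take the model step ...
```

    Early, heavily regularized iterations then touch only a fraction of the data, while late iterations see (nearly) all soundings.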

  13. Crowing Sound Analysis of Gaga' Chicken; Local Chicken from South Sulawesi Indonesia

    OpenAIRE

    Aprilita Bugiwati, Sri Rachma; Ashari, Fachri

    2008-01-01

    The Gaga' chicken is known as a local chicken of South Sulawesi, Indonesia, which has a unique and distinctive crowing sound, especially at the ending of the crow, which resembles human laughter, compared with the other types of singing chicken in the world. 287 birds of Gaga' chicken from 3 districts in the centre habitat of the Gaga' chicken were separated into 2 groups (163 birds of Dangdut type and 124 birds of Slow type) based on the speed

  14. A novel method for direct localized sound speed measurement using the virtual source paradigm

    DEFF Research Database (Denmark)

    Byram, Brett; Trahey, Gregg E.; Jensen, Jørgen Arendt

    2007-01-01

    ) mediums. The inhomogeneous mediums were arranged as an oil layer, one 6 mm thick and the other 11 mm thick, on top of a water layer. To complement the phantom studies, sources of error for spatial registration of virtual detectors were simulated. The sources of error presented here are multiple sound...... registered virtual detector. Between a pair of registered virtual detectors a spherical wave is propagated. By beamforming the received data the time of flight between the two virtual sources can be calculated. From this information the local sound speed can be estimated. Validation of the estimator used...... both phantom and simulation results. The phantom consisted of two wire targets located near the transducer's axis at depths of 17 and 28 mm. Using this phantom the sound speed between the wires was measured for a homogeneous (water) medium and for two inhomogeneous (DB-grade castor oil and water...

  15. Lung sound analysis helps localize airway inflammation in patients with bronchial asthma

    Directory of Open Access Journals (Sweden)

    Shimoda T

    2017-03-01

    sound recordings could be used to identify sites of local airway inflammation. Keywords: airway obstruction, expiration sound pressure level, inspiration sound pressure level, expiration-to-inspiration sound pressure ratio, 7-point analysis

  16. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments

    Directory of Open Access Journals (Sweden)

    Kotaro Hoshiba

    2017-11-01

    Full Text Available In search and rescue activities, unmanned aerial vehicles (UAVs) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  17. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments †

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G.

    2017-01-01

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators. PMID:29099790

  18. Design of UAV-Embedded Microphone Array System for Sound Source Localization in Outdoor Environments.

    Science.gov (United States)

    Hoshiba, Kotaro; Washizaki, Kai; Wakabayashi, Mizuho; Ishiki, Takahiro; Kumon, Makoto; Bando, Yoshiaki; Gabriel, Daniel; Nakadai, Kazuhiro; Okuno, Hiroshi G

    2017-11-03

    In search and rescue activities, unmanned aerial vehicles (UAV) should exploit sound information to compensate for poor visual information. This paper describes the design and implementation of a UAV-embedded microphone array system for sound source localization in outdoor environments. Four critical development problems included water-resistance of the microphone array, efficiency in assembling, reliability of wireless communication, and sufficiency of visualization tools for operators. To solve these problems, we developed a spherical microphone array system (SMAS) consisting of a microphone array, a stable wireless network communication system, and intuitive visualization tools. The performance of SMAS was evaluated with simulated data and a demonstration in the field. Results confirmed that the SMAS provides highly accurate localization, water resistance, prompt assembly, stable wireless communication, and intuitive information for observers and operators.

  19. Localization of Simultaneous Moving Sound Sources for Mobile Robot Using a Frequency-Domain Steered Beamformer Approach

    OpenAIRE

    Valin, Jean-Marc; Michaud, François; Hadjou, Brahim; Rouat, Jean

    2016-01-01

    Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabilities such as speech recognition. In this paper we present a robust sound source localization method in three-dimensional space using an array of 8 microphones. The method is based on a frequency-domain implementation of a steered...

  20. Single-sided deafness & directional hearing: contribution of spectral cues and high-frequency hearing loss in the hearing ear

    Directory of Open Access Journals (Sweden)

    Martijn Johannes Hermanus Agterberg

    2014-07-01

    Full Text Available Direction-specific interactions of sound waves with the head, torso, and pinna provide unique spectral-shape cues that are used for the localization of sounds in the vertical plane, whereas horizontal sound localization is based primarily on the processing of binaural acoustic differences in arrival time (interaural time differences, or ITDs) and sound level (interaural level differences, or ILDs). Because the binaural sound-localization cues are absent in listeners with total single-sided deafness (SSD), their ability to localize sound is heavily impaired. However, some studies have reported that SSD listeners are able, to some extent, to localize sound sources in azimuth, although the underlying mechanisms used for localization are unclear. To investigate whether SSD listeners rely on monaural pinna-induced spectral-shape cues of their hearing ear for directional hearing, we investigated localization performance for low-pass filtered (LP, 3 kHz) and broadband (BB, 0.5–20 kHz) noises in the two-dimensional frontal hemifield. We tested whether localization performance of SSD listeners deteriorated further when the pinna cavities of their hearing ear were filled with a mold that disrupted their spectral-shape cues. To remove the potential use of perceived sound level as an invalid azimuth cue, we randomly varied stimulus presentation levels over a broad range (45–65 dB SPL). Several listeners with SSD could localize HP and BB sound sources in the horizontal plane, but inter-subject variability was considerable. Localization performance of these listeners was strongly reduced after their spectral pinna cues were diminished. We further show that the inter-subject variability among SSD listeners can be explained to a large extent by the severity of high-frequency hearing loss in the hearing ear.
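
    The two binaural cues named in this abstract can be extracted from a pair of ear signals with elementary signal processing. This sketch (the sign convention and names are our own, not the study's method) estimates the ITD from the cross-correlation peak and the ILD from the RMS ratio:

    ```python
    import numpy as np

    def binaural_cues(left, right, fs):
        """Estimate interaural time and level differences from ear signals.

        ITD: lag (in seconds) of the cross-correlation peak; positive means
        the left signal lags the right (source toward the right ear).
        ILD: left-to-right RMS ratio in dB. Illustrative sketch only.
        """
        corr = np.correlate(left, right, mode="full")
        lag = int(np.argmax(corr)) - (len(right) - 1)
        itd = lag / fs
        ild = 20.0 * np.log10(np.sqrt(np.mean(left ** 2)) /
                              np.sqrt(np.mean(right ** 2)))
        return itd, ild
    ```

    For a source toward the left, the right-ear signal arrives later and quieter, yielding a negative ITD and a positive ILD in this convention.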

  1. Near-Field Sound Localization Based on the Small Profile Monaural Structure

    Directory of Open Access Journals (Sweden)

    Youngwoong Kim

    2015-11-01

    Full Text Available The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small-profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths arranged vertically, with rectangular wings positioned between the pipes in radial directions. The sound from a given direction travels through the nearest open pipe, which generates a particular fundamental frequency according to its acoustic resonance. A modified cepstral parameter is used to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. Positions closer to the system demonstrate higher accuracy, and the overall hit rate is 78% up to 15 cm away from the structure body.
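
    The idea of reading a pipe's resonance off the received signal can be illustrated with a plain cepstral fundamental-frequency estimate. The paper modifies the cepstral parameter; this is only the textbook version, with assumed search bounds:

    ```python
    import numpy as np

    def cepstral_f0(signal, fs, fmin=100.0, fmax=1000.0):
        """Textbook cepstral pitch estimate: the peak quefrency q of the
        real cepstrum within a plausible range maps to f0 = fs / q.
        The search bounds fmin/fmax are assumptions for illustration."""
        windowed = signal * np.hanning(len(signal))
        log_spec = np.log(np.abs(np.fft.rfft(windowed)) + 1e-12)
        cepstrum = np.fft.irfft(log_spec)
        qmin = int(fs / fmax)  # high f0 -> small quefrency
        qmax = int(fs / fmin)  # low f0  -> large quefrency
        q = qmin + int(np.argmax(cepstrum[qmin:qmax]))
        return fs / q
    ```

    A strongly harmonic signal, such as the resonance of an open pipe excited by broadband sound, produces a clear peak at the quefrency of its fundamental period.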

  2. Sound localization and word discrimination in reverberant environment in children with developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Wendy Castro-Camacho

    2015-04-01

    Full Text Available Objective: To compare whether localization of sounds and word discrimination in a reverberant environment differ between children with dyslexia and controls. Method: We studied 30 children with dyslexia and 30 controls. Sound and word localization and discrimination were studied at five angles from the left to the right auditory field (-90°, -45°, 0°, +45°, +90°), under reverberant and non-reverberant conditions; correct answers were compared. Results: Spatial localization of words in the non-reverberant test was deficient in children with dyslexia at 0° and +90°. Spatial localization in the reverberant test was altered in children with dyslexia at all angles except -90°. Word discrimination in the non-reverberant test showed poor performance in children with dyslexia at the left angles. In the reverberant test, children with dyslexia exhibited deficiencies at the -45°, -90°, and +45° angles. Conclusion: Children with dyslexia may have problems localizing sounds and discriminating words at extreme locations of the horizontal plane in classrooms with reverberation.

  3. ICE on the road to auditory sensitivity reduction and sound localization in the frog.

    Science.gov (United States)

    Narins, Peter M

    2016-10-01

    Frogs and toads are capable of producing calls at potentially damaging levels that exceed 110 dB SPL at 50 cm. Most frog species have internally coupled ears (ICE), in which the tympanic membranes (TyMs) communicate directly via the large, permanently open Eustachian tubes, resulting in an inherently directional asymmetrical pressure-difference receiver. One active mechanism for auditory sensitivity reduction involves the pressure increase during vocalization that distends the TyM, reducing its low-frequency airborne sound sensitivity. Moreover, if sounds generated by the vocal folds arrive at both surfaces of the TyM with nearly equal amplitudes and phases, the net motion of the eardrum is greatly attenuated. Both of these processes appear to reduce the motion of the frog's TyM during vocalizations. The implications of ICE in amphibians with respect to sound localization are discussed, and the particularly interesting case of frogs that use ultrasound for communication yet exhibit exquisitely small localization jump errors is brought to light.

  4. On the influence of microphone array geometry on HRTF-based Sound Source Localization

    DEFF Research Database (Denmark)

    Farmani, Mojtaba; Pedersen, Michael Syskind; Tan, Zheng-Hua

    2015-01-01

    The direction dependence of Head Related Transfer Functions (HRTFs) forms the basis for HRTF-based Sound Source Localization (SSL) algorithms. In this paper, we show how spectral similarities of the HRTFs of different directions in the horizontal plane influence the performance of HRTF-based SSL algorithms: the more similar the HRTFs of different angles to the HRTF of the target angle, the worse the performance. However, we also show how the microphone array geometry can assist in differentiating between the HRTFs of the different angles, thereby improving the performance of HRTF-based SSL algorithms. Furthermore, to demonstrate the analysis results, we show the impact of HRTF similarities and microphone array geometry on an exemplary HRTF-based SSL algorithm, called MLSSL. This algorithm is well-suited for this purpose as it allows estimating the Direction-of-Arrival (DoA) of the target sound using any...

  5. Ormia ochracea as a Model Organism in Sound Localization Experiments and in Inventing Hearing Aids.

    Directory of Open Access Journals (Sweden)

    - -

    1998-09-01

    Full Text Available Hearing aid prescription for patients with hearing loss has always been one of the main concerns of audiologists. Thanks to technology, hearing aids have been equipped with digital and computerized systems that have improved the quality of the sound they deliver. Yet we can also learn from nature when inventing such instruments, as in the current article, which is devoted to a kind of fly. Ormia ochracea is a small yellow nocturnal fly, a parasitoid of crickets. It is notable for its exceptionally acute directional hearing. In the current article we discuss how it has become a model organism in sound localization experiments and in inventing hearing aids.

  6. Intercepting a sound without vision

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2017-01-01

    Visual information is extremely important for generating internal spatial representations. In the auditory modality, the absence of visual cues during early infancy does not preclude the development of some spatial strategies. However, specific spatial abilities might be impaired. In the current study, we investigated the effect of early visual deprivation on the ability to localize static and moving auditory stimuli by comparing sighted and early-blind individuals’ performance in different spatial tasks. We also examined perceptual stability in the two groups of participants by comparing localization accuracy between a static and a dynamic head condition that involved rotational head movements. Sighted participants accurately localized static and moving sounds. Their localization ability remained unchanged after rotational movements of the head. Conversely, blind participants showed a leftward bias during the localization of static sounds and only a slight bias for moving sounds. Moreover, head movements induced a significant bias in the direction of head motion during the localization of moving sounds. These results suggest that internal spatial representations might be body-centered in blind individuals and that in sighted people the availability of visual cues during early infancy may affect sensory-motor interactions. PMID:28481939

  7. Mice Lacking the Alpha9 Subunit of the Nicotinic Acetylcholine Receptor Exhibit Deficits in Frequency Difference Limens and Sound Localization

    Directory of Open Access Journals (Sweden)

    Amanda Clause

    2017-06-01

    Full Text Available Sound processing in the cochlea is modulated by cholinergic efferent axons arising from medial olivocochlear neurons in the brainstem. These axons contact outer hair cells in the mature cochlea and inner hair cells during development, and activate nicotinic acetylcholine receptors composed of α9 and α10 subunits. The α9 subunit is necessary for mediating the effects of acetylcholine on hair cells, as genetic deletion of the α9 subunit results in functional cholinergic de-efferentation of the cochlea. Cholinergic modulation of spontaneous cochlear activity before hearing onset is important for the maturation of central auditory circuits. In α9KO mice, the developmental refinement of inhibitory afferents to the lateral superior olive is disturbed, resulting in decreased tonotopic organization of this sound localization nucleus. In this study, we used behavioral tests to investigate whether the circuit anomalies in α9KO mice correlate with sound localization or sound frequency processing. Using a conditioned lick suppression task to measure sound localization, we found that three out of four α9KO mice showed impaired minimum audible angles. Using a prepulse inhibition of the acoustic startle response paradigm, we found that the ability of α9KO mice to detect sound frequency changes was impaired, whereas their ability to detect sound intensity changes was not. These results demonstrate that cholinergic, nicotinic α9 subunit-mediated transmission in the developing cochlea plays an important role in the maturation of hearing.

  8. Sound Source Localization through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network.

    Science.gov (United States)

    Beck, Christoph; Garreau, Guillaume; Georgiou, Julius

    2016-01-01

    Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes the size of an atom and to locate acoustic stimuli to within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.

  9. Sound Source Localization Through 8 MEMS Microphones Array Using a Sand-Scorpion-Inspired Spiking Neural Network

    Directory of Open Access Journals (Sweden)

    Christoph Beck

    2016-10-01

    Full Text Available Sand-scorpions and many other arachnids perceive their environment by using their feet to sense ground waves. They are able to detect amplitudes the size of an atom and to locate acoustic stimuli to within 13°, based on their neuronal anatomy. We present here a prototype sound source localization system inspired by this impressive performance. The system utilizes custom-built hardware with eight MEMS microphones, one for each foot, to acquire the acoustic scene, and a spiking neural model to localize the sound source. The current implementation shows smaller localization errors than those observed in nature.

  10. Towards a Synesthesia Laboratory: Real-time Localization and Visualization of a Sound Source for Virtual Reality Applications

    OpenAIRE

    Kose, Ahmet; Tepljakov, Aleksei; Astapov, Sergei; Draheim, Dirk; Petlenkov, Eduard; Vassiljeva, Kristina

    2018-01-01

    In this paper, we present our findings related to the problem of localization and visualization of a sound source placed in the same room as the listener. The particular effect that we aim to investigate is called synesthesia—the act of experiencing one sense modality as another, e.g., a person may vividly experience flashes of colors when listening to a series of sounds. Towards that end, we apply a series of recently developed methods for detecting sound source in a three-dimensional space ...

  11. The role of envelope shape in the localization of multiple sound sources and echoes in the barn owl.

    Science.gov (United States)

    Baxter, Caitlin S; Nelson, Brian S; Takahashi, Terry T

    2013-02-01

    Echoes and sounds of independent origin often obscure sounds of interest, but echoes can go undetected under natural listening conditions, a perception called the precedence effect. How does the auditory system distinguish between echoes and independent sources? To investigate, we presented two broadband noises to barn owls (Tyto alba) while varying the similarity of the sounds' envelopes. The carriers of the noises were identical except for a 2- or 3-ms delay. Their onsets and offsets were also synchronized. In owls, sound localization is guided by neural activity on a topographic map of auditory space. When there are two sources concomitantly emitting sounds with overlapping amplitude spectra, space map neurons discharge when the stimulus in their receptive field is louder than the one outside it and when the averaged amplitudes of both sounds are rising. A model incorporating these features calculated the strengths of the two sources' representations on the map (B. S. Nelson and T. T. Takahashi; Neuron 67: 643-655, 2010). The target localized by the owls could be predicted from the model's output. The model also explained why the echo is not localized at short delays: when envelopes are similar, peaks in the leading sound mask corresponding peaks in the echo, weakening the echo's space map representation. When the envelopes are dissimilar, there are few or no corresponding peaks, and the owl localizes whichever source is predicted by the model to be less masked. Thus the precedence effect in the owl is a by-product of a mechanism for representing multiple sound sources on its map.

  12. A Survey of Sound Source Localization Methods in Wireless Acoustic Sensor Networks

    Directory of Open Access Journals (Sweden)

    Maximo Cobos

    2017-01-01

    Full Text Available Wireless acoustic sensor networks (WASNs) are formed by a distributed group of acoustic-sensing devices featuring audio playing and recording capabilities. Current mobile computing platforms offer great possibilities for the design of audio-related applications involving acoustic-sensing nodes. In this context, acoustic source localization is one of the application domains that have attracted the most attention of the research community over the last few decades. In general terms, the localization of acoustic sources can be achieved by studying energy, temporal, and/or directional features from the incoming sound at different microphones, and using a suitable model that relates those features to the spatial location of the source (or sources) of interest. This paper reviews common approaches for source localization in WASNs that are focused on different types of acoustic features, namely, the energy of the incoming signals, their time of arrival (TOA) or time difference of arrival (TDOA), the direction of arrival (DOA), and the steered response power (SRP) resulting from combining multiple microphone signals. Additionally, we discuss methods not only aimed at localizing acoustic sources but also designed to locate the nodes themselves in the network. Finally, we discuss current challenges and frontiers in this field.
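
    Of the features surveyed, the TDOA is commonly estimated with the generalized cross-correlation with phase transform (GCC-PHAT). A minimal sketch for two sensor signals (a standard technique; the function names and parameters are our own):

    ```python
    import numpy as np

    def gcc_phat_tdoa(sig_a, sig_b, fs):
        """Estimate the time difference of arrival of sig_a relative to
        sig_b (in seconds) via GCC-PHAT: whiten the cross-spectrum so only
        phase remains, then pick the lag of the correlation peak."""
        n = len(sig_a) + len(sig_b)
        A = np.fft.rfft(sig_a, n=n)
        B = np.fft.rfft(sig_b, n=n)
        cross = A * np.conj(B)
        cross /= np.abs(cross) + 1e-12      # phase transform weighting
        cc = np.fft.irfft(cross, n=n)
        max_lag = n // 2
        cc = np.concatenate((cc[-max_lag:], cc[:max_lag + 1]))
        lag = int(np.argmax(np.abs(cc))) - max_lag
        return lag / fs
    ```

    With TDOAs from several microphone pairs and known node positions, the source location can then be found by multilateration.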

  13. When being narrow minded is a good thing: locally biased people show stronger contextual cueing.

    Science.gov (United States)

    Bellaera, Lauren; von Mühlenen, Adrian; Watson, Derrick G

    2014-01-01

    Repeated contexts allow us to find relevant information more easily. Learning such contexts has been proposed to depend either upon global processing of the repeated context or upon processing of the local region surrounding the target information. In this study, we measured the extent to which observers were biased, by default, towards processing at a more global or a more local level. The findings showed that the ability to use context to help guide search was strongly related to an observer's local/global processing bias. Locally biased people could use context to improve their search better than globally biased people could. The results suggest that the extent to which context can be used depends crucially on the observer's attentional bias, and thus also on factors and influences that can change this bias.

  14. High frequency source localization in a shallow ocean sound channel using frequency difference matched field processing.

    Science.gov (United States)

    Worthmann, Brian M; Song, H C; Dowling, David R

    2015-12-01

    Matched field processing (MFP) is an established technique for source localization in known multipath acoustic environments. Unfortunately, in many situations, particularly those involving high frequency signals, imperfect knowledge of the actual propagation environment prevents accurate propagation modeling and source localization via MFP fails. For beamforming applications, this actual-to-model mismatch problem was mitigated through a frequency downshift, made possible by a nonlinear array-signal-processing technique called frequency difference beamforming [Abadi, Song, and Dowling (2012). J. Acoust. Soc. Am. 132, 3018-3029]. Here, this technique is extended to conventional (Bartlett) MFP using simulations and measurements from the 2011 Kauai Acoustic Communications MURI experiment (KAM11) to produce ambiguity surfaces at frequencies well below the signal bandwidth where the detrimental effects of mismatch are reduced. Both the simulation and experimental results suggest that frequency difference MFP can be more robust against environmental mismatch than conventional MFP. In particular, signals of frequency 11.2 kHz-32.8 kHz were broadcast 3 km through a 106-m-deep shallow ocean sound channel to a sparse 16-element vertical receiving array. Frequency difference MFP unambiguously localized the source in several experimental data sets with average peak-to-side-lobe ratio of 0.9 dB, average absolute-value range error of 170 m, and average absolute-value depth error of 10 m.
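
    The frequency downshift at the heart of frequency difference MFP comes from the "autoproduct" of the field with itself across a frequency gap Δf, whose phase behaves like that of a field at the much lower difference frequency. A simplified sketch (uniform frequency grid assumed; names and averaging are our own):

    ```python
    import numpy as np

    def frequency_difference_field(P, freqs, delta_f):
        """Form the frequency-difference autoproduct: for each in-band
        pair (f, f + delta_f), multiply the complex field at f + delta_f
        by the conjugate field at f, then average over pairs.

        P is an (n_receivers, n_freqs) array of complex spectra on the
        uniform grid `freqs`. The result can be fed to a Bartlett
        processor as if it were a field at the difference frequency.
        """
        df = freqs[1] - freqs[0]
        shift = int(round(delta_f / df))
        pairs = P[:, shift:] * np.conj(P[:, :P.shape[1] - shift])
        return pairs.mean(axis=1)
    ```

    For a single-path arrival with delay τ, every pair contributes the same phase exp(-2πi Δf τ), which is exactly the phase a tone at Δf would accumulate, illustrating why the processing becomes robust to mismatch at high signal frequencies.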

  15. Perceptual representation and effectiveness of local figure-ground cues in natural contours

    OpenAIRE

    Sakai, Ko; Matsuoka, Shouhei; Kurematsu, Ken; Hatori, Yasuhiro

    2015-01-01

    A contour shape strongly influences the perceptual segregation of a figure from the ground. We investigated the contribution of local contour shape to figure–ground segregation. Although previous studies have reported local contour features that evoke figure–ground perception, they were often image features and not necessarily perceptual features. First, we examined whether contour features, specifically, convexity, closure, and symmetry, underlie the perceptual representation of natural cont...

  16. The effect of multimicrophone noise reduction systems on sound source localization by users of binaural hearing aids.

    Science.gov (United States)

    Van den Bogaert, Tim; Doclo, Simon; Wouters, Jan; Moonen, Marc

    2008-07-01

    This paper evaluates the influence of three multimicrophone noise reduction algorithms on the ability to localize sound sources. Two recently developed noise reduction techniques for binaural hearing aids were evaluated, namely, the binaural multichannel Wiener filter (MWF) and the binaural multichannel Wiener filter with partial noise estimate (MWF-N), together with a dual-monaural adaptive directional microphone (ADM), which is a widely used noise reduction approach in commercial hearing aids. The influence of the different algorithms on perceived sound source localization and their noise reduction performance was evaluated. It is shown that noise reduction algorithms can have a large influence on localization and that (a) the ADM only preserves localization in the forward direction over azimuths where limited or no noise reduction is obtained; (b) the MWF preserves localization of the target speech component but may distort localization of the noise component. The latter is dependent on signal-to-noise ratio and masking effects; (c) the MWF-N enables correct localization of both the speech and the noise components; (d) the statistical Wiener filter approach introduces a better combination of sound source localization and noise reduction performance than the ADM approach.
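
    At its core, each member of the Wiener-filter family compared here applies a per-frequency gain derived from estimated signal and noise statistics. This single-channel sketch (an illustration of the basic rule, not the binaural MWF itself; names are our own) shows the idea:

    ```python
    import numpy as np

    def wiener_gain(noisy_psd, noise_psd):
        """Single-channel Wiener gain per frequency bin:
        G = max(PSD_noisy - PSD_noise, 0) / PSD_noisy.
        Noise-dominated bins are attenuated toward zero; clean bins pass
        almost unchanged. The binaural MWF extends this across
        microphones while (partially) preserving binaural cues."""
        noisy_psd = np.asarray(noisy_psd, dtype=float)
        signal_psd = np.maximum(noisy_psd - np.asarray(noise_psd, dtype=float), 0.0)
        return signal_psd / np.maximum(noisy_psd, 1e-12)
    ```

    The paper's MWF-N variant mixes a fraction of the unprocessed noise back in, which is what restores correct localization of the noise component at some cost in noise reduction.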

  17. Localizing semantic interference from distractor sounds in picture naming: A dual-task study.

    Science.gov (United States)

    Mädebach, Andreas; Kieseler, Marie-Luise; Jescheniak, Jörg D

    2017-10-13

    In this study we explored the locus of semantic interference in a novel picture-sound interference task in which participants name pictures while ignoring environmental distractor sounds. In a previous study using this task (Mädebach, Wöhner, Kieseler, & Jescheniak, in Journal of Experimental Psychology: Human Perception and Performance, 43, 1629-1646, 2017), we showed that semantically related distractor sounds (e.g., BARKING dog) interfere with a picture-naming response (e.g., "horse") more strongly than unrelated distractor sounds do (e.g., DRUMMING drum). In the experiment reported here, we employed the psychological refractory period (PRP) approach to explore the locus of this effect. We combined a geometric form classification task (square vs. circle; Task 1) with the picture-sound interference task (Task 2). The stimulus onset asynchrony (SOA) between the tasks was systematically varied (0 vs. 500 ms). There were three central findings. First, the semantic interference effect from distractor sounds was replicated. Second, picture naming (in Task 2) was slower with the short than with the long task SOA. Third, both effects were additive; that is, the semantic interference effects were of similar magnitude at both task SOAs. This suggests that the interference arises during response selection or later stages, not during early perceptual processing. This finding corroborates the theory that semantic interference from distractor sounds reflects a competitive selection mechanism in word production.

  18. What and Where in auditory sensory processing: A high-density electrical mapping study of distinct neural processes underlying sound object recognition and sound localization

    Directory of Open Access Journals (Sweden)

    Victoria M Leavitt

    2011-06-01

    Full Text Available Functionally distinct dorsal and ventral auditory pathways for sound localization (where) and sound object recognition (what) have been described in non-human primates. A handful of studies have explored differential processing within these streams in humans, with highly inconsistent findings. Stimuli employed have included simple tones, noise bursts, and speech sounds, with simulated left-right spatial manipulations, and in some cases participants were not required to actively discriminate the stimuli. Our contention is that these paradigms were not well suited to dissociating processing within the two streams. Our aim here was to determine how early in processing we could find evidence for dissociable pathways using better-titrated what and where task conditions. The use of more compelling tasks should allow us to amplify differential processing within the dorsal and ventral pathways. We employed high-density electrical mapping using a relatively large and environmentally realistic stimulus set (seven animal calls delivered from seven free-field spatial locations), with stimulus configuration identical across the where and what tasks. Topographic analysis revealed distinct dorsal and ventral auditory processing networks during the where and what tasks, with the earliest point of divergence seen during the N1 component of the auditory evoked response, beginning at approximately 100 ms. While this difference occurred during the N1 timeframe, it was not a simple modulation of N1 amplitude, as it displayed a wholly different topographic distribution to that of the N1. Global dissimilarity measures using topographic modulation analysis confirmed that this difference between tasks was driven by a shift in the underlying generator configuration. Minimum norm source reconstruction revealed distinct activations that corresponded well with activity within putative dorsal and ventral auditory structures.

  19. Effects of interaural level differences on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2012-01-01

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency-dependent shaping of binaural cues such as interaural level differences (ILDs) and interaural time differences (ITDs). In rooms, the sound reaching the two ears is further modified by reverberant energy, which leads to increased fluctuations in short-term ILDs and ITDs. In the present study, the effect of ILD fluctuations on the externalization of sound was investigated. ... For sounds that contain frequencies above about 1 kHz, the ILD fluctuations were found to be an essential cue for externalization.

  20. Local Mechanisms for Loud Sound-Enhanced Aminoglycoside Entry into Outer Hair Cells

    Directory of Open Access Journals (Sweden)

    Hongzhe Li

    2015-04-01

    Loud sound exposure exacerbates aminoglycoside ototoxicity, increasing the risk of permanent hearing loss and degrading the quality of life in affected individuals. We previously reported that loud sound exposure induces temporary threshold shifts (TTS) and enhances uptake of aminoglycosides, like gentamicin, by cochlear outer hair cells (OHCs). Here, we explore mechanisms by which loud sound exposure and TTS could increase aminoglycoside uptake by OHCs that may underlie this form of ototoxic synergy. Mice were exposed to loud sound levels to induce TTS, and received fluorescently-tagged gentamicin (GTTR) for 30 minutes prior to fixation. The degree of TTS was assessed by comparing auditory brainstem responses before and after loud sound exposure. The number of tip links, which gate the GTTR-permeant mechanoelectrical transducer (MET) channels, was determined in OHC bundles, with or without exposure to loud sound, using scanning electron microscopy. We found that wide-band noise (WBN) levels that induce TTS also enhance OHC uptake of GTTR compared to OHCs in control cochleae. In cochlear regions with TTS, the increase in OHC uptake of GTTR was significantly greater than in adjacent pillar cells. In control mice, we identified stereociliary tip links at ~50% of potential positions in OHC bundles. However, the number of OHC tip links was significantly reduced in mice that received WBN at levels capable of inducing TTS. These data suggest that GTTR uptake by OHCs during TTS occurs by increased permeation of surviving, mechanically-gated MET channels, and/or non-MET aminoglycoside-permeant channels activated following loud sound exposure. Loss of tip links would hyperpolarize hair cells and potentially increase drug uptake via aminoglycoside-permeant channels expressed by hair cells. The effect of TTS on aminoglycoside-permeant channel kinetics will shed new light on the mechanisms of loud sound-enhanced aminoglycoside uptake, and consequently on ototoxic

  1. Sound lateralization test in adolescent blind individuals.

    Science.gov (United States)

    Yabe, Takao; Kaga, Kimitaka

    2005-06-21

    Blind individuals need to compensate for the lack of visual information with other sensory inputs. In particular, auditory inputs are crucial to such individuals. To investigate whether blind individuals localize sound in space better than sighted individuals, we tested the auditory ability of adolescent blind individuals using a sound lateralization method. The interaural time difference discrimination thresholds of totally blind individuals were significantly smaller than those of blind individuals with residual vision and of sighted controls. These findings suggest that blind individuals have better auditory spatial ability than individuals with visual cues; therefore, some perceptual compensation occurred in the former.

  2. A "looming bias" in spatial hearing? Effects of acoustic intensity and spectrum on categorical sound source localization.

    Science.gov (United States)

    McCarthy, Lisa; Olsen, Kirk N

    2017-01-01

    Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /ә/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.

  3. Experimental analysis of considering the sound pressure distribution pattern at the ear canal entrance as an unrevealed head-related localization clue

    Institute of Scientific and Technical Information of China (English)

    TONG Xin; QI Na; MENG Zihou

    2018-01-01

    By analyzing the differences between binaural recording and real listening, it was deduced that there are some unrevealed auditory localization clues, and that the sound pressure distribution pattern at the entrance of the ear canal is probably such a clue. Listening tests proved, by reductio ad absurdum, that the unrevealed auditory localization clues really exist, and the effective frequency bands of the unrevealed localization clues were deduced and summarized. Finite-element simulations showed that the pressure distribution at the entrance of the ear canal is non-uniform and that its pattern is related to the direction of the sound source, demonstrating that the sound pressure distribution pattern at the entrance of the ear canal carries sound source direction information and can be used as an unrevealed localization clue. The frequency bands in which the sound pressure distribution patterns differed significantly between front and back sound source directions roughly matched the effective frequency bands of the unrevealed localization clues obtained from the listening tests. To some extent, this supports the hypothesis that the sound pressure distribution pattern could be a kind of unrevealed auditory localization clue.

  4. Evidence for cue-independent spatial representation in the human auditory cortex during active listening.

    Science.gov (United States)

    Higgins, Nathan C; McLaughlin, Susan A; Rinne, Teemu; Stecker, G Christopher

    2017-09-05

    Few auditory functions are as important or as universal as the capacity for auditory spatial awareness (e.g., sound localization). That ability relies on sensitivity to acoustical cues-particularly interaural time and level differences (ITD and ILD)-that correlate with sound-source locations. Under nonspatial listening conditions, cortical sensitivity to ITD and ILD takes the form of broad contralaterally dominated response functions. It is unknown, however, whether that sensitivity reflects representations of the specific physical cues or a higher-order representation of auditory space (i.e., integrated cue processing), nor is it known whether responses to spatial cues are modulated by active spatial listening. To investigate, sensitivity to parametrically varied ITD or ILD cues was measured using fMRI during spatial and nonspatial listening tasks. Task type varied across blocks where targets were presented in one of three dimensions: auditory location, pitch, or visual brightness. Task effects were localized primarily to lateral posterior superior temporal gyrus (pSTG) and modulated binaural-cue response functions differently in the two hemispheres. Active spatial listening (location tasks) enhanced both contralateral and ipsilateral responses in the right hemisphere but maintained or enhanced contralateral dominance in the left hemisphere. Two observations suggest integrated processing of ITD and ILD. First, overlapping regions in medial pSTG exhibited significant sensitivity to both cues. Second, successful classification of multivoxel patterns was observed for both cue types and-critically-for cross-cue classification. Together, these results suggest a higher-order representation of auditory space in the human auditory cortex that at least partly integrates the specific underlying cues.
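    A toy decoding exercise can illustrate the cross-cue classification logic reported above: if ITD-defined and ILD-defined locations evoke a shared spatial code, a classifier trained on response patterns from one cue should transfer to the other. The sketch below uses synthetic "voxel" patterns and a nearest-centroid rule; it is not the study's fMRI pipeline, and all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_patterns(side, n=50, d=20):
    # Toy voxel patterns: the same spatial code (sign on the first axis)
    # regardless of which cue (ITD or ILD) conveys the location
    base = np.zeros(d)
    base[0] = 1.0 if side == "left" else -1.0
    return base + 0.5 * rng.standard_normal((n, d))

# "Train" centroids on ITD trials, test on ILD trials drawn from the
# same generative spatial code (the cross-cue transfer condition)
train_left, train_right = simulate_patterns("left"), simulate_patterns("right")
test = np.vstack([simulate_patterns("left"), simulate_patterns("right")])
labels = np.array([0] * 50 + [1] * 50)          # 0 = left, 1 = right

c_left, c_right = train_left.mean(axis=0), train_right.mean(axis=0)
pred = (np.linalg.norm(test - c_left, axis=1) >
        np.linalg.norm(test - c_right, axis=1)).astype(int)
accuracy = (pred == labels).mean()              # well above chance
```

    If the two cues evoked unrelated patterns, the same transfer test would hover near chance (0.5); that contrast is what the multivoxel cross-cue analysis exploits.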

  5. Ionospheric Electron Densities at Mars: Comparison of Mars Express Ionospheric Sounding and MAVEN Local Measurement

    Czech Academy of Sciences Publication Activity Database

    Němec, F.; Morgan, D. D.; Fowler, C.M.; Kopf, A.J.; Andersson, L.; Gurnett, D. A.; Andrews, D.J.; Truhlík, Vladimír

    2017-01-01

    Roč. 122, č. 12 (2017), s. 12393-12405 E-ISSN 2169-9402 Institutional support: RVO:68378289 Keywords: Mars * ionosphere * MARSIS * Mars Express * MAVEN * radar sounding Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics OBOR OECD: Astronomy (including astrophysics, space science) http://onlinelibrary.wiley.com/doi/10.1002/2017JA024629/full

  6. Hoeren unter Wasser: Absolute Reizschwellen und Richtungswahrnehnumg (Underwater Hearing: Absolute Thresholds and Sound Localization),

    Science.gov (United States)

    The article deals first with the theoretical foundations of underwater hearing, and the effects of the acoustical characteristics of water on hearing...lead to the conclusion that, in water, man can locate the direction of sound at low and at very high tonal frequencies of the audio range, but this ability is probably vanishing in the middle range of frequencies. (Author)

  7. Emphasis of spatial cues in the temporal fine structure during the rising segments of amplitude-modulated sounds II: single-neuron recordings

    Science.gov (United States)

    Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David

    2014-01-01

    Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
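    The AMBB stimulus underlying these experiments can be approximated in a few lines: both ears share an amplitude envelope at the modulation rate, and a carrier-frequency offset equal to that rate makes the instantaneous IPD sweep through a full cycle once per modulation period. A minimal sketch; the parameter values are illustrative, not those used by Dietz et al.:

```python
import numpy as np

fs = 44100        # sample rate (Hz)
fc = 500.0        # left-ear carrier frequency (Hz)
fm = 4.0          # modulation rate (Hz) = binaural-beat rate
dur = 2.0         # duration (s)
t = np.arange(int(fs * dur)) / fs

# Raised-cosine amplitude envelope shared by both ears
env = 0.5 * (1.0 - np.cos(2.0 * np.pi * fm * t))

# Right-ear carrier offset by fm, so the instantaneous IPD drifts
# through a full cycle once per amplitude-modulation period
left = env * np.sin(2.0 * np.pi * fc * t)
right = env * np.sin(2.0 * np.pi * (fc + fm) * t)

stimulus = np.stack([left, right], axis=1)   # (n_samples, 2)
```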

  8. Neuronal specializations for the processing of interaural difference cues in the chick

    Directory of Open Access Journals (Sweden)

    Harunori Ohmori

    2014-05-01

    Sound information is encoded as a series of spikes of the auditory nerve fibers (ANFs), and then transmitted to the brainstem auditory nuclei. Features such as timing and level are extracted from ANF activity and further processed as the interaural time difference (ITD) and the interaural level difference (ILD), respectively. These two interaural difference cues are used for sound source localization by behaving animals. Both cues depend on the head size of animals and are extremely small, requiring specialized neural properties in order to process them with precision. Moreover, the sound level and timing cues are not processed independently from one another. Neurons in the nucleus angularis (NA) are specialized for coding sound level information in birds, and the ILD is processed in the posterior part of the dorsal lateral lemniscus nucleus (LLDp). Processing of ILD is affected by the phase difference of binaural sound. Temporal features of sound are encoded in the pathway starting in the nucleus magnocellularis (NM), and ITD is processed in the nucleus laminaris (NL). In this pathway a variety of specializations are found in synapse morphology, neuronal excitability, and the distribution of ion channels and receptors along the tonotopic axis, which reduce spike timing fluctuation at the ANF-NM synapse and impart precise and stable ITD processing to the NL. Moreover, the contrast of ITD processing in the NL is enhanced over a wide range of sound levels through the activity of GABAergic inhibitory systems from both the superior olivary nucleus (SON) and local inhibitory neurons driven monosynaptically by NM activity.
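    Because both cues scale with head size, the largest available ITD can be estimated with a spherical-head (Woodworth-style) approximation, ITD(theta) ~ (r/c)(theta + sin theta). A minimal sketch; the head radius below is an assumed small-bird value for illustration, not a measurement from the study:

```python
import math

def max_itd(head_radius_m, c=343.0):
    """Largest ITD under a spherical-head model: (r/c)*(theta + sin(theta)),
    evaluated at theta = pi/2 (source directly to one side)."""
    theta = math.pi / 2.0
    return head_radius_m / c * (theta + math.sin(theta))

itd_s = max_itd(0.006)   # assumed ~6 mm head radius; tens of microseconds
```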

  9. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    Science.gov (United States)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was carried out on a diesel engine to isolate other interference noise from the No. 1-5 cylinders. Only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method is utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis method, and prior knowledge of the diesel engine are combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequency of the combustion noise and the piston slap noise are respectively concentrated at 4350 Hz and 1988 Hz. Compared with the blind source separation method, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise.
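    The binaural-localization step in such a method typically begins with an ITD estimate obtained by cross-correlating the two microphone signals and restricting the search to physically plausible lags. A generic sketch of that estimate (not the authors' exact algorithm):

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd_s=1e-3):
    """Estimate the inter-microphone time difference by locating the
    cross-correlation peak within physically plausible lags."""
    n = len(left)
    corr = np.correlate(left, right, mode="full")   # lags -(n-1)..(n-1)
    lags = np.arange(-(n - 1), n)
    mask = np.abs(lags) <= int(max_itd_s * fs)
    best_lag = lags[mask][np.argmax(corr[mask])]
    return best_lag / fs    # negative: the right-channel signal lags

# Synthetic check: delay a noise burst by 10 samples in the right channel
rng = np.random.default_rng(0)
fs = 48000
noise = rng.standard_normal(4096)
itd = estimate_itd(noise, np.roll(noise, 10), fs)
```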

  10. Auditory disorders and acquisition of the ability to localize sound in children born to HIV-positive mothers

    Directory of Open Access Journals (Sweden)

    Carla Gentile Matas

    The objective of the present study was to evaluate children born to HIV-infected mothers and to determine whether such children present auditory disorders or poor acquisition of the ability to localize sound. The population studied included 143 children (82 males and 61 females), ranging in age from one month to 30 months. The children were divided into three groups according to the classification system devised in 1994 by the Centers for Disease Control and Prevention: infected; seroreverted; and exposed. The children were then submitted to audiological evaluation, including behavioral audiometry, visual reinforcement audiometry and measurement of acoustic immittance. Statistical analysis showed that the incidence of auditory disorders was significantly higher in the infected group. In the seroreverted and exposed groups, there was a marked absence of auditory disorders. In the infected group as a whole, the findings were suggestive of central auditory disorders. Evolution of the ability to localize sound was found to be poorer among the children in the infected group than among those in the seroreverted and exposed groups.

  11. Effects of Interaural Level and Time Differences on the Externalization of Sound

    DEFF Research Database (Denmark)

    Dau, Torsten; Catic, Jasmina; Santurette, Sébastien

    Distant sound sources in our environment are perceived as externalized and are thus properly localized in both direction and distance. This is due to the acoustic filtering by the head, torso, and external ears, which provides frequency dependent shaping of binaural cues, such as interaural level...... differences (ILDs) and interaural time differences (ITDs). Further, the binaural cues provided by reverberation in an enclosed space may also contribute to externalization. While these spatial cues are available in their natural form when listening to real-world sound sources, hearing-aid signal processing...... is consistent with the physical analysis that showed that a decreased distance to the sound source also reduced the fluctuations in ILDs....
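    The short-term ILD fluctuations examined in these studies are typically computed from the ratio of left- and right-ear energies in brief analysis windows. A minimal sketch; the 20-ms window and the dB convention are illustrative assumptions:

```python
import numpy as np

def short_term_ild(left, right, fs, win_ms=20.0):
    """Short-term ILD in dB per analysis window; positive = left louder."""
    win = int(fs * win_ms / 1000.0)
    n = (min(len(left), len(right)) // win) * win
    l = left[:n].reshape(-1, win)
    r = right[:n].reshape(-1, win)
    eps = 1e-12   # guard against log of zero in silent windows
    return 10.0 * np.log10((np.mean(l ** 2, axis=1) + eps) /
                           (np.mean(r ** 2, axis=1) + eps))

# A constant 2:1 amplitude ratio gives ~6 dB in every window
fs = 16000
ild = short_term_ild(np.full(fs, 0.5), np.full(fs, 0.25), fs)
```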

  12. Relevance of Spectral Cues for Auditory Spatial Processing in the Occipital Cortex of the Blind

    Science.gov (United States)

    Voss, Patrice; Lepore, Franco; Gougoux, Frédéric; Zatorre, Robert J.

    2011-01-01

    We have previously shown that some blind individuals can localize sounds more accurately than their sighted counterparts when one ear is obstructed, and that this ability is strongly associated with occipital cortex activity. Given that spectral cues are important for monaurally localizing sounds when one ear is obstructed, and that blind individuals are more sensitive to small spectral differences, we hypothesized that enhanced use of spectral cues via occipital cortex mechanisms could explain the better performance of blind individuals in monaural localization. Using positron-emission tomography (PET), we scanned blind and sighted persons as they discriminated between sounds originating from a single spatial position, but with different spectral profiles that simulated different spatial positions based on head-related transfer functions. We show here that a sub-group of early blind individuals showing superior monaural sound localization abilities performed significantly better than any other group on this spectral discrimination task. For all groups, performance was best for stimuli simulating peripheral positions, consistent with the notion that spectral cues are more helpful for discriminating peripheral sources. PET results showed that all blind groups showed cerebral blood flow increases in the occipital cortex; but this was also the case in the sighted group. A voxel-wise covariation analysis showed that more occipital recruitment was associated with better performance across all blind subjects but not the sighted. An inter-regional covariation analysis showed that the occipital activity in the blind covaried with that of several frontal and parietal regions known for their role in auditory spatial processing. 
Overall, these results support the notion that the superior ability of a sub-group of early-blind individuals to localize sounds is mediated by their superior ability to use spectral cues, and that this ability is subserved by cortical processing in

  13. Effects of user training with electronically-modulated sound transmission hearing protectors and the open ear on horizontal localization ability.

    Science.gov (United States)

    Casali, John G; Robinette, Martin B

    2015-02-01

    To determine if training with electronically-modulated hearing protection (EMHP) and the open ear results in auditory learning on a horizontal localization task. Baseline localization testing was conducted in three listening conditions (open-ear, in-the-ear (ITE) EMHP, and over-the-ear (OTE) EMHP). Participants then wore either an ITE or OTE EMHP for 12, almost daily, one-hour training sessions. After training was complete, participants again underwent localization testing in all three listening conditions. A computer with a custom software and hardware interface presented localization sounds and collected participant responses. Twelve participants were recruited from the student population at Virginia Tech. Audiometric requirements were 35 dBHL at 500, 1000, and 2000 Hz bilaterally, and 55 dBHL at 4000 Hz in at least one ear. Pre-training localization performance with an ITE or OTE EMHP was worse than open-ear performance. After training with any given listening condition, including open-ear, performance in that listening condition improved, in part from a practice effect. However, post-training localization performance showed near equal performance between the open-ear and training EMHP. Auditory learning occurred for the training EMHP, but not for the non-training EMHP; that is, there was no significant training crossover effect between the ITE and the OTE devices. It is evident from this study that auditory learning (improved horizontal localization performance) occurred with the EMHP for which training was performed. However, performance improvements found with the training EMHP were not realized in the non-training EMHP. Furthermore, localization performance in the open-ear condition also benefitted from training on the task.

  14. Sound and sound sources

    DEFF Research Database (Denmark)

    Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-01-01

    There is no difference in principle between the infrasonic and ultrasonic sounds, which are inaudible to humans (or other animals) and the sounds that we can hear. In all cases, sound is a wave of pressure and particle oscillations propagating through an elastic medium, such as air. This chapter...... is about the physical laws that govern how animals produce sound signals and how physical principles determine the signals’ frequency content and sound level, the nature of the sound field (sound pressure versus particle vibrations) as well as directional properties of the emitted signal. Many...... of these properties are dictated by simple physical relationships between the size of the sound emitter and the wavelength of emitted sound. The wavelengths of the signals need to be sufficiently short in relation to the size of the emitter to allow for the efficient production of propagating sound pressure waves...

  15. Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss.

    Science.gov (United States)

    Kolarik, Andrew J; Moore, Brian C J; Zahorik, Pavel; Cirstea, Silvia; Pardhan, Shahina

    2016-02-01

    Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
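    The sound-level cue mentioned above follows, in free field, the inverse-square law: about a 6-dB drop per doubling of distance. A minimal numeric illustration (reference level and distances are arbitrary):

```python
import math

def level_at_distance(level_ref_db, ref_dist_m, dist_m):
    """Free-field level at dist_m given the level measured at ref_dist_m
    (inverse-square law: minus 20*log10 of the distance ratio)."""
    return level_ref_db - 20.0 * math.log10(dist_m / ref_dist_m)

# A source measured at 60 dB SPL from 1 m, heard from 2 m and 4 m:
l2 = level_at_distance(60.0, 1.0, 2.0)   # ~54 dB
l4 = level_at_distance(60.0, 1.0, 4.0)   # ~48 dB
```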

  16. 77 FR 33967 - Special Local Regulations; OPSAIL 2012 Connecticut, Niantic Bay, Long Island Sound, Thames River...

    Science.gov (United States)

    2012-06-08

    ..., Operation Sail, Inc., is planning to publish information on the event in local newspapers, internet sites... areas for viewing the ``Parade of Sail'' have been established to allow for maximum use of the waterways... SLIS or designated representative. Before the effective period, the Coast Guard will make notifications...

  17. Segregating Top-Down Selective Attention from Response Inhibition in a Spatial Cueing Go/NoGo Task: An ERP and Source Localization Study.

    Science.gov (United States)

    Hong, Xiangfei; Wang, Yao; Sun, Junfeng; Li, Chunbo; Tong, Shanbao

    2017-08-29

    Successfully inhibiting a prepotent response tendency requires the attentional detection of signals which cue response cancellation. Although neuroimaging studies have identified important roles of stimulus-driven processing in this attentional detection, the effects of top-down control were scarcely investigated. In this study, scalp EEG was recorded from thirty-two participants during a modified Go/NoGo task, in which a spatial-cueing approach was implemented to manipulate top-down selective attention. We observed classical event-related potential components, including N2 and P3, in the attended condition of response inhibition. In the ignored condition of response inhibition, by contrast, a smaller P3 was observed and the N2 was absent. The correlation between P3 and CNV during the foreperiod suggested an inhibitory role of P3 in both conditions. Furthermore, source analysis suggested that P3 generation was mainly localized to the midcingulate cortex, and the attended condition showed increased activation relative to the ignored condition in several regions, including the inferior frontal gyrus, middle frontal gyrus, precentral gyrus, insula and uncus, suggesting that these regions were involved in top-down attentional control rather than inhibitory processing. Taken together, by segregating electrophysiological correlates of top-down selective attention from those of response inhibition, our findings provide new insights into the neural mechanisms of response inhibition.

  18. 76 FR 39292 - Special Local Regulations & Safety Zones; Marine Events in Captain of the Port Long Island Sound...

    Science.gov (United States)

    2011-07-06

    ... Port Long Island Sound Zone AGENCY: Coast Guard, DHS. ACTION: Temporary final rule. SUMMARY: The Coast... and fireworks displays within the Captain of the Port (COTP) Long Island Sound Zone. This action is... Island Sound. DATES: This rule is effective in the CFR on July 6, 2011 through 6 p.m. on October 2, 2011...

  19. Somatic stem cell differentiation is regulated by PI3K/Tor signaling in response to local cues.

    Science.gov (United States)

    Amoyel, Marc; Hillion, Kenzo-Hugo; Margolis, Shally R; Bach, Erika A

    2016-11-01

    Stem cells reside in niches that provide signals to maintain self-renewal, and differentiation is viewed as a passive process that depends on loss of access to these signals. Here, we demonstrate that the differentiation of somatic cyst stem cells (CySCs) in the Drosophila testis is actively promoted by PI3K/Tor signaling, as CySCs lacking PI3K/Tor activity cannot differentiate properly. We find that an insulin peptide produced by somatic cells immediately outside of the stem cell niche acts locally to promote somatic differentiation through Insulin-like receptor (InR) activation. These results indicate that there is a local 'differentiation' niche that upregulates PI3K/Tor signaling in the early daughters of CySCs. Finally, we demonstrate that CySCs secrete the Dilp-binding protein ImpL2, the Drosophila homolog of IGFBP7, into the stem cell niche, which blocks InR activation in CySCs. Thus, we show that somatic cell differentiation is controlled by PI3K/Tor signaling downstream of InR and that the local production of positive and negative InR signals regulates the differentiation niche. These results support a model in which leaving the stem cell niche and initiating differentiation are actively induced by signaling. © 2016. Published by The Company of Biologists Ltd.

  20. The stability of second sound waves in a rotating Darcy–Brinkman porous layer in local thermal non-equilibrium

    Energy Technology Data Exchange (ETDEWEB)

    Eltayeb, I A; Elbashir, T B A, E-mail: ieltayeb@squ.edu.om, E-mail: elbashir@squ.edu.om [Department of Mathematics and Statistics, College of Science, Sultan Qaboos University, Muscat 123 (Oman)

    2017-08-15

    The linear and nonlinear stabilities of second sound waves in a rotating porous Darcy–Brinkman layer in local thermal non-equilibrium are studied when the heat flux in the solid obeys the Cattaneo law. The simultaneous action of the Brinkman effect (effective viscosity) and rotation is shown to destabilise the layer, as compared to either of them acting alone, for both stationary and overstable modes. The effective viscosity tends to favour overstable modes while rotation tends to favour stationary convection. Rapid rotation invokes a negative viscosity effect that suppresses the stabilising effect of porosity so that the stability characteristics resemble those of the classical rotating Benard layer. A formal weakly nonlinear analysis yields evolution equations of the Landau–Stuart type governing the slow time development of the amplitudes of the unstable waves. The equilibrium points of the evolution equations are analysed and the overall development of the amplitudes is examined. Both overstable and stationary modes can exhibit supercritical stability; supercritical instability, subcritical instability and stability are not possible. The dependence of the supercritical stability on the relative values of the six dimensionless parameters representing thermal non-equilibrium, rotation, porosity, relaxation time, thermal diffusivities and Brinkman effect is illustrated as regions in regime diagrams in the parameter space. The dependence of the heat transfer and the mean heat flux on the parameters of the problem is also discussed. (paper)
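    The Landau-Stuart evolution equation mentioned above, dA/dt = sigma*A - l*|A|^2*A, can be integrated directly to see supercritical saturation: when Re(l) > 0 the amplitude settles at |A| = sqrt(Re(sigma)/Re(l)). The coefficients below are illustrative, not values derived from the paper's six parameters:

```python
# Forward-Euler integration of the Landau-Stuart amplitude equation
# dA/dt = sigma*A - landau*|A|**2*A  (illustrative coefficients)
sigma = 1.0 + 0.5j      # linear growth rate + frequency shift
landau = 2.0 + 0.3j     # Re(landau) > 0 -> supercritical saturation

dt, steps = 1e-3, 20000
A = 1e-3 + 0j           # small initial perturbation
for _ in range(steps):
    A += dt * (sigma * A - landau * abs(A) ** 2 * A)
# |A| approaches sqrt(Re(sigma)/Re(landau)) = sqrt(0.5)
```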

  1. Contribution of a 3D ray tracing model in a complex medium to the localization of infra-sound sources

    International Nuclear Information System (INIS)

    Mialle, Pierrick

    2007-01-01

    Localisation of infra-sound sources is a difficult task because of the large propagation distances at stake and the complexity of the atmosphere. Addressing it requires information of many kinds: an understanding of wave propagation, the role and influence of the atmosphere and its spatio-temporal variations, knowledge of source and detection parameters, and the configuration of the stations and their global distribution. Two methods based on the construction of propagation tables depending on station, date and time are introduced. These tables require a long-range propagation tool to simulate propagation through a complex medium; this is provided by WASP-3D Sph, a 3D paraxial ray-tracing tool that integrates amplitude estimation and horizontal wind fields varying in space and time. The tables are centered on the receiver. They describe the spatial variations of the main observation parameters and offer a snapshot of atmospheric propagation as a function of range for every simulated phase. For each path, celerity, azimuth deviation, attenuation and return altitude are predicted and used to build the tables. The latter help to identify detected phases and are integrated into an accurate localization procedure. The procedure is tested on three case studies: the explosion of a gas pipeline near Ghislenghien, Belgium, in 2004; the explosion of a military facility in Novaky, Slovakia, in 2007; and the explosion of the Buncefield oil depot in the United Kingdom in 2005. For each, event specificities, propagation parameters and the configurations used are introduced. The accuracy and optimization of the localization are discussed. A validation study is presented regarding International Monitoring System stations along a meridian - I18DK (Greenland, Denmark), I51UK (Bermuda, United Kingdom), I25FR (Guyane, France), I08BO (La Paz, Bolivia), I01AR (Paso Flores, Argentina), I02AR (Ushuaia, Argentina), I54US (Antarctica, U.S.A.) - to
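    The propagation tables encode, for each simulated phase, quantities such as celerity (range divided by travel time), so a single observed travel time immediately brackets the source-station distance. A toy sketch of that association step; the celerity bounds are typical published infrasound values assumed here, not taken from this work:

```python
def distance_bounds_km(travel_time_s, cel_min_km_s=0.22, cel_max_km_s=0.34):
    """Bracket the source-station distance from one arrival's travel time,
    assuming celerities between slow thermospheric (~0.22 km/s) and fast
    tropospheric (~0.34 km/s) returns (assumed typical values)."""
    return cel_min_km_s * travel_time_s, cel_max_km_s * travel_time_s

# An arrival recorded 1000 s after the event lies roughly 220-340 km away
lo, hi = distance_bounds_km(1000.0)
```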

  2. The influence of signal parameters on the sound source localization ability of a harbor porpoise (Phocoena phocoena)

    NARCIS (Netherlands)

    Kastelein, R.A.; Haan, D.de; Verboom, W.C.

    2007-01-01

    It is unclear how well harbor porpoises can locate sound sources, and thus can locate acoustic alarms on gillnets. Therefore the ability of a porpoise to determine the location of a sound source was determined. The animal was trained to indicate the active one of 16 transducers in a 16-m-diam

  3. Comparison between bilateral cochlear implants and Neurelec Digisonic(®) SP Binaural cochlear implant: speech perception, sound localization and patient self-assessment.

    Science.gov (United States)

    Bonnard, Damien; Lautissier, Sylvie; Bosset-Audoit, Amélie; Coriat, Géraldine; Beraha, Max; Maunoury, Antoine; Martel, Jacques; Darrouzet, Vincent; Bébéar, Jean-Pierre; Dauman, René

    2013-01-01

    An alternative to bilateral cochlear implantation is offered by the Neurelec Digisonic(®) SP Binaural cochlear implant, which allows stimulation of both cochleae within a single device. The purpose of this prospective study was to compare a group of Neurelec Digisonic(®) SP Binaural implant users (denoted BINAURAL group, n = 7) with a group of bilateral adult cochlear implant users (denoted BILATERAL group, n = 6) in terms of speech perception, sound localization, and self-assessment of health status and hearing disability. Speech perception was assessed using word recognition at 60 dB SPL in quiet and in a 'cocktail party' noise delivered through five loudspeakers in the hemi-sound field facing the patient (signal-to-noise ratio = +10 dB). The sound localization task was to determine the source of a sound stimulus among five speakers positioned between -90° and +90° from midline. Change in health status was assessed using the Glasgow Benefit Inventory and hearing disability was evaluated with the Abbreviated Profile of Hearing Aid Benefit. Speech perception was not statistically different between the two groups, even though there was a trend in favor of the BINAURAL group (mean percent word recognition in the BINAURAL and BILATERAL groups: 70 vs. 56.7% in quiet, 55.7 vs. 43.3% in noise). There was also no significant difference with regard to performance in sound localization and self-assessment of health status and hearing disability. On the basis of the BINAURAL group's performance in hearing tasks involving the detection of interaural differences, implantation with the Neurelec Digisonic(®) SP Binaural implant may be considered to restore effective binaural hearing. Based on these first comparative results, this device seems to provide benefits similar to those of traditional bilateral cochlear implantation, with a new approach to stimulate both auditory nerves. Copyright © 2013 S. Karger AG, Basel.

  4. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together... ...one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor...

  5. Improvements of sound localization abilities by the facial ruff of the barn owl (Tyto alba) as demonstrated by virtual ruff removal.

    Directory of Open Access Journals (Sweden)

    Laura Hausmann

    Full Text Available BACKGROUND: When sound arrives at the eardrum it has already been filtered by the body, head, and outer ear. This process is mathematically described by the head-related transfer functions (HRTFs), which are characteristic for the spatial position of a sound source and for the individual ear. HRTFs in the barn owl (Tyto alba) are also shaped by the facial ruff, a specialization that alters interaural time differences (ITD), interaural intensity differences (ILD), and the frequency spectrum of the incoming sound to improve sound localization. Here we created novel stimuli to simulate the removal of the barn owl's ruff in a virtual acoustic environment, thus creating a situation similar to passive listening in other animals, and used these stimuli in behavioral tests. METHODOLOGY/PRINCIPAL FINDINGS: HRTFs were recorded from an owl before and after removal of the ruff feathers. Normal and ruff-removed conditions were created by filtering broadband noise with the HRTFs. Under normal virtual conditions, no differences in azimuthal head-turning behavior between individualized and non-individualized HRTFs were observed. The owls were able to respond differently to stimuli from the back than to stimuli from the front having the same ITD. By contrast, such a discrimination was not possible after the virtual removal of the ruff. Elevational head-turn angles were (slightly) smaller with non-individualized than with individualized HRTFs. The removal of the ruff resulted in a large decrease in elevational head-turning amplitudes. CONCLUSIONS/SIGNIFICANCE: The facial ruff (a) improves azimuthal sound localization by increasing the ITD range and (b) improves elevational sound localization in the frontal field by introducing a shift of iso-ILD lines out of the midsagittal plane, which causes ILDs to increase with increasing stimulus elevation. The changes at the behavioral level could be related to the changes in the binaural physical parameters that occurred after the
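The stimulus-generation idea in this record (filtering broadband noise with measured HRTFs to create virtual listening conditions) can be sketched as follows; the random FIR filters are placeholders for measured head-related impulse responses, an assumption of this sketch, not the owl data.

```python
# Minimal sketch of virtual-stimulus creation: broadband noise convolved
# with a left/right pair of head-related impulse responses (HRIRs).
# The HRIRs here are random stand-ins for measured filters.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(4800)            # broadband noise burst
hrir_left = rng.standard_normal(256) * 0.1   # placeholder for a measured HRIR
hrir_right = rng.standard_normal(256) * 0.1  # left/right differ for a real ear

left_ear = np.convolve(noise, hrir_left)     # signal at the left eardrum
right_ear = np.convolve(noise, hrir_right)   # signal at the right eardrum
binaural = np.stack([left_ear, right_ear])   # 2 x N headphone stimulus
print(binaural.shape)                        # → (2, 5055)
```

With measured HRTFs for "normal" and "ruff-removed" conditions, the same two-line convolution produces the contrasting virtual stimuli used in such experiments.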

  6. The zebrafish tailbud contains two independent populations of midline progenitor cells that maintain long-term germ layer plasticity and differentiate in response to local signaling cues

    Science.gov (United States)

    Row, Richard H.; Tsotras, Steve R.; Goto, Hana; Martin, Benjamin L.

    2016-01-01

    Vertebrate body axis formation depends on a population of bipotential neuromesodermal cells along the posterior wall of the tailbud that make a germ layer decision after gastrulation to form spinal cord and mesoderm. Despite exhibiting germ layer plasticity, these cells never give rise to midline tissues of the notochord, floor plate and dorsal endoderm, raising the question of whether midline tissues also arise from basal posterior progenitors after gastrulation. We show in zebrafish that local posterior signals specify germ layer fate in two basal tailbud midline progenitor populations. Wnt signaling induces notochord within a population of notochord/floor plate bipotential cells through negative transcriptional regulation of sox2. Notch signaling, required for hypochord induction during gastrulation, continues to act in the tailbud to specify hypochord from a notochord/hypochord bipotential cell population. Our results lend strong support to a continuous allocation model of midline tissue formation in zebrafish, and provide an embryological basis for zebrafish and mouse bifurcated notochord phenotypes as well as the rare human congenital split notochord syndrome. We demonstrate developmental equivalency between the tailbud progenitor cell populations. Midline progenitors can be transfated from notochord to somite fate after gastrulation by ectopic expression of msgn1, a master regulator of paraxial mesoderm fate, or if transplanted into the bipotential progenitors that normally give rise to somites. Our results indicate that the entire non-epidermal posterior body is derived from discrete, basal tailbud cell populations. These cells remain receptive to extracellular cues after gastrulation and continue to make basic germ layer decisions. PMID:26674311

  7. Trading of dynamic interaural time and level difference cues and its effect on the auditory motion-onset response measured with electroencephalography.

    Science.gov (United States)

    Altmann, Christian F; Ueda, Ryuhei; Bucher, Benoit; Furukawa, Shigeto; Ono, Kentaro; Kashino, Makio; Mima, Tatsuya; Fukuyama, Hidenao

    2017-10-01

    Interaural time (ITD) and level differences (ILD) constitute the two main cues for sound localization in the horizontal plane. Despite extensive research in animal models and humans, the mechanism of how these two cues are integrated into a unified percept is still far from clear. In this study, our aim was to test with human electroencephalography (EEG) whether integration of dynamic ITD and ILD cues is reflected in the so-called motion-onset response (MOR), an evoked potential elicited by moving sound sources. To this end, ITD and ILD trajectories were determined individually by cue trading psychophysics. We then measured EEG while subjects were presented with either static click-trains or click-trains that contained a dynamic portion at the end. The dynamic part was created by combining ITD with ILD either congruently to elicit the percept of a right/leftward moving sound, or incongruently to elicit the percept of a static sound. In two experiments that differed in the method to derive individual dynamic cue trading stimuli, we observed an MOR with at least a change-N1 (cN1) component for both the congruent and incongruent conditions at about 160-190 ms after motion-onset. A significant change-P2 (cP2) component for both the congruent and incongruent ITD/ILD combination was found only in the second experiment peaking at about 250 ms after motion onset. In sum, this study shows that a sound which - by a combination of counter-balanced ITD and ILD cues - induces a static percept can still elicit a motion-onset response, indicative of independent ITD and ILD processing at the level of the MOR - a component that has been proposed to be, at least partly, generated in non-primary auditory cortex. Copyright © 2017 Elsevier Inc. All rights reserved.
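For readers unfamiliar with the two cues, here is a minimal sketch of how ITD and ILD can be estimated from a binaural signal (ITD as the lag of the cross-correlation peak, ILD as an RMS level ratio). The synthetic signal and parameter values are illustrative assumptions, not the stimuli used in the study.

```python
# Hedged sketch: estimating ITD and ILD from a synthetic binaural signal.
import numpy as np

fs = 48000
rng = np.random.default_rng(1)
src = rng.standard_normal(fs // 10)       # 100 ms of broadband noise
delay = 24                                # 0.5 ms at 48 kHz

# Source on the listener's left: the right-ear copy arrives later and softer.
left = np.concatenate([src, np.zeros(delay)])
right = np.concatenate([np.zeros(delay), 0.5 * src])

def rms(s):
    return np.sqrt(np.mean(s ** 2))

# ITD: lag (right re left) of the cross-correlation maximum
lags = np.arange(-len(left) + 1, len(left))
itd_lag = lags[np.argmax(np.correlate(right, left, mode="full"))]
itd_us = itd_lag / fs * 1e6               # microseconds

# ILD: broadband RMS level difference in dB (left re right)
ild_db = 20 * np.log10(rms(left) / rms(right))
print(round(itd_us), round(ild_db, 1))    # → 500 6.0
```

Cue trading, as used in the study, pits these two estimates against each other: an ITD pointing left can be balanced by an ILD pointing right until the percept is centered.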

  8. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  9. Estimating location without external cues.

    Directory of Open Access Journals (Sweden)

    Allen Cheung

    2014-10-01

    Full Text Available The ability to determine one's location is fundamental to spatial navigation. Here, it is shown that localization is theoretically possible without the use of external cues, and without knowledge of initial position or orientation. With only error-prone self-motion estimates as input, a fully disoriented agent can, in principle, determine its location in familiar spaces with 1-fold rotational symmetry. Surprisingly, localization does not require the sensing of any external cue, including the boundary. The combination of self-motion estimates and an internal map of the arena provide enough information for localization. This stands in conflict with the supposition that 2D arenas are analogous to open fields. Using a rodent error model, it is shown that the localization performance which can be achieved is enough to initiate and maintain stable firing patterns like those of grid cells, starting from full disorientation. Successful localization was achieved when the rotational asymmetry was due to the external boundary, an interior barrier or a void space within an arena. Optimal localization performance was found to depend on arena shape, arena size, local and global rotational asymmetry, and the structure of the path taken during localization. Since allothetic cues including visual and boundary contact cues were not present, localization necessarily relied on the fusion of idiothetic self-motion cues and memory of the boundary. Implications for spatial navigation mechanisms are discussed, including possible relationships with place field overdispersion and hippocampal reverse replay. Based on these results, experiments are suggested to identify if and where information fusion occurs in the mammalian spatial memory system.
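The starting premise of this record, that error-prone self-motion estimates accumulate drift when integrated without external correction, can be illustrated with a toy dead-reckoning loop. The noise model and values below are invented for illustration and are far simpler than the rodent error model used in the paper.

```python
# Toy dead reckoning: integrating noisy self-motion estimates without any
# external cue lets position error accumulate (no correction step here).
import numpy as np

rng = np.random.default_rng(7)
true_pos = np.zeros(2)
est_pos = np.zeros(2)

for _ in range(1000):
    step = rng.normal(0.0, 1.0, size=2)               # true self-motion
    noisy_step = step + rng.normal(0.0, 0.1, size=2)  # idiothetic estimate
    true_pos += step
    est_pos += noisy_step

drift = float(np.linalg.norm(est_pos - true_pos))  # grows roughly as sqrt(steps)
print(f"drift after 1000 steps: {drift:.1f}")
```

The paper's point is that fusing such drifting estimates with a memorized map of a rotationally asymmetric boundary can nevertheless recover location, without sensing any external cue.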

  10. Three-year experience with the Sophono in children with congenital conductive unilateral hearing loss: tolerability, audiometry, and sound localization compared to a bone-anchored hearing aid.

    Science.gov (United States)

    Nelissen, Rik C; Agterberg, Martijn J H; Hol, Myrthe K S; Snik, Ad F M

    2016-10-01

    Bone conduction devices (BCDs) are advocated as an amplification option for patients with congenital conductive unilateral hearing loss (UHL), while other treatment options could also be considered. The current study compared a transcutaneous BCD (Sophono) with a percutaneous BCD (bone-anchored hearing aid, BAHA) in 12 children with congenital conductive UHL. Tolerability, audiometry, and sound localization abilities with both types of BCD were studied retrospectively. The mean follow-up was 3.6 years for the Sophono users (n = 6) and 4.7 years for the BAHA users (n = 6). In each group, two patients had stopped using their BCD. Tolerability was favorable for the Sophono. Aided thresholds with the Sophono were unsatisfactory, as they did not reach below a mean pure-tone average of 30 dB HL. Sound localization generally improved with both the Sophono and the BAHA, although localization abilities did not reach the level of normal-hearing children. These findings, together with previously reported outcomes, are important to take into account when counseling patients and their caretakers. The selection of a suitable amplification option should always be made deliberately and on an individual basis for each patient in this diverse group of children with congenital conductive UHL.

  11. Conditioned responses elicited by experimentally produced cues for smoking.

    Science.gov (United States)

    Mucha, R F; Pauli, P; Angrilli, A

    1998-03-01

    Several theories of drug-craving postulate that a signal for drug elicits conditioned responses. However, depending on the theory, a drug cue is said to elicit drug similar, drug compensatory, positive motivational, and negative motivational effects. Since animal data alone cannot tease apart the relative importance of different cue-related processes in the addict, we developed and examined a model of drug cues in the human based on a two-sound, differential conditioning procedure using smoking as the reinforcer. After multiple pairings of a sound with smoking, there was a preference for the smoking cue on a conditioned preference test. The acute effects of smoking (increased heart rate, respiration rate, skin conductance level, skin conductance fluctuations, EEG beta power and trapezius EMG, decreased alpha power) were not affected by the smoking cue, although subjects drew more on their cigarette in the presence of the smoking cue than in the presence of a control cue. Moreover, the cue did not change baseline behaviour except for a possible increase in EEG beta power and an increase in trapezius EMG at about the time when smoking should have occurred. The findings confirm the value of experimental models of drug cues in the human for comparing different cue phenomena in the dependent individual. They indicate that an acquired signal for drug in the human may elicit incentive motivational effects and associated preparatory motor responses in addition to possible conditioned tolerance.

  12. Human Sound Externalization in Reverberant Environments

    DEFF Research Database (Denmark)

    Catic, Jasmina

    In everyday environments, listeners perceive sound sources as externalized. In listening conditions where the spatial cues that are relevant for externalization are not represented correctly, such as when listening through headphones or hearing aids, a degraded perception of externalization may occur. In this thesis, the spatial cues that arise from a combined effect of filtering due to the head, torso, and pinna and the acoustic environment were analysed and the impact of such cues for the perception of externalization in different frequency regions was investigated. Distant sound sources were simulated via headphones using individualized binaural room impulse responses (BRIRs). An investigation of the influence of spectral content of a sound source on externalization showed that effective externalization cues are present across the entire frequency range. The fluctuation of interaural...

  13. Speech cues contribute to audiovisual spatial integration.

    Directory of Open Access Journals (Sweden)

    Christopher W Bishop

    Full Text Available Speech is the most important form of human communication but ambient sounds and competing talkers often degrade its acoustics. Fortunately the brain can use visual information, especially its highly precise spatial information, to improve speech comprehension in noisy environments. Previous studies have demonstrated that audiovisual integration depends strongly on spatiotemporal factors. However, some integrative phenomena such as McGurk interference persist even with gross spatial disparities, suggesting that spatial alignment is not necessary for robust integration of audiovisual place-of-articulation cues. It is therefore unclear how speech cues interact with audiovisual spatial integration mechanisms. Here, we combine two well-established psychophysical phenomena, the McGurk effect and the ventriloquist's illusion, to explore this dependency. Our results demonstrate that conflicting spatial cues may not interfere with audiovisual integration of speech, but conflicting speech cues can impede integration in space. This suggests a direct but asymmetrical influence between ventral 'what' and dorsal 'where' pathways.

  14. Foley Sounds vs Real Sounds

    DEFF Research Database (Denmark)

    Trento, Stefano; Götzen, Amalia De

    2011-01-01

    This paper is an initial attempt to study the world of sound effects for motion pictures, also known as Foley sounds. Throughout several audio and audio-video tests we have compared both Foley and real sounds originated by an identical action. The main purpose was to evaluate if sound effects...

  15. The effect of interaural-level-difference fluctuations on the externalization of sound

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Buchholz, Jörg M.

    2013-01-01

    Real-world sound sources are usually perceived as externalized and thus properly localized in both direction and distance. This is largely due to (1) the acoustic filtering by the head, torso, and pinna, resulting in modifications of the signal spectrum and thereby a frequency-dependent shaping of interaural cues and (2) interaural cues provided by the reverberation inside an enclosed space. This study first investigated the effect of room reverberation on the spectro-temporal behavior of interaural level differences (ILDs) by analyzing dummy-head recordings of speech played at different distances in a standard listening room. Next, the effect of ILD fluctuations on the degree of externalization was investigated in a psychoacoustic experiment performed in the same listening room. Individual binaural impulse responses were used to simulate a distant sound source delivered via headphones. The ILDs were...

  16. Cortical processing of dynamic sound envelope transitions.

    Science.gov (United States)

    Zhou, Yi; Wang, Xiaoqin

    2010-12-08

    Slow envelope fluctuations in the range of 2-20 Hz provide important segmental cues for processing communication sounds. For a successful segmentation, a neural processor must capture envelope features associated with the rise and fall of signal energy, a process that is often challenged by the interference of background noise. This study investigated the neural representations of slowly varying envelopes in quiet and in background noise in the primary auditory cortex (A1) of awake marmoset monkeys. We characterized envelope features based on the local average and rate of change of sound level in envelope waveforms and identified envelope features to which neurons were selective by reverse correlation. Our results showed that envelope feature selectivity of A1 neurons was correlated with the degree of nonmonotonicity in their static rate-level functions. Nonmonotonic neurons exhibited greater feature selectivity than monotonic neurons in quiet and in background noise. The diverse envelope feature selectivity decreased spike-timing correlation among A1 neurons in response to the same envelope waveforms. As a result, the variability, but not the average, of the ensemble responses of A1 neurons represented more faithfully the dynamic transitions in low-frequency sound envelopes both in quiet and in background noise.
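A hedged illustration of the two envelope features described in this record (local average of sound level and its rate of change), computed here from a toy amplitude-modulated tone rather than the communication sounds used in the study:

```python
# Extract a slowly varying envelope (rectify + moving average) and compute
# the two features used to characterize it: local average level and local
# rate of change. Toy 4-Hz-modulated tone, within the 2-20 Hz range cited.
import numpy as np

fs = 1000
t = np.arange(fs) / fs
carrier = np.sin(2 * np.pi * 100 * t)
modulation = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))  # 4-Hz envelope
x = modulation * carrier

win = np.ones(50) / 50                    # 50-ms moving-average window
env = np.convolve(np.abs(x), win, mode="same")   # local average level
rate = np.gradient(env) * fs              # local rate of change (per second)
rising = rate > 0                         # samples on a rising envelope flank
print(env.shape)                          # → (1000,)
```

Feature selectivity in the study amounts to neurons preferring particular combinations of these two quantities, e.g. responding on rising flanks at moderate levels.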

  17. Word segmentation with universal prosodic cues.

    Science.gov (United States)

    Endress, Ansgar D; Hauser, Marc D

    2010-09-01

    When listening to speech from one's native language, words seem to be well separated from one another, like beads on a string. When listening to a foreign language, in contrast, words seem almost impossible to extract, as if there was only one bead on the same string. This contrast reveals that there are language-specific cues to segmentation. The puzzle, however, is that infants must be endowed with a language-independent mechanism for segmentation, as they ultimately solve the segmentation problem for any native language. Here, we approach the acquisition problem by asking whether there are language-independent cues to segmentation that might be available to even adult learners who have already acquired a native language. We show that adult learners recognize words in connected speech when only prosodic cues to word-boundaries are given from languages unfamiliar to the participants. In both artificial and natural speech, adult English speakers, with no prior exposure to the test languages, readily recognized words in natural languages with critically different prosodic patterns, including French, Turkish and Hungarian. We suggest that, even though languages differ in their sound structures, they carry universal prosodic characteristics. Further, these language-invariant prosodic cues provide a universally accessible mechanism for finding words in connected speech. These cues may enable infants to start acquiring words in any language even before they are fine-tuned to the sound structure of their native language. Copyright © 2010. Published by Elsevier Inc.

  18. Negative emotion provides cues for orienting auditory spatial attention

    Directory of Open Access Journals (Sweden)

    Erkin eAsutay

    2015-05-01

    Full Text Available The auditory stimuli provide information about the objects and events around us. They can also carry biologically significant emotional information (such as unseen dangers and conspecific vocalizations), which provides cues for allocation of attention and mental resources. Here, we investigated whether task-irrelevant auditory emotional information can provide cues for orientation of auditory spatial attention. We employed a covert spatial orienting task: the dot-probe task. In each trial, two task-irrelevant auditory cues were simultaneously presented at two separate locations (left-right or front-back). Environmental sounds were selected to form emotional vs. neutral, emotional vs. emotional, and neutral vs. neutral cue pairs. The participants’ task was to detect the location of an acoustic target that was presented immediately after the task-irrelevant auditory cues. The target was presented at the same location as one of the auditory cues. The results indicated that participants were significantly faster to locate the target when it replaced the negative cue compared to when it replaced the neutral cue. The positive cues did not produce a clear attentional bias. Further, same-valence pairs (emotional-emotional or neutral-neutral) did not modulate reaction times due to a lack of spatial attention capture by one cue in the pair. Taken together, the results indicate that negative affect can provide cues for the orientation of spatial attention in the auditory domain.

  19. The role of reverberation-related binaural cues in the externalization of speech

    DEFF Research Database (Denmark)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-01-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners’ ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient...

  20. Imagining Sound

    DEFF Research Database (Denmark)

    Grimshaw, Mark; Garner, Tom Alexander

    2014-01-01

    We make the case in this essay that sound that is imagined is both a perception and as much a sound as that perceived through external stimulation. To argue this, we look at the evidence from auditory science, neuroscience, and philosophy, briefly present some new conceptual thinking on sound that accounts for this view, and then use this to look at what the future might hold in the context of imagining sound and developing technology.

  1. Localização sonora em usuários de aparelhos de amplificação sonora individual [Sound localization by hearing aid users]

    Directory of Open Access Journals (Sweden)

    Paula Cristina Rodrigues

    2010-06-01

    Full Text Available PURPOSE: to compare the performance of users of behind-the-ear and in-the-canal hearing aids in a sound-source localization test with that of normal-hearing listeners, in the horizontal and median sagittal planes, for frequencies of 500, 2,000 and 4,500 Hz, and to correlate the number of correct responses in the localization test with the duration of hearing aid use. METHODS: eight normal-hearing listeners and 20 hearing aid users were tested, the latter divided into two groups: one of 10 users of in-the-canal hearing aids and the other of 10 users of behind-the-ear hearing aids. All were submitted to a sound-source localization test in which three types of square waves, with fundamental frequencies of 0.5 kHz, 2 kHz and 4.5 kHz, were presented randomly at an intensity of 70 dBA. RESULTS: mean correct-response rates of 78.4%, 72.2% and 72.9% were found for the normal-hearing listeners at 0.5 kHz, 2 kHz and 4.5 kHz, respectively, versus 40.1%, 39.4% and 41.7% for the hearing aid users. As for hearing aid type, users of the in-the-canal model identified the origin of the sound source correctly in 47.2% of trials and users of the behind-the-ear model in 37.4%. No correlation was observed between the percentage of correct responses in the localization test and the duration of hearing aid use. CONCLUSION: normal-hearing listeners localize sound sources more efficiently than hearing aid users and, among the latter, users of the in-the-canal model performed better. Moreover, the duration of use did not affect the ability to localize the origin of sound sources.

  2. LARA. Localization of an automatized refueling machine by acoustical sounding in breeder reactors - implementation of artificial intelligence techniques

    International Nuclear Information System (INIS)

    Lhuillier, C.; Malvache, P.

    1987-01-01

    The automatic control of the machine which handles the nuclear subassemblies in fast-neutron reactors requires autonomous perception and decision tools. An acoustical device allows the machine to position itself in the work area. Artificial intelligence techniques are implemented to interpret the data: pattern recognition, scene analysis. The localization process is managed by an expert system. 6 refs.; 8 figs

  3. Characteristic sounds facilitate visual search.

    Science.gov (United States)

    Iordanescu, Lucica; Guzman-Martinez, Emmanuel; Grabowecky, Marcia; Suzuki, Satoru

    2008-06-01

    In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing "meow" did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

  4. Training the Brain to Weight Speech Cues Differently: A Study of Finnish Second-language Users of English

    Science.gov (United States)

    Ylinen, Sari; Uther, Maria; Latvala, Antti; Vepsalainen, Sara; Iverson, Paul; Akahane-Yamada, Reiko; Naatanen, Risto

    2010-01-01

    Foreign-language learning is a prime example of a task that entails perceptual learning. The correct comprehension of foreign-language speech requires the correct recognition of speech sounds. The most difficult speech-sound contrasts for foreign-language learners often are the ones that have multiple phonetic cues, especially if the cues are…

  5. Unsound Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2016-01-01

    This article discusses the change in premise that digitally produced sound brings about and how digital technologies more generally have changed our relationship to the musical artifact, not simply in degree but in kind. It demonstrates how our acoustical conceptions are thoroughly challenged by the digital production of sound and, by questioning the ontological basis for digital sound, turns our understanding of the core term substance upside down.

  6. Listeners' expectation of room acoustical parameters based on visual cues

    Science.gov (United States)

    Valente, Daniel L.

    Despite many studies investigating auditory spatial impressions in rooms, few have addressed the impact of simultaneous visual cues on localization and the perception of spaciousness. The current research presents an immersive audio-visual study, in which participants are instructed to make spatial congruency and quantity judgments in dynamic cross-modal environments. The results of these psychophysical tests suggest the importance of consilient audio-visual presentation to the legibility of an auditory scene. Several studies have looked into audio-visual interaction in room perception in recent years, but these studies rely on static images, speech signals, or photographs alone to represent the visual scene. Building on these studies, the aim is to propose a testing method that uses monochromatic compositing (blue-screen technique) to position a studio recording of a musical performance in a number of virtual acoustical environments and ask subjects to assess these environments. In the first experiment of the study, video footage was taken from five rooms varying in physical size from a small studio to a small performance hall. Participants were asked to perceptually align two distinct acoustical parameters---early-to-late reverberant energy ratio and reverberation time---of two solo musical performances in five contrasting visual environments according to their expectations of how the room should sound given its visual appearance. In the second experiment in the study, video footage shot from four different listening positions within a general-purpose space was coupled with sounds derived from measured binaural impulse responses (IRs). The relationship between the presented image, sound, and virtual receiver position was examined. It was found that many visual cues caused different perceived events of the acoustic environment. This included the visual attributes of the space in which the performance was located as well as the visual attributes of the performer

  7. Sound Absorbers

    Science.gov (United States)

    Fuchs, H. V.; Möser, M.

    Sound absorption denotes the transformation of sound energy into heat. It is, for instance, employed to design the acoustics of rooms. The noise emitted by machinery and industrial plants must be reduced before it arrives at a workplace; auditoria such as lecture rooms or concert halls require a certain reverberation time. Such design goals are realised by installing absorbing components on the walls, with well-defined absorption characteristics adjusted to the corresponding demands. Sound absorbers also play an important role in acoustic enclosures, ducts and screens, preventing sound immission from noise-intensive environments into the neighbourhood.

  8. Whose Line Sound is it Anyway? Identifying the Vocalizer on Underwater Video by Localizing with a Hydrophone Array

    Directory of Open Access Journals (Sweden)

    Matthias Hoffmann-Kuhnt

    2016-11-01

    Full Text Available A new device that combined high-resolution (1080p wide-angle video and three channels of high-frequency acoustic recordings (at 500 kHz per channel in a portable underwater housing was designed and tested with wild bottlenose and spotted dolphins in the Bahamas. It consisted of three hydrophones, a GoPro camera, a small Fit PC, a set of custom preamplifiers and a high-frequency data acquisition board. Recordings were obtained to identify individual vocalizing animals through time-delay-of-arrival localizing in post-processing. The calculated source positions were then overlaid onto the video – providing the ability to identify the vocalizing animal on the recorded video. The new tool allowed for much clearer analysis of the acoustic behavior of cetaceans than was possible before.
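
    The time-delay-of-arrival localization described above starts from pairwise delay estimates between hydrophones. A minimal sketch of one such estimate via cross-correlation (the synthetic click and the single-pair setup are illustrative; the actual device combined three channels with video in post-processing):

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Estimate the arrival delay of sig_b relative to sig_a (in seconds)
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_a) - 1)
    return lag / fs

# Synthetic dolphin-like click at the device's 500 kHz per-channel rate
fs = 500_000
t = np.arange(0, 0.01, 1 / fs)
click = np.exp(-((t - 0.002) ** 2) / (2 * 1e-4 ** 2)) * np.sin(2 * np.pi * 40_000 * t)
delay_samples = 25
delayed = np.roll(click, delay_samples)    # same click, 25 samples later

print(tdoa(click, delayed, fs))  # 5e-05 s (25 samples at 500 kHz)
```

    In the real system, delays like this for each hydrophone pair constrain the source to intersecting hyperboloids, whose intersection gives the position overlaid on the video.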

  9. The role of reverberation-related binaural cues in the externalization of speech.

    Science.gov (United States)

    Catic, Jasmina; Santurette, Sébastien; Dau, Torsten

    2015-08-01

    The perception of externalization of speech sounds was investigated with respect to the monaural and binaural cues available at the listeners' ears in a reverberant environment. Individualized binaural room impulse responses (BRIRs) were used to simulate externalized sound sources via headphones. The measured BRIRs were subsequently modified such that the proportion of the response containing binaural vs monaural information was varied. Normal-hearing listeners were presented with speech sounds convolved with such modified BRIRs. Monaural reverberation cues were found to be sufficient for the externalization of a lateral sound source. In contrast, for a frontal source, an increased amount of binaural cues from reflections was required in order to obtain well externalized sound images. It was demonstrated that the interaction between the interaural cues of the direct sound and the reverberation strongly affects the perception of externalization. An analysis of the short-term binaural cues showed that the amount of fluctuations of the binaural cues corresponded well to the externalization ratings obtained in the listening tests. The results further suggested that the precedence effect is involved in the auditory processing of the dynamic binaural cues that are utilized for externalization perception.

  10. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2008-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  11. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2010-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  12. Sound generator

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2007-01-01

    A sound generator, particularly a loudspeaker, configured to emit sound, comprising a rigid element (2) enclosing a plurality of air compartments (3), wherein the rigid element (2) has a back side (B) comprising apertures (4), and a front side (F) that is closed, wherein the generator is provided

  13. Sound Zones

    DEFF Research Database (Denmark)

    Møller, Martin Bo; Olsen, Martin

    2017-01-01

    Sound zones, i.e. spatially confined regions of individual audio content, can be created by appropriate filtering of the desired audio signals reproduced by an array of loudspeakers. The challenge of designing filters for sound zones is twofold: First, the filtered responses should generate an acoustic separation between the control regions. Secondly, the pre- and post-ringing as well as spectral deterioration introduced by the filters should be minimized. The tradeoff between acoustic separation and filter ringing is the focus of this paper. A weighted L2-norm penalty is introduced in the sound...

  14. Exploiting Deep Neural Networks and Head Movements for Robust Binaural Localization of Multiple Sources in Reverberant Environments

    DEFF Research Database (Denmark)

    Ma, Ning; May, Tobias; Brown, Guy J.

    2017-01-01

    This paper presents a novel machine-hearing system that exploits deep neural networks (DNNs) and head movements for robust binaural localization of multiple sources in reverberant environments. DNNs are used to learn the relationship between the source azimuth and binaural cues, consisting of the complete cross-correlation function (CCF) and interaural level differences (ILDs). In contrast to many previous binaural hearing systems, the proposed approach is not restricted to localization of sound sources in the frontal hemifield. Due to the similarity of binaural cues in the frontal and rear...
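
    The binaural feature vector named above, a CCF plus ILDs, can be sketched roughly as follows (framing, gammatone filterbank analysis, and the DNN itself are omitted; names and the single broadband frame are illustrative):

```python
import numpy as np

def binaural_features(left, right, max_lag=16):
    """Normalized cross-correlation function (CCF) over lags
    -max_lag..+max_lag plus the broadband ILD in dB (right re left)."""
    l = left - left.mean()
    r = right - right.mean()
    denom = np.sqrt(np.sum(l ** 2) * np.sum(r ** 2)) + 1e-12
    full = np.correlate(l, r, mode="full")
    centre = len(r) - 1                      # index of zero lag
    ccf = full[centre - max_lag:centre + max_lag + 1] / denom
    ild = 10.0 * np.log10((np.sum(right ** 2) + 1e-12) /
                          (np.sum(left ** 2) + 1e-12))
    return ccf, ild

rng = np.random.default_rng(0)
left = rng.standard_normal(4096)
right = 0.5 * left                           # attenuated right ear: ILD ~ -6 dB
ccf, ild = binaural_features(left, right)
print(int(np.argmax(ccf)), round(ild, 1))    # peak at the zero-lag bin: 16 -6.0
```

    In a full system such features are computed per frequency channel and time frame, then fed to the DNN, which outputs a posterior over azimuths; head rotations help disambiguate front from rear.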

  15. Effect of background noise on neuronal coding of interaural level difference cues in rat inferior colliculus.

    Science.gov (United States)

    Mokri, Yasamin; Worland, Kate; Ford, Mark; Rajan, Ramesh

    2015-07-01

    Humans can accurately localize sounds even in unfavourable signal-to-noise conditions. To investigate the neural mechanisms underlying this, we studied the effect of background wide-band noise on neural sensitivity to variations in interaural level difference (ILD), the predominant cue for sound localization in azimuth for high-frequency sounds, at the characteristic frequency of cells in rat inferior colliculus (IC). Binaural noise at high levels generally resulted in suppression of responses (55.8%), but at lower levels resulted in enhancement (34.8%) as well as suppression (30.3%). When recording conditions permitted, we then examined if any binaural noise effects were related to selective noise effects at each of the two ears, which we interpreted in light of well-known differences in input type (excitation and inhibition) from each ear shaping particular forms of ILD sensitivity in the IC. At high signal-to-noise ratios (SNRs), in most ILD functions (41%), the effect of background noise appeared to be due to effects on inputs from both ears, while a large percentage (35.8%) appeared to be accounted for by effects on excitatory input. However, as SNR decreased, change in excitation became the dominant contributor to the change due to binaural background noise (63.6%). These novel findings shed light on the IC neural mechanisms for sound localization in the presence of continuous background noise. They also suggest that some effects of background noise on encoding of sound location reported to be emergent in upstream auditory areas can also be observed at the level of the midbrain. © 2015 The Authors. European Journal of Neuroscience published by Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  16. Sound intensity

    DEFF Research Database (Denmark)

    Crocker, Malcolm J.; Jacobsen, Finn

    1998-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  17. Sound Intensity

    DEFF Research Database (Denmark)

    Crocker, M.J.; Jacobsen, Finn

    1997-01-01

    This chapter is an overview, intended for readers with no special knowledge about this particular topic. The chapter deals with all aspects of sound intensity and its measurement, from the fundamental theoretical background to practical applications of the measurement technique.

  18. Localization of a sound source in a guided medium and reverberating field. Contribution to a study on leak localization in the internal wall of the containment of a nuclear reactor in the case of a severe reactor accident

    International Nuclear Information System (INIS)

    Thomann, F.

    1996-01-01

    Basic data necessary for the localization of a leak in the internal wall of the containment are presented by studying the sound generated by gas jets escaping through leaking fissures, as well as its propagation in a guided medium. The results acquired have led us to choose the simple intercorrelation method and the matched field processing method, both of which are likely to adequately handle our problems. Whereas the intercorrelation method appears to be limited in scope when dealing with the guided medium, matched field processing is suited to leak localization over a surface of approximately 1000 m² (for a total surface of 10 000 m²). Preliminary studies on the leak signal and on replicated signals have led us to limit the frequency band to 2600 - 3000 Hz. We have succeeded in locating a leak situated in an ordinary position with a minimum amount of replicated signals and basic data. We have improved on the Bartlett and MVDE (minimum variance distortionless filter) estimators, rendering them even more effective. Afterwards, we considered the severe accident situation and showed that the system can be installed in situ. (author)
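
    The Bartlett matched-field estimator mentioned above scores candidate source positions by projecting the measured array snapshot onto precomputed replica vectors. A schematic sketch under assumed names and dimensions, not the thesis's implementation:

```python
import numpy as np

def bartlett_mfp(snapshot, replicas):
    """Bartlett matched-field processor: power of the projection of the
    normalized array snapshot onto each normalized replica vector
    (one replica per candidate leak position on the containment wall).
    Returns the ambiguity surface; its argmax is the estimated position."""
    d = snapshot / np.linalg.norm(snapshot)
    return np.array([abs(np.vdot(w / np.linalg.norm(w), d)) ** 2
                     for w in replicas])

# 50 candidate positions, 8 sensors; noise-free data from candidate 17
rng = np.random.default_rng(1)
replicas = rng.standard_normal((50, 8)) + 1j * rng.standard_normal((50, 8))
snapshot = 3.0 * replicas[17]
surface = bartlett_mfp(snapshot, replicas)
print(int(np.argmax(surface)))  # 17
```

    Replicas would in practice be computed from an acoustic propagation model of the guided medium over the band retained in the study (2600 - 3000 Hz).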

  19. The Role of Place Cues in Voluntary Stream Segregation for Cochlear Implant Users

    DEFF Research Database (Denmark)

    Paredes Gallardo, Andreu; Madsen, Sara Miay Kim; Dau, Torsten

    2018-01-01

    of the A and B sequences should improve performance. In Experiment 1, the electrode separation and the sequence duration were varied to clarify whether place cues help CI listeners to voluntarily segregate sounds and whether a two-stream percept needs time to build up. Results suggested that place cues can...

  20. Sound Localization in Multisource Environments

    Science.gov (United States)

    2009-03-01

    A total of 7 paid volunteer listeners (3 males and 4 females, 20-25 years of age) participated in the experiment. All had normal hearing (i.e. ...effects of the loudspeaker frequency responses, and were then sent from an experimental control computer to a Mark of the Unicorn (MOTU 24 I/O) digital-to-...after the overall multisource stimulus has been presented (the 'post-cue' condition). 3.2 Methods 3.2.1 Listeners Eight listeners, ranging in age from

  1. Attentional and Contextual Priors in Sound Perception.

    Science.gov (United States)

    Wolmetz, Michael; Elhilali, Mounya

    2016-01-01

    Behavioral and neural studies of selective attention have consistently demonstrated that explicit attentional cues to particular perceptual features profoundly alter perception and performance. The statistics of the sensory environment can also provide cues about what perceptual features to expect, but the extent to which these more implicit contextual cues impact perception and performance, as well as their relationship to explicit attentional cues, is not well understood. In this study, the explicit cues, or attentional prior probabilities, and the implicit cues, or contextual prior probabilities, associated with different acoustic frequencies in a detection task were simultaneously manipulated. Both attentional and contextual priors had similarly large but independent impacts on sound detectability, with evidence that listeners tracked and used contextual priors for a variety of sound classes (pure tones, harmonic complexes, and vowels). Further analyses showed that listeners updated their contextual priors rapidly and optimally, given the changing acoustic frequency statistics inherent in the paradigm. A Bayesian Observer model accounted for both attentional and contextual adaptations found with listeners. These results bolster the interpretation of perception as Bayesian inference, and suggest that some effects attributed to selective attention may be a special case of contextual prior integration along a feature axis.
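
    A contextual prior of the kind listeners appear to track can be illustrated with a simple count-based Bayesian estimate (a toy stand-in for the paper's Bayesian Observer model; the two-channel setup and function name are illustrative):

```python
import numpy as np

def posterior_from_counts(counts):
    """Bayesian estimate of the probability that the next target occupies
    each frequency channel, using a uniform Dirichlet (Laplace) prior."""
    counts = np.asarray(counts, dtype=float)
    return (counts + 1.0) / (counts.sum() + len(counts))

# Contextual prior after observing 8 low- and 2 high-frequency targets
print(posterior_from_counts([8, 2]).tolist())  # [0.75, 0.25]
```

    As the observed frequency statistics shift, the estimate updates trial by trial, mirroring the rapid contextual-prior updating reported for listeners.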

  2. Boosting Vocabulary Learning by Verbal Cueing During Sleep.

    Science.gov (United States)

    Schreiner, Thomas; Rasch, Björn

    2015-11-01

    Reactivating memories during sleep by re-exposure to associated memory cues (e.g., odors or sounds) improves memory consolidation. Here, we tested for the first time whether verbal cueing during sleep can improve vocabulary learning. We cued prior learned Dutch words either during non-rapid eye movement sleep (NonREM) or during active or passive waking. Re-exposure to Dutch words during sleep improved later memory for the German translation of the cued words when compared with uncued words. Recall of uncued words was similar to an additional group receiving no verbal cues during sleep. Furthermore, verbal cueing failed to improve memory during active and passive waking. High-density electroencephalographic recordings revealed that successful verbal cueing during NonREM sleep is associated with a pronounced frontal negativity in event-related potentials, a higher frequency of frontal slow waves as well as a cueing-related increase in right frontal and left parietal oscillatory theta power. Our results indicate that verbal cues presented during NonREM sleep reactivate associated memories, and facilitate later recall of foreign vocabulary without impairing ongoing consolidation processes. Likewise, our oscillatory analysis suggests that both sleep-specific slow waves as well as theta oscillations (typically associated with successful memory encoding during wakefulness) might be involved in strengthening memories by cueing during sleep. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  3. Recycling Sounds in Commercials

    DEFF Research Database (Denmark)

    Larsen, Charlotte Rørdam

    2012-01-01

    Commercials offer the opportunity for intergenerational memory and impinge on cultural memory. TV commercials for foodstuffs often make reference to past times as a way of authenticating products. This is frequently achieved using visual cues, but in this paper I would like to demonstrate how such references to the past and 'the good old days' can be achieved through sounds. In particular, I will look at commercials for Danish non-dairy spreads, especially for OMA margarine. These commercials are notable in that they contain a melody and a slogan – 'Say the name: OMA margarine' – that have basically remained the same for 70 years. Together these identifiers make OMA an interesting Danish case to study. With reference to Ann Rigney's memorial practices or mechanisms, the study aims to demonstrate how the auditory aspects of Danish margarine commercials for frying tend to be limited in variety...

  4. Using Auditory Cues to Perceptually Extract Visual Data in Collaborative, Immersive Big-Data Display Systems

    Science.gov (United States)

    Lee, Wendy

    The advent of multisensory display systems, such as virtual and augmented reality, has fostered a new relationship between humans and space. Not only can these systems mimic real-world environments, they have the ability to create a new space typology made solely of data. In these spaces, two-dimensional information is displayed in three dimensions, requiring human senses to be used to understand virtual, attention-based elements. Studies in the field of big data have predominately focused on visual representations and extractions of information with little focus on sounds. The goal of this research is to evaluate the most efficient methods of perceptually extracting visual data using auditory stimuli in immersive environments. Using Rensselaer's CRAIVE-Lab, a virtual reality space with 360-degree panorama visuals and an array of 128 loudspeakers, participants were asked questions based on complex visual displays using a variety of auditory cues ranging from sine tones to camera shutter sounds. Analysis of the speed and accuracy of participant responses revealed that auditory cues that were more favorable for localization and were positively perceived were best for data extraction and could help create more user-friendly systems in the future.

  5. Fluid Sounds

    DEFF Research Database (Denmark)

    Explorations and analysis of soundscapes have, since Canadian R. Murray Schafer's work during the early 1970's, developed into various established research and artistic disciplines. The interest in sonic environments is today present within a broad range of contemporary art projects and in architectural design. Aesthetics, psychoacoustics, perception, and cognition are all present in this expanding field embracing such categories as soundscape composition, sound art, sonic art, sound design, sound studies and auditory culture. Of greatest significance to the overall field is the investigation...

  6. Seeing 'where' through the ears: effects of learning-by-doing and long-term sensory deprivation on localization based on image-to-sound substitution.

    Directory of Open Access Journals (Sweden)

    Michael J Proulx

    Full Text Available BACKGROUND: Sensory substitution devices for the blind translate inaccessible visual information into a format that intact sensory pathways can process. We here tested image-to-sound conversion-based localization of visual stimuli (LEDs and objects) in 13 blindfolded participants. METHODS AND FINDINGS: Subjects were assigned to different roles as a function of two variables: visual deprivation (blindfolded continuously (Bc) for 24 hours per day for 21 days; blindfolded for the tests only (Bt)) and system use (system not used (Sn); system used for tests only (St); system used continuously for 21 days (Sc)). The effect of learning-by-doing was assessed by comparing the performance of eight subjects (BtSt) who only used the mobile substitution device for the tests, to that of three subjects who, in addition, practiced with it for four hours daily in their normal life (BtSc and BcSc); two subjects who did not use the device at all (BtSn and BcSn) allowed assessment of its use in the tasks we employed. The impact of long-term sensory deprivation was investigated by blindfolding three of those participants throughout the three-week-long experiment (BcSn, BcSn/c, and BcSc); the other ten subjects were only blindfolded during the tests (BtSn, BtSc, and the eight BtSt subjects). Expectedly, the two subjects who never used the substitution device, while fast in finding the targets, had chance accuracy, whereas subjects who used the device were markedly slower, but showed much better accuracy which improved significantly across our four testing sessions. The three subjects who freely used the device daily as well as during tests were faster and more accurate than those who used it during tests only; however, long-term blindfolding did not notably influence performance. CONCLUSIONS: Together, the results demonstrate that the device allowed blindfolded subjects to increasingly know where something was by listening, and indicate that practice in naturalistic conditions

  7. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in København, Malmø, Helsingborg and Lund, together with European examples of best practice.

  8. Nuclear sound

    International Nuclear Information System (INIS)

    Wambach, J.

    1991-01-01

    Nuclei, like more familiar mechanical systems, undergo simple vibrational motion. Among these vibrations, sound modes are of particular interest since they reveal important information on the effective interactions among the constituents and, through extrapolation, on the bulk behaviour of nuclear and neutron matter. Sound wave propagation in nuclei shows strong quantum effects familiar from other quantum systems. Microscopic theory suggests that the restoring forces are caused by the complex structure of the many-Fermion wavefunction and, in some cases, have no classical analogue. The damping of the vibrational amplitude is strongly influenced by phase coherence among the particles participating in the motion. (author)

  9. Graded behavioral responses and habituation to sound in the common cuttlefish, Sepia officinalis

    NARCIS (Netherlands)

    Samson, J.E.; Mooney, T.A.; Gussekloo, S.W.S.; Hanlon, R.T.

    2014-01-01

    Sound is a widely available and vital cue in aquatic environments yet most bioacoustic research has focused on marine vertebrates, leaving sound detection in invertebrates poorly understood. Cephalopods are an ecologically key taxon that likely use sound and may be impacted by increasing

  10. Sound Settlements

    DEFF Research Database (Denmark)

    Mortensen, Peder Duelund; Hornyanszky, Elisabeth Dalholm; Larsen, Jacob Norvig

    2013-01-01

    Presentation of project results from the Interreg research project Sound Settlements on the development of sustainability in social housing in København, Malmø, Helsingborg and Lund, together with European examples of best practice.

  11. Second Sound

    Indian Academy of Sciences (India)

    Second Sound - The Role of Elastic Waves. R Srinivasan. General Article, Resonance – Journal of Science Education, Volume 4, Issue 6, June 1999, pp 15-19. Permanent link: https://www.ias.ac.in/article/fulltext/reso/004/06/0015-0019

  12. Competition between auditory and visual spatial cues during visual task performance

    NARCIS (Netherlands)

    Koelewijn, T.; Bronkhorst, A.; Theeuwes, J.

    2009-01-01

    There is debate in the crossmodal cueing literature as to whether capture of visual attention by means of sound is a fully automatic process. Recent studies show that when visual attention is endogenously focused sound still captures attention. The current study investigated whether there is

  13. Say what? Coral reef sounds as indicators of community assemblages and reef conditions

    Science.gov (United States)

    Mooney, T. A.; Kaplan, M. B.

    2016-02-01

    Coral reefs host some of the highest diversity of life on the planet. Unfortunately, reef health and biodiversity are declining or threatened as a result of climate change and human influences. Tracking these changes is necessary for effective resource management, yet estimating marine biodiversity and tracking trends in ecosystem health is a challenging and expensive task, especially in many pristine reefs which are remote and difficult to access. Many fishes, mammals and invertebrates make sound. These sounds are reflective of a number of vital biological processes and are a cue for settling reef larvae. Biological sounds may be a means to quantify ecosystem health and biodiversity, however the relationship between coral reef soundscapes and the actual taxa present remains largely unknown. This study presents a comparative evaluation of the soundscape of multiple reefs, naturally differing in benthic cover and fish diversity, in the U.S. Virgin Islands National Park. Using multiple recorders per reef we characterized spatio-temporal variation in biological sound production within and among reefs. Analyses of sounds recorded over 4 summer months indicated diel trends in both fish and snapping shrimp acoustic frequency bands with crepuscular peaks at all reefs. There were small but statistically significant acoustic differences among sites on a given reef, raising the possibility of localized acoustic habitats. The strength of diel trends in lower, fish-frequency bands was correlated with coral cover and fish density, yet no such relationship was found with shrimp sounds, suggesting that fish sounds may be of higher relevance to tracking certain coral reef conditions. These findings indicate that, in spite of considerable variability within reef soundscapes, diel trends in low-frequency sound production reflect reef community assemblages. Further, monitoring soundscapes may be an efficient means of establishing and monitoring reef conditions.

  14. Sound Rhythms Are Encoded by Postinhibitory Rebound Spiking in the Superior Paraolivary Nucleus

    Science.gov (United States)

    Felix, Richard A.; Fridberger, Anders; Leijon, Sara; Berrebi, Albert S.; Magnusson, Anna K.

    2013-01-01

    The superior paraolivary nucleus (SPON) is a prominent structure in the auditory brainstem. In contrast to the principal superior olivary nuclei with identified roles in processing binaural sound localization cues, the role of the SPON in hearing is not well understood. A combined in vitro and in vivo approach was used to investigate the cellular properties of SPON neurons in the mouse. Patch-clamp recordings in brain slices revealed that brief and well timed postinhibitory rebound spiking, generated by the interaction of two subthreshold-activated ion currents, is a hallmark of SPON neurons. The Ih current determines the timing of the rebound, whereas the T-type Ca2+ current boosts the rebound to spike threshold. This precisely timed rebound spiking provides a physiological explanation for the sensitivity of SPON neurons to sinusoidally amplitude-modulated (SAM) tones in vivo, where peaks in the sound envelope drive inhibitory inputs and SPON neurons fire action potentials during the waveform troughs. Consistent with this notion, SPON neurons display intrinsic tuning to frequency-modulated sinusoidal currents (1-15 Hz) in vitro and discharge with strong synchrony to SAMs with modulation frequencies between 1 and 20 Hz in vivo. The results of this study suggest that the SPON is particularly well suited to encode rhythmic sound patterns. Such temporal periodicity information is likely important for detection of communication cues, such as the acoustic envelopes of animal vocalizations and speech signals. PMID:21880918
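
    SAM stimuli of the kind used to probe SPON synchronization are straightforward to generate. A minimal sketch (parameter values are illustrative, not those of the study):

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone: a carrier at fc (Hz)
    whose envelope is modulated at rate fm (Hz) with depth 0..1."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2.0 * np.pi * fm * t)
    return envelope * np.sin(2.0 * np.pi * fc * t)

# 10 Hz modulation, within the 1-20 Hz range SPON neurons synchronize to
fs = 44_100
tone = sam_tone(fc=4000.0, fm=10.0, depth=1.0, dur=0.5, fs=fs)
```

    The envelope troughs of such a stimulus are the moments at which, per the study, SPON neurons fire their rebound spikes.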

  15. Interaction of Object Binding Cues in Binaural Masking Pattern Experiments.

    Science.gov (United States)

    Verhey, Jesko L; Lübken, Björn; van de Par, Steven

    2016-01-01

    Object binding cues such as binaural and across-frequency modulation cues are likely to be used by the auditory system to separate sounds from different sources in complex auditory scenes. The present study investigates the interaction of these cues in a binaural masking pattern paradigm where a sinusoidal target is masked by a narrowband noise. It was hypothesised that beating between signal and masker may contribute to signal detection when signal and masker do not spectrally overlap, but that this cue could not be used in combination with interaural cues. To test this hypothesis, an additional sinusoidal interferer was added to the noise masker with a lower frequency than the noise, whereas the target had a higher frequency than the noise. Thresholds increase when the interferer is added. This effect is largest when the spectral interferer-masker and masker-target distances are equal. The result supports the hypothesis that modulation cues contribute to signal detection in the classical masking paradigm and that these are analysed with modulation bandpass filters. A monaural model including an across-frequency modulation process is presented that accounts for this effect. Interestingly, the interferer also affects dichotic thresholds, indicating that modulation cues also play a role in binaural processing.

  16. Veering re-visited: noise and posture cues in walking without sight.

    Science.gov (United States)

    Millar, S

    1999-01-01

    Effects of sound and posture cues on veering from the straight-ahead were tested with young blind children in an unfamiliar space that lacked orienting cues. In a pre-test with a previously heard target sound, all subjects walked straight to the target. A recording device, which sampled the locomotor trajectories automatically, showed that, without prior cues from target locations, subjects tended to veer more to the side from which they heard a brief, irrelevant noise. Carrying a load on one side produced more veering to the opposite side. The detailed samples showed that, underlying the main trajectories, were alternating concave and convex (left and right) movements, suggesting stepwise changes in body position. It is argued that the same external and body-centred cues that contribute to reference-frame orientation for locomotion when they converge and concur, influence the direction of veering when the cues occur in isolation in environments that lack converging reference information.

  17. PREFACE: Aerodynamic sound

    Science.gov (United States)

    Akishita, Sadao

    2010-02-01

    The modern theory of aerodynamic sound originates from Lighthill's two papers in 1952 and 1954, as is well known. I have heard that Lighthill was motivated to write the papers by the jet noise emitted by the newly commercialized jet-engined airplanes of that time. The technology of aerodynamic sound is directed at environmental problems. Therefore the theory should always be applied to newly emerged public nuisances. This issue of Fluid Dynamics Research (FDR) reflects problems of environmental sound in present Japanese technology. The Japanese community studying aerodynamic sound has held an annual symposium for 29 years, since the late Professor S Kotake and Professor S Kaji of Teikyo University first organized it. Most of the Japanese authors in this issue are members of the annual symposium. I should note the contribution of the two professors cited above in establishing the Japanese community of aerodynamic sound research. It is my pleasure to present in this issue ten papers discussed at the annual symposium. I would like to express many thanks to the Editorial Board of FDR for giving us the chance to contribute these papers. We have a review paper by T Suzuki on the study of jet noise, which continues to be important nowadays and is expected to reform the theoretical model of generating mechanisms. Professor M S Howe and R S McGowan contribute an analytical paper, a valuable study in today's fluid dynamics research. They apply hydrodynamics to solve the compressible flow generated in the vocal cords of the human body. Experimental study continues to be the main methodology in aerodynamic sound, and it is expected to explore new horizons. H Fujita's study on the Aeolian tone provides a new viewpoint on major, longstanding sound problems. The paper by M Nishimura and T Goto on textile fabrics describes new technology for the effective reduction of bluff-body noise. The paper by T Sueki et al also reports new technology for the

  18. Sound Visualisation

    OpenAIRE

    Dolenc, Peter

    2013-01-01

    This thesis contains a description of a construction of subwoofer case that has an extra functionality of being able to produce special visual effects and display visualizations that match the currently playing sound. For this reason, multiple lighting elements made out of LED (Light Emitting Diode) diodes were installed onto the subwoofer case. The lighting elements are controlled by dedicated software that was also developed. The software runs on STM32F4-Discovery evaluation board inside a ...

  19. A configural dominant account of contextual cueing: Configural cues are stronger than colour cues.

    Science.gov (United States)

    Kunar, Melina A; John, Rebecca; Sweetman, Hollie

    2014-01-01

    Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed "contextual cueing" (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they appeared together. In Experiment 1, participants searched for a target that was cued by both colour and distractor configural cues, compared with when the target was only predicted by configural information. The results showed that the addition of a colour cue did not increase contextual cueing. In Experiment 2, participants searched for a target that was cued by both colour and distractor configuration compared with when the target was only cued by colour. The results showed that adding a predictive configural cue led to a stronger CC benefit. Experiments 3 and 4 tested the disruptive effects of removing either a learned colour cue or a learned configural cue and whether there was cue competition when colour and configural cues were presented together. Removing the configural cue was more disruptive to CC than removing colour, and configural learning was shown to overshadow the learning of colour cues. The data support a configural dominant account of CC, where configural cues act as the stronger cue in comparison to colour when they are presented together.

  20. Scene-Based Contextual Cueing in Pigeons

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  1. Reacting to Neighborhood Cues?

    DEFF Research Database (Denmark)

    Danckert, Bolette; Dinesen, Peter Thisted; Sønderskov, Kim Mannemar

    2017-01-01

    is founded on politically sophisticated individuals having a greater comprehension of news and other mass-mediated sources, which makes them less likely to rely on neighborhood cues as sources of information relevant for political attitudes. Based on a unique panel data set with fine-grained information...

  2. Conveying Looming with a Localized Tactile Cue

    Science.gov (United States)

    2015-04-01

    used to feel forward, in order to be warned of obstacles and passages. Moreover, when people are deprived of a normal sense of touch in their feet...evidence that vibrotactile flow fields can be exploited to modify feelings of self-motion. Kolev and Rupert (2008) reported that vibrotactile flow could...1970) later described the use of this site for two-dimensional tracking of moving targets. More recently, while testing sites for a tactile prosthesis

  3. The medial prefrontal cortex and memory of cue location in the rat.

    Science.gov (United States)

    Rawson, Tim; O'Kane, Michael; Talk, Andrew

    2010-01-01

    We developed a single-trial cue-location memory task in which rats experienced an auditory cue while exploring an environment. They then recalled and avoided the sound origination point after the cue was paired with shock in a separate context. Subjects with medial prefrontal cortical (mPFC) lesions made no such avoidance response, but both lesioned and control subjects avoided the cue itself when presented at test. A follow up assessment revealed no spatial learning impairment in either group. These findings suggest that the rodent mPFC is required for incidental learning or recollection of the location at which a discrete cue occurred, but is not required for cue recognition or for allocentric spatial memory. Copyright 2009 Elsevier Inc. All rights reserved.

  4. The Dual-channel Extreme Ultraviolet Continuum Experiment: Sounding Rocket EUV Observations of Local B Stars to Determine Their Potential for Supplying Intergalactic Ionizing Radiation

    Science.gov (United States)

    Erickson, Nicholas; Green, James C.; France, Kevin; Stocke, John T.; Nell, Nicholas

    2018-06-01

    We describe the scientific motivation and technical development of the Dual-channel Extreme Ultraviolet Continuum Experiment (DEUCE). DEUCE is a sounding rocket payload designed to obtain the first flux-calibrated spectra of two nearby B stars in the EUV 650-1150Å bandpass. This measurement will help in understanding the ionizing flux output of hot B stars, calibrating stellar models and commenting on the potential contribution of such stars to reionization. DEUCE consists of a grazing incidence Wolter II telescope, a normal incidence holographic grating, and the largest (8” x 8”) microchannel plate detector ever flown in space, covering the 650-1150Å band in medium and low resolution channels. DEUCE will launch on December 1, 2018 as NASA/CU sounding rocket mission 36.331 UG, observing Epsilon Canis Majoris, a B2 II star.

  5. The influence of imagery vividness on cognitive and perceptual cues in circular auditorily-induced vection

    Directory of Open Access Journals (Sweden)

    Aleksander eVäljamäe

    2014-12-01

    Full Text Available In the absence of other congruent multisensory motion cues, the contribution of sound to illusions of self-motion (vection) is relatively weak and often attributed to purely cognitive, top-down processes. The present study addressed the influence of cognitive and perceptual factors in the experience of circular, yaw auditorily-induced vection (AIV), focusing on participants' imagery vividness scores. We used different rotating sound sources (acoustic landmarks vs. movable sound objects) and their filtered versions that provided different binaural cues (interaural time or level differences, ITDs vs. ILDs) when delivered via a loudspeaker array. The significant differences in circular vection intensity showed that (1) AIV was stronger for rotating sound fields containing auditory landmarks than for movable sound objects; (2) ITD-based acoustic cues were more instrumental than ILD-based ones for horizontal AIV; and (3) individual differences in imagery vividness significantly influenced the effects of contextual and perceptual cues. While participants with high scores of kinesthetic and visual imagery were helped by vection-"rich" cues, i.e., acoustic landmarks and ITD cues, participants from the low-vivid imagery group did not benefit from these cues automatically: only when specifically asked to use their imagination intentionally did these external cues start influencing vection sensation in a similar way to high-vivid imagers. These findings are in line with recent fMRI work suggesting that high-vivid imagers employ automatic, almost unconscious mechanisms in imagery generation, while low-vivid imagers rely on a more schematic and conscious framework. Consequently, our results provide additional insight into the interaction between perceptual and contextual cues when experiencing purely auditorily or multisensorily induced vection.
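
The ITD cue contrasted with ILD above can be approximated with a simple rigid-sphere head model. A minimal sketch using Woodworth's textbook formula (the head radius and speed of sound are standard illustrative values, not parameters from this study):

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a rigid
    spherical head via Woodworth's formula: ITD = (r/c)(theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))

# ITD grows monotonically from the midline toward the side, which is
# what makes it a usable horizontal-plane localization cue.
print(round(woodworth_itd(0) * 1e6))   # → 0 (microseconds, straight ahead)
print(round(woodworth_itd(90) * 1e6))  # → 656 (microseconds, at the side)
```

ILD, by contrast, depends strongly on frequency and head shadowing and has no comparably compact closed form, which is one reason the two cues can be manipulated independently by filtering, as in the study above.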

  6. Sounds of Space

    Science.gov (United States)

    Gurnett, D. A.

    2005-12-01

    Starting in the early 1960s, spacecraft-borne plasma wave instruments revealed that space is filled with an astonishing variety of radio and plasma wave sounds, which have come to be called "sounds of space." For over forty years these sounds have been collected and played to a wide variety of audiences, often as the result of press conferences or press releases involving various NASA projects for which the University of Iowa has provided plasma wave instruments. This activity has led to many interviews on local and national radio programs, and occasionally on programs having world-wide coverage, such as the BBC. As a result of this media coverage, we have been approached many times by composers requesting copies of our space sounds for use in their various projects, many of which involve electronic synthesis of music. One of these collaborations led to "Sun Rings," a musical event produced by the Kronos Quartet that has played to large audiences all over the world. With the availability of modern computer graphic techniques we have recently been attempting to integrate some of these sounds of space into an educational audio/video web site that illustrates the scientific principles involved in the origin of space plasma waves. Typically I try to emphasize that a substantial gas pressure exists everywhere in space in the form of an ionized gas called a plasma, and that this plasma can lead to a wide variety of wave phenomena. Examples of some of this audio/video material will be presented.

  7. Mind your pricing cues.

    Science.gov (United States)

    Anderson, Eric; Simester, Duncan

    2003-09-01

    For most of the items they buy, consumers don't have an accurate sense of what the price should be. Ask them to guess how much a four-pack of 35-mm film costs, and you'll get a variety of wrong answers: Most people will underestimate; many will only shrug. Research shows that consumers' knowledge of the market is so far from perfect that it hardly deserves to be called knowledge at all. Yet people happily buy film and other products every day. Is this because they don't care what kind of deal they're getting? No. Remarkably, it's because they rely on retailers to tell them whether they're getting a good price. In subtle and not-so-subtle ways, retailers send signals to customers, telling them whether a given price is relatively high or low. In this article, the authors review several common pricing cues retailers use--"sale" signs, prices that end in 9, signpost items, and price-matching guarantees. They also offer some surprising facts about how--and how well--those cues work. For instance, the authors' tests with several mail-order catalogs reveal that including the word "sale" beside a price can increase demand by more than 50%. The practice of using a 9 at the end of a price to denote a bargain is so common, you'd think customers would be numb to it. Yet in a study the authors did involving a women's clothing catalog, they increased demand by a third just by changing the price of a dress from $34 to $39. Pricing cues are powerful tools for guiding customers' purchasing decisions, but they must be applied judiciously. Used inappropriately, the cues may breach customers' trust, reduce brand equity, and give rise to lawsuits.

  8. Otite média recorrente e habilidade de localização sonora em pré-escolares Otitis media and sound localization ability in preschool children

    Directory of Open Access Journals (Sweden)

    Aveliny Mantovan Lima-Gregio

    2010-12-01

    Full Text Available PURPOSE: to compare the performance of 40 preschool children on the sound localization test with their parents' answers to a questionnaire investigating otitis media (OM) episodes and symptoms indicative of audiological and auditory processing disorders. METHODS: after the questionnaire responses were analyzed, two groups were formed: OG, with a history of OM, and CG, without (control group). Each group of 20 preschool children of both genders was given the sound localization test in five directions (Pereira, 1993). RESULTS: the comparison between OG and CG showed no statistically significant difference (p=1.0000). CONCLUSION: recurrent OM episodes during early childhood did not influence the sound localization ability of the preschool children in this study. Although both instruments (the questionnaire and the sound localization test) are inexpensive and easy to apply, they were not sufficient to differentiate the two groups tested.

  9. local

    Directory of Open Access Journals (Sweden)

    Abílio Amiguinho

    2005-01-01

    Full Text Available The process of socio-educational territorialisation in rural contexts is the topic of this text. The theme poses the challenge of addressing it with either the problem of social exclusion or that of local development as the main axis of discussion. The reasons for locating the discussion in the latter field of analysis are presented in the first part of the text, where theoretical and political reasons are articulated, because these are projects whose intentions and practices call for the political both in the theoretical debate and in the choices that precede intervention. Drawing on research conducted over several years, I use contributions that aim to discuss and clarify how the school can be a potential locus of local development. Its identification and recognition as a local institution (both by those who work and live in it and by those who act in the surrounding context) are crucial steps in progressively constituting the school as a partner for development. The promotion of local values and roots, the reconstruction of socio-personal and local identities, the production of sociabilities, and the framing and solution of shared problems were the dimensions of a markedly globalising socio-educative intervention. This scenario, as is argued, was also, intentionally, one of transformation and deliberate change of the school and of the administration of educational territories.

  10. Sound knowledge

    DEFF Research Database (Denmark)

    Kauffmann, Lene Teglhus

    as knowledge based on reflexive practices. I chose ‘health promotion’ as the field for my research as it utilises knowledge produced in several research disciplines, among these both quantitative and qualitative. I mapped out the institutions, actors, events, and documents that constituted the field of health...... of the research is to investigate what is considered to ‘work as evidence’ in health promotion and how the ‘evidence discourse’ influences social practices in policymaking and in research. From investigating knowledge practices in the field of health promotion, I develop the concept of sound knowledge...... result of a rigorous and standardized research method. However, this anthropological analysis shows that evidence and evidence-based is a hegemonic ‘way of knowing’ that sometimes transposes everyday reasoning into an epistemological form. However, the empirical material shows a variety of understandings...

  11. Sound Search Engine Concept

    DEFF Research Database (Denmark)

    2006-01-01

    Sound search is provided by the major search engines; however, indexing is text based, not sound based. We will establish a dedicated sound search service based on sound feature indexing. The current demo shows the concept of the sound search engine. The first engine will be released June...

  12. Spatial Hearing with Incongruent Visual or Auditory Room Cues

    Science.gov (United States)

    Gil-Carvajal, Juan C.; Cubick, Jens; Santurette, Sébastien; Dau, Torsten

    2016-11-01

    In day-to-day life, humans usually perceive the location of sound sources as outside their heads. This externalized auditory spatial perception can be reproduced through headphones by recreating the sound pressure generated by the source at the listener’s eardrums. This requires the acoustical features of the recording environment and listener’s anatomy to be recorded at the listener’s ear canals. Although the resulting auditory images can be indistinguishable from real-world sources, their externalization may be less robust when the playback and recording environments differ. Here we tested whether a mismatch between playback and recording room reduces perceived distance, azimuthal direction, and compactness of the auditory image, and whether this is mostly due to incongruent auditory cues or to expectations generated from the visual impression of the room. Perceived distance ratings decreased significantly when collected in a more reverberant environment than the recording room, whereas azimuthal direction and compactness remained room independent. Moreover, modifying visual room-related cues had no effect on these three attributes, while incongruent auditory room-related cues between the recording and playback room did affect distance perception. Consequently, the external perception of virtual sounds depends on the degree of congruency between the acoustical features of the environment and the stimuli.
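
The headphone reproduction described here amounts to convolving the source signal with left- and right-ear impulse responses recorded at the listener's ear canals. A toy sketch of that rendering step (the "HRIRs" below are synthetic delta functions encoding only a delay and a gain, not measured responses):

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with per-ear impulse responses to obtain
    a two-channel binaural signal (column 0: left, column 1: right)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=1)

fs = 48000
# Toy 'HRIRs' for a source on the right: the left ear receives the sound
# later and quieter (an ITD and ILD baked into the impulse responses).
itd_samples = int(0.0006 * fs)            # 28 samples at 48 kHz
hrir_right = np.zeros(64); hrir_right[0] = 1.0
hrir_left = np.zeros(64); hrir_left[itd_samples] = 0.5
click = np.zeros(256); click[0] = 1.0
out = render_binaural(click, hrir_left, hrir_right)
print(out.shape)  # → (319, 2): len(mono) + len(hrir) - 1 samples per ear
```

With measured HRIRs that also capture the recording room, the same convolution reproduces the room-related auditory cues whose congruency with the playback environment the study above manipulates.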

  13. Emotional cues, emotional signals, and their contrasting effects on listener valence

    DEFF Research Database (Denmark)

    Christensen, Justin

    2015-01-01

    that are mimetic of emotional cues interact in less clear and less cohesive manners with their corresponding haptic signals. For my investigations, subjects listen to samples from the International Affective Digital Sounds Library[2] and selected musical works on speakers in combination with a tactile transducer...... and of benefit to both the sender and the receiver of the signal, otherwise they would cease to have the intended effect of communication. In contrast with signals, animal cues are much more commonly unimodal as they are unintentional by the sender. In my research, I investigate whether subjects exhibit...... are more emotional cues (e.g. sadness or calmness). My hypothesis is that musical and sound stimuli that are mimetic of emotional signals should combine to elicit a stronger response when presented as a multimodal stimulus as opposed to as a unimodal stimulus, whereas musical or sound stimuli...

  14. Cue conflicts in context

    DEFF Research Database (Denmark)

    Boeg Thomsen, Ditte; Poulsen, Mads

    2015-01-01

    When learning their first language, children develop strategies for assigning semantic roles to sentence structures, depending on morphosyntactic cues such as case and word order. Traditionally, comprehension experiments have presented transitive clauses in isolation, and crosslinguistically...... preschoolers. However, object-first clauses may be context-sensitive structures, which are infelicitous in isolation. In a second act-out study we presented OVS clauses in supportive and unsupportive discourse contexts and in isolation and found that five-to-six-year-olds’ OVS comprehension was enhanced...

  15. Acoustic analysis of trill sounds.

    Science.gov (United States)

    Dhananjaya, N; Yegnanarayana, B; Bhaskararao, Peri

    2012-04-01

    In this paper, the acoustic-phonetic characteristics of steady apical trills--trill sounds produced by the periodic vibration of the apex of the tongue--are studied. Signal processing methods, namely, zero-frequency filtering and zero-time liftering of speech signals, are used to analyze the excitation source and the resonance characteristics of the vocal tract system, respectively. Although it is natural to expect the effect of trilling on the resonances of the vocal tract system, it is interesting to note that trilling influences the glottal source of excitation as well. The excitation characteristics derived using zero-frequency filtering of speech signals are glottal epochs, strength of impulses at the glottal epochs, and instantaneous fundamental frequency of the glottal vibration. Analysis based on zero-time liftering of speech signals is used to study the dynamic resonance characteristics of vocal tract system during the production of trill sounds. Qualitative analysis of trill sounds in different vowel contexts, and the acoustic cues that may help spotting trills in continuous speech are discussed.
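
The zero-frequency filtering mentioned here can be sketched compactly: the differenced signal is passed twice through a resonator with poles at zero frequency, the resulting polynomial trend is removed by repeated local-mean subtraction over roughly one pitch period, and glottal epoch candidates then appear as positive-going zero crossings. A simplified illustration on a synthetic impulse train (the window length, number of mean-subtraction passes, and test signal are illustrative choices, not the paper's exact procedure):

```python
import numpy as np

def zero_frequency_filter(signal, fs, win_ms=10.0):
    """Sketch of zero-frequency filtering: difference the signal, pass it
    through two cascaded 0-Hz resonators (double poles at z = 1), then
    remove the slowly varying polynomial trend by local-mean subtraction."""
    x = np.diff(signal, prepend=signal[0])
    y = x
    for _ in range(2):                        # two cascaded 0-Hz resonators
        out = np.zeros(len(y))
        for n in range(len(y)):
            out[n] = y[n]                     # y[n] = 2 y[n-1] - y[n-2] + x[n]
            if n >= 1:
                out[n] += 2.0 * out[n - 1]
            if n >= 2:
                out[n] -= out[n - 2]
        y = out
    w = int(fs * win_ms / 1000)               # window ~ one pitch period
    kernel = np.ones(w) / w
    for _ in range(3):                        # repeated trend removal
        y = y - np.convolve(y, kernel, mode="same")
    return y

def glottal_epochs(zff):
    """Epoch candidates: negative-to-positive zero crossings."""
    return np.nonzero((zff[:-1] < 0) & (zff[1:] >= 0))[0] + 1

# Synthetic 'voiced' excitation: a 100 Hz impulse train at fs = 8 kHz.
fs, f0 = 8000, 100
s = np.zeros(int(0.8 * fs))
s[:: fs // f0] = 1.0
epochs = glottal_epochs(zero_frequency_filter(s, fs))
```

Away from the edge-effect regions, the detected epochs fall roughly one pitch period (80 samples) apart, mirroring how the method exposes glottal epochs and instantaneous fundamental frequency in real speech.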

  16. NASA Space Sounds API

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has released a series of space sounds via sound cloud. We have abstracted away some of the hassle in accessing these sounds, so that developers can play with...

  17. Hear where we are sound, ecology, and sense of place

    CERN Document Server

    Stocker, Michael

    2013-01-01

    Throughout history, hearing and sound perception have been typically framed in the context of how sound conveys information and how that information influences the listener. Hear Where We Are inverts this premise and examines how humans and other hearing animals use sound to establish acoustical relationships with their surroundings. This simple inversion reveals a panoply of possibilities by which we can re-evaluate how hearing animals use, produce, and perceive sound. Nuance in vocalizations become signals of enticement or boundary setting; silence becomes a field ripe in auditory possibilities; predator/prey relationships are infused with acoustic deception, and sounds that have been considered territorial cues become the fabric of cooperative acoustical communities. This inversion also expands the context of sound perception into a larger perspective that centers on biological adaptation within acoustic habitats. Here, the rapid synchronized flight patterns of flocking birds and the tight maneuvering of s...

  18. Assessment of rival males through the use of multiple sensory cues in the fruitfly Drosophila pseudoobscura.

    Directory of Open Access Journals (Sweden)

    Chris P Maguire

    Full Text Available Environments vary stochastically, and animals need to behave in ways that best fit the conditions in which they find themselves. The social environment is particularly variable, and responding appropriately to it can be vital for an animal's success. However, cues of social environment are not always reliable, and animals may need to balance accuracy against the risk of failing to respond if local conditions or interfering signals prevent them detecting a cue. Recent work has shown that many male Drosophila fruit flies respond to the presence of rival males, and that these responses increase their success in acquiring mates and fathering offspring. In Drosophila melanogaster males detect rivals using auditory, tactile and olfactory cues. However, males fail to respond to rivals if any two of these senses are not functioning: a single cue is not enough to produce a response. Here we examined cue use in the detection of rival males in a distantly related Drosophila species, D. pseudoobscura, where auditory, olfactory, tactile and visual cues were manipulated to assess the importance of each sensory cue singly and in combination. In contrast to D. melanogaster, male D. pseudoobscura require intact olfactory and tactile cues to respond to rivals. Visual cues were not important for detecting rival D. pseudoobscura, while results on auditory cues appeared puzzling. This difference in cue use in two species in the same genus suggests that cue use is evolutionarily labile, and may evolve in response to ecological or life history differences between species.

  19. Decision Utility, Incentive Salience, and Cue-Triggered "Wanting"

    Science.gov (United States)

    Berridge, Kent C; Aldridge, J Wayne

    2009-01-01

    This chapter examines brain mechanisms of reward utility operating at particular decision moments in life: moments such as when one encounters an image, sound, scent, or other cue associated in the past with a particular reward, or perhaps just when one vividly imagines that cue. Such a cue can often trigger a sudden motivational urge to pursue its reward and sometimes a decision to do so. Drawing on a utility taxonomy that distinguishes among subtypes of reward utility (predicted utility, decision utility, experienced utility, and remembered utility), it is shown how cue-triggered cravings, such as an addict's surrender to relapse, can hinge on special transformations by brain mesolimbic systems of one utility subtype, namely decision utility. The chapter focuses on a particular form of decision utility called incentive salience, a type of "wanting" for rewards that is amplified by brain mesolimbic systems. Sudden peaks of intensity of incentive salience, caused by neurobiological mechanisms, can elevate the decision utility of a particular reward at the moment its cue occurs. An understanding of what happens at such moments leads to a better understanding of the mechanisms at work in decision making in general.

  20. Cue reactivity towards shopping cues in female participants.

    Science.gov (United States)

    Starcke, Katrin; Schlereth, Berenike; Domass, Debora; Schöler, Tobias; Brand, Matthias

    2013-03-01

    Background and aims: It is currently under debate whether pathological buying can be considered as a behavioural addiction. Addictions have often been investigated with cue-reactivity paradigms to assess subjective, physiological and neural craving reactions. The current study aims at testing whether cue reactivity towards shopping cues is related to pathological buying tendencies. Methods: A sample of 66 non-clinical female participants rated shopping-related pictures concerning valence, arousal, and subjective craving. In a subgroup of 26 participants, electrodermal reactions towards those pictures were additionally assessed. Furthermore, all participants were screened concerning pathological buying tendencies and baseline craving for shopping. Results: Results indicate a relationship between the subjective ratings of the shopping cues and pathological buying tendencies, even if baseline craving for shopping was controlled for. Electrodermal reactions were partly related to the subjective ratings of the cues. Conclusions: Cue reactivity may be a potential correlate of pathological buying tendencies. Thus, pathological buying may be accompanied by craving reactions towards shopping cues. Results support the assumption that pathological buying can be considered as a behavioural addiction. From a methodological point of view, results support the view that the cue-reactivity paradigm is suited for the investigation of craving reactions in pathological buying, and future studies should implement this paradigm in clinical samples.

  1. Suppressive competition: how sounds may cheat sight.

    Science.gov (United States)

    Kayser, Christoph; Remedios, Ryan

    2012-02-23

    In this issue of Neuron, Iurilli et al. (2012) demonstrate that auditory cortex activation directly engages local GABAergic circuits in V1 to induce sound-driven hyperpolarizations in layer 2/3 and layer 6 pyramidal neurons. Thereby, sounds can directly suppress V1 activity and visual driven behavior. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Grasp cueing and joint attention.

    Science.gov (United States)

    Tschentscher, Nadja; Fischer, Martin H

    2008-10-01

    We studied how two different hand posture cues affect joint attention in normal observers. Visual targets appeared over lateralized objects, with different delays after centrally presented hand postures. Attention was cued by either hand direction or the congruency between hand aperture and object size. Participants pressed a button when they detected a target. Direction cues alone facilitated target detection following short delays but aperture cues alone were ineffective. In contrast, when hand postures combined direction and aperture cues, aperture congruency effects without directional congruency effects emerged and persisted, but only for power grips. These results suggest that parallel parameter specification makes joint attention mechanisms exquisitely sensitive to the timing and content of contextual cues.

  3. Compound cueing in free recall

    Science.gov (United States)

    Lohnas, Lynn J.; Kahana, Michael J.

    2013-01-01

    According to the retrieved context theory of episodic memory, the cue for recall of an item is a weighted sum of recently activated cognitive states, including previously recalled and studied items as well as their associations. We show that this theory predicts there should be compound cueing in free recall. Specifically, the temporal contiguity effect should be greater when the two most recently recalled items were studied in contiguous list positions. A meta-analysis of published free recall experiments demonstrates evidence for compound cueing in both conditional response probabilities and inter-response times. To help rule out a rehearsal-based account of these compound cueing effects, we conducted an experiment with immediate, delayed and continual-distractor free recall conditions. Consistent with retrieved context theory but not with a rehearsal-based account, compound cueing was present in all conditions, and was not significantly influenced by the presence of interitem distractors. PMID:23957364
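
The "weighted sum of recently activated cognitive states" at the heart of retrieved context theory can be illustrated with a toy context-drift update in the spirit of such models (the vector dimension, random item features, and the rho/beta parameters are illustrative assumptions, not fitted values from the paper):

```python
import numpy as np

def update_context(context, item_vec, rho=0.9, beta=0.4):
    """Drift the context vector toward the current item's features:
    c_new = rho * c + beta * f, then renormalize to unit length."""
    c = rho * context + beta * item_vec
    return c / np.linalg.norm(c)

rng = np.random.default_rng(0)
items = [rng.normal(size=8) for _ in range(5)]
items = [v / np.linalg.norm(v) for v in items]     # unit item features

c = np.zeros(8); c[0] = 1.0                        # arbitrary start state
contexts = []
for f in items:
    c = update_context(c, f)
    contexts.append(c)

# The current context overlaps most with contexts of recently studied
# items, so a context cue favors temporally contiguous recalls.
sims = [float(contexts[-1] @ ctx) for ctx in contexts]
print(sims)
```

Because the overlap decays with lag, a cue built from the just-recalled item's retrieved context preferentially reactivates neighbors from nearby list positions, which is the mechanism behind the compound-cueing prediction tested above.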

  4. The Sound of Science

    Science.gov (United States)

    Merwade, Venkatesh; Eichinger, David; Harriger, Bradley; Doherty, Erin; Habben, Ryan

    2014-01-01

    While the science of sound can be taught by explaining the concept of sound waves and vibrations, the authors of this article focused their efforts on creating a more engaging way to teach the science of sound--through engineering design. In this article they share the experience of teaching sound to third graders through an engineering challenge…

  5. Sounds Exaggerate Visual Shape

    Science.gov (United States)

    Sweeny, Timothy D.; Guzman-Martinez, Emmanuel; Ortega, Laura; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    While perceiving speech, people see mouth shapes that are systematically associated with sounds. In particular, a vertically stretched mouth produces a /woo/ sound, whereas a horizontally stretched mouth produces a /wee/ sound. We demonstrate that hearing these speech sounds alters how we see aspect ratio, a basic visual feature that contributes…

  6. Making Sound Connections

    Science.gov (United States)

    Deal, Walter F., III

    2007-01-01

    Sound offers amazing insights into the world. Sound waves may be defined as mechanical energy that moves through air or other medium as a longitudinal wave and consists of pressure fluctuations. Humans and animals alike use sound as a means of communication and a tool for survival. Mammals, such as bats, use ultrasonic sound waves to…

  7. No two cues are alike: Depth of learning during infancy is dependent on what orients attention.

    Science.gov (United States)

    Wu, Rachel; Kirkham, Natasha Z

    2010-10-01

    Human infants develop a variety of attentional mechanisms that allow them to extract relevant information from a cluttered multimodal world. We know that both social and nonsocial cues shift infants' attention, but not how these cues differentially affect learning of multimodal events. Experiment 1 used social cues to direct 8- and 4-month-olds' attention to two audiovisual events (i.e., animations of a cat or dog accompanied by particular sounds) while identical distractor events played in another location. Experiment 2 directed 8-month-olds' attention with colorful flashes to the same events. Experiment 3 measured baseline learning without attention cues both with the familiarization and test trials (no cue condition) and with only the test trials (test control condition). The 8-month-olds exposed to social cues showed specific learning of audiovisual events. The 4-month-olds displayed only general spatial learning from social cues, suggesting that specific learning of audiovisual events from social cues may be a function of experience. Infants cued with the colorful flashes looked indiscriminately to both cued locations during test (similar to the 4-month-olds learning from social cues) despite attending for equal duration to the training trials as the 8-month-olds with the social cues. Results from Experiment 3 indicated that the learning effects in Experiments 1 and 2 resulted from exposure to the different cues and multimodal events. We discuss these findings in terms of the perceptual differences and relevance of the cues. Copyright 2010 Elsevier Inc. All rights reserved.

  8. Modulation frequency as a cue for auditory speed perception.

    Science.gov (United States)

    Senna, Irene; Parise, Cesare V; Ernst, Marc O

    2017-07-12

    Unlike vision, the mechanisms underlying auditory motion perception are poorly understood. Here we describe an auditory motion illusion revealing a novel cue to auditory speed perception: the temporal frequency of amplitude modulation (AM-frequency), typical for rattling sounds. Naturally, corrugated objects sliding across each other generate rattling sounds whose AM-frequency tends to directly correlate with speed. We found that AM-frequency modulates auditory speed perception in a highly systematic fashion: moving sounds with higher AM-frequency are perceived as moving faster than sounds with lower AM-frequency. Even more interestingly, sounds with higher AM-frequency also induce stronger motion aftereffects. This reveals the existence of specialized neural mechanisms for auditory motion perception, which are sensitive to AM-frequency. Thus, in spatial hearing, the brain successfully capitalizes on the AM-frequency of rattling sounds to estimate the speed of moving objects. This tightly parallels previous findings in motion vision, where spatio-temporal frequency of moving displays systematically affects both speed perception and the magnitude of the motion aftereffects. Such an analogy with vision suggests that motion detection may rely on canonical computations, with similar neural mechanisms shared across the different modalities. © 2017 The Author(s).
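    The reported speed-to-AM-frequency link can be illustrated with a small synthetic stimulus generator (a sketch only; the sample rate, the modulation constant `am_per_m`, and the envelope shape are assumptions, not the study's stimuli):

```python
# A rattling-like sound approximated as a noise carrier multiplied by a
# sinusoidal amplitude envelope whose rate (the AM-frequency) scales with
# the simulated object speed.

import math
import random

SAMPLE_RATE = 8000  # Hz (assumed)

def rattle(speed_m_s, duration_s=0.5, am_per_m=20.0, seed=0):
    """Noise carrier with AM-frequency proportional to speed.

    am_per_m is a hypothetical constant: AM cycles generated per metre
    travelled, so am_freq = am_per_m * speed.
    """
    rng = random.Random(seed)
    am_freq = am_per_m * speed_m_s
    n = int(duration_s * SAMPLE_RATE)
    out = []
    for i in range(n):
        t = i / SAMPLE_RATE
        envelope = 0.5 * (1.0 + math.sin(2.0 * math.pi * am_freq * t))
        out.append(envelope * rng.uniform(-1.0, 1.0))
    return out, am_freq

slow, f_slow = rattle(speed_m_s=0.5)
fast, f_fast = rattle(speed_m_s=2.0)
# The faster motion yields the higher AM-frequency -- the cue listeners
# reportedly use to judge speed.
```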

  9. Little Sounds

    Directory of Open Access Journals (Sweden)

    Baker M. Bani-Khair

    2017-10-01

    Full Text Available The Spider and the Fly   You little spider, To death you aspire... Or seeking a web wider, To death all walking, No escape you all fighters… Weak and fragile in shape and might, Whatever you see in the horizon, That is destiny whatever sight. And tomorrow the spring comes, And the flowers bloom, And the grasshopper leaps high, And the frogs happily cry, And the flies smile nearby, To that end, The spider has a plot, To catch the flies by his net, A mosquito has fallen down in his net, Begging him to set her free, Out of that prison, To her freedom she aspires, Begging...Imploring...crying,  That is all what she requires, But the spider vows never let her free, His power he admires, Turning blind to light, And with his teeth he shall bite, Leaving her in desperate might, Unable to move from site to site, Tied up with strings in white, Wrapped up like a dead man, Waiting for his grave at night,   The mosquito says, Oh little spider, A stronger you are than me in power, But listen to my words before death hour, Today is mine and tomorrow is yours, No escape from death... Whatever the color of your flower…     Little sounds The Ant The ant is a little creature with a ferocious soul, Looking and looking for more and more, You can simply crush it like dead mold, Or you can simply leave it alone, I wonder how strong and strong they are! Working day and night in a small hole, Their motto is work or whatever you call… A big boon they have and joy in fall, Because they found what they store, A lesson to learn and memorize all in all, Work is something that you should not ignore!   The butterfly: I’m the butterfly Beautiful like a blue clear sky, Or sometimes look like snow, Different in colors, shapes and might, But something to know that we always die, So fragile, weak and thin, Lighter than a glimpse and delicate as light, Something to know for sure… Whatever you have in life and all these fields, You are not happier than a butterfly

  10. Electromagnetic sounding of the Earth's interior

    CERN Document Server

    Spichak, Viacheslav V

    2015-01-01

    Electromagnetic Sounding of the Earth's Interior, 2nd edition, provides a comprehensive, up-to-date collection of contributions covering methodological, computational and practical aspects of electromagnetic sounding of the Earth by different techniques at global, regional and local scales. Moreover, it contains new developments such as the concept of self-consistent tasks of geophysics and 3-D interpretation of the TEM sounding which, so far, have not all been covered by one book. Electromagnetic Sounding of the Earth's Interior 2nd edition consists of three parts: I - EM sounding methods, II - Forward modelling and inversion techniques, and III - Data processing, analysis, modelling and interpretation. The new edition includes brand new chapters on pulse and frequency electromagnetic sounding for hydrocarbon offshore exploration. Additionally, all other chapters have been extensively updated to include new developments. Presents recently developed methodological findings of the earth's study, including seism...

  11. Techniques and applications for binaural sound manipulation in human-machine interfaces

    Science.gov (United States)

    Begault, Durand R.; Wenzel, Elizabeth M.

    1992-01-01

    The implementation of binaural sound to speech and auditory sound cues (auditory icons) is addressed from both an applications and technical standpoint. Techniques overviewed include processing by means of filtering with head-related transfer functions. Application to advanced cockpit human interface systems is discussed, although the techniques are extendable to any human-machine interface. Research issues pertaining to three-dimensional sound displays under investigation at the Aerospace Human Factors Division at NASA Ames Research Center are described.
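    The core technique mentioned here, filtering a monaural signal with a pair of head-related filters, can be sketched in miniature. The impulse responses below are toy stand-ins that encode only an interaural time difference (a pure delay) and an interaural level difference (a gain); measured HRIRs would be hundreds of taps long.

```python
def convolve(signal, impulse_response):
    # Direct-form FIR convolution.
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

def spatialize_left(mono, itd_samples, ild_gain):
    """Toy binaural rendering of a source on the listener's left: the near
    (left) ear gets the signal as-is, the far (right) ear a delayed and
    attenuated copy. itd_samples and ild_gain are illustrative stand-ins
    for what a measured HRTF pair would encode."""
    near_hrir = [1.0]
    far_hrir = [0.0] * itd_samples + [ild_gain]
    return convolve(mono, near_hrir), convolve(mono, far_hrir)

mono = [0.0, 1.0, 0.5, -0.5, 0.0]
left, right = spatialize_left(mono, itd_samples=3, ild_gain=0.6)
# The right-ear channel is a 3-sample-delayed (ITD) and quieter (ILD) copy.
```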

  12. Global Repetition Influences Contextual Cueing

    Science.gov (United States)

    Zang, Xuelian; Zinchenko, Artyom; Jia, Lina; Li, Hong

    2018-01-01

    Our visual system has a striking ability to improve visual search based on the learning of repeated ambient regularities, an effect named contextual cueing. Whereas most of the previous studies investigated the contextual cueing effect with the same number of repeated and non-repeated search displays per block, the current study focused on whether a global repetition frequency, formed by different presentation ratios between repeated and non-repeated configurations, influences the contextual cueing effect. Specifically, the number of repeated and non-repeated displays presented in each block was manipulated: 12:12, 20:4, 4:20, and 4:4 in Experiments 1–4, respectively. The results revealed a significant contextual cueing effect when the global repetition frequency was high (≥1:1 ratio) in Experiments 1, 2, and 4, given that processing of repeated displays was expedited relative to non-repeated displays. Nevertheless, the contextual cueing effect was reduced to a non-significant level when the repetition frequency dropped to 4:20 in Experiment 3. These results suggested that the presentation frequency of repeated relative to non-repeated displays could influence the strength of contextual cueing. In other words, global repetition statistics could be a crucial factor mediating the contextual cueing effect. PMID:29636716

  13. Early visual deprivation prompts the use of body-centered frames of reference for auditory localization.

    Science.gov (United States)

    Vercillo, Tiziana; Tonelli, Alessia; Gori, Monica

    2018-01-01

    The effects of early visual deprivation on auditory spatial processing are controversial. Results from recent psychophysical studies show that people who were born blind have a spatial impairment in localizing sound sources within specific auditory settings, while previous psychophysical studies revealed enhanced auditory spatial abilities in early blind compared to sighted individuals. An explanation of why an auditory spatial deficit is sometimes observed within blind populations and its task-dependency remains to be clarified. We investigated auditory spatial perception in early blind adults and demonstrated that the deficit derives from blind individual's reduced ability to remap sound locations using an external frame of reference. We found that performance in blind population was severely impaired when they were required to localize brief auditory stimuli with respect to external acoustic landmarks (external reference frame) or when they had to reproduce the spatial distance between two sounds. However, they performed similarly to sighted controls when had to localize sounds with respect to their own hand (body-centered reference frame), or to judge the distances of sounds from their finger. These results suggest that early visual deprivation and the lack of visual contextual cues during the critical period induce a preference for body-centered over external spatial auditory representations. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Matching cue size and task properties in exogenous attention.

    Science.gov (United States)

    Burnett, Katherine E; d'Avossa, Giovanni; Sapir, Ayelet

    2013-01-01

    Exogenous attention is an involuntary, reflexive orienting response that results in enhanced processing at the attended location. The standard view is that this enhancement generalizes across visual properties of a stimulus. We test whether the size of an exogenous cue sets the attentional field and whether this leads to different effects on stimuli with different visual properties. In a dual task with a random-dot kinematogram (RDK) in each quadrant of the screen, participants discriminated the direction of moving dots in one RDK and localized one red dot. Precues were uninformative and consisted of either a large or a small luminance-change frame. The motion discrimination task showed attentional effects following both large and small exogenous cues. The red dot probe localization task showed attentional effects following a small cue, but not a large cue. Two additional experiments showed that the different effects on localization were not due to reduced spatial uncertainty or suppression of RDK dots in the surround. These results indicate that the effects of exogenous attention depend on the size of the cue and the properties of the task, suggesting the involvement of receptive fields with different sizes in different tasks. These attentional effects are likely to be driven by bottom-up mechanisms in early visual areas.

  15. Associative cueing of attention through implicit feature-location binding.

    Science.gov (United States)

    Girardi, Giovanna; Nico, Daniele

    2017-09-01

    In order to assess associative learning between two task-irrelevant features in cueing spatial attention, we devised a task in which participants had to make an identity comparison between two sequential visual stimuli. Unbeknownst to them, the location of the second stimulus could be predicted by the colour of the first or by a concurrent sound. Albeit unnecessary for performing the identity-matching judgment, the predictive features thus provided an arbitrary association favouring the spatial anticipation of the second stimulus. A significant advantage was found, with faster responses at predicted compared to non-predicted locations. Results clearly demonstrated an associative cueing of attention via a second-order arbitrary feature/location association, but with a substantial discrepancy depending on the sensory modality of the predictive feature. With colour as the predictive feature, significant advantages emerged only after the completion of three blocks of trials. On the contrary, sound affected responses from the first block of trials, and significant advantages were manifest from the beginning of the second. The possible mechanisms underlying the associative cueing of attention in both conditions are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Individual differences in using geometric and featural cues to maintain spatial orientation: cue quantity and cue ambiguity are more important than cue type.

    Science.gov (United States)

    Kelly, Jonathan W; McNamara, Timothy P; Bodenheimer, Bobby; Carr, Thomas H; Rieser, John J

    2009-02-01

    Two experiments explored the role of environmental cues in maintaining spatial orientation (sense of self-location and direction) during locomotion. Of particular interest was the importance of geometric cues (provided by environmental surfaces) and featural cues (nongeometric properties provided by striped walls) in maintaining spatial orientation. Participants performed a spatial updating task within virtual environments containing geometric or featural cues that were ambiguous or unambiguous indicators of self-location and direction. Cue type (geometric or featural) did not affect performance, but the number and ambiguity of environmental cues did. Gender differences, interpreted as a proxy for individual differences in spatial ability and/or experience, highlight the interaction between cue quantity and ambiguity. When environmental cues were ambiguous, men stayed oriented with either one or two cues, whereas women stayed oriented only with two. When environmental cues were unambiguous, women stayed oriented with one cue.

  17. Sensory modality of smoking cues modulates neural cue reactivity.

    Science.gov (United States)

    Yalachkov, Yavor; Kaiser, Jochen; Görres, Andreas; Seehaus, Arne; Naumer, Marcus J

    2013-01-01

    Behavioral experiments have demonstrated that the sensory modality of presentation modulates drug cue reactivity. The present study on nicotine addiction tested whether neural responses to smoking cues are modulated by the sensory modality of stimulus presentation. We measured brain activation using functional magnetic resonance imaging (fMRI) in 15 smokers and 15 nonsmokers while they viewed images of smoking paraphernalia and control objects and while they touched the same objects without seeing them. Haptically presented, smoking-related stimuli induced more pronounced neural cue reactivity than visual cues in the left dorsal striatum in smokers compared to nonsmokers. The severity of nicotine dependence correlated positively with the preference for haptically explored smoking cues in the left inferior parietal lobule/somatosensory cortex, right fusiform gyrus/inferior temporal cortex/cerebellum, hippocampus/parahippocampal gyrus, posterior cingulate cortex, and supplementary motor area. These observations are in line with the hypothesized role of the dorsal striatum for the expression of drug habits and the well-established concept of drug-related automatized schemata, since haptic perception is more closely linked to the corresponding object-specific action pattern than visual perception. Moreover, our findings demonstrate that with the growing severity of nicotine dependence, brain regions involved in object perception, memory, self-processing, and motor control exhibit an increasing preference for haptic over visual smoking cues. This difference was not found for control stimuli. Considering the sensory modality of the presented cues could serve to develop more reliable fMRI-specific biomarkers, more ecologically valid experimental designs, and more effective cue-exposure therapies of addiction.

  18. FREQUENCY COMPONENT EXTRACTION OF HEARTBEAT CUES WITH SHORT TIME FOURIER TRANSFORM (STFT)

    Directory of Open Access Journals (Sweden)

    Sumarna Sumarna

    2017-01-01

    An electro-acoustic human heartbeat detector has been made with the main parts: (a) a stethoscope (chest piece), (b) a condenser microphone, (c) a transistor amplifier, and (d) a cue-analysis program in MATLAB. The frequency components contained in heartbeat cues have also been extracted with the Short Time Fourier Transform (STFT) from 9 volunteers. The results of the analysis showed that the heart rate appeared in every cue's frequency spectrum together with its harmonics. The steps of the research included designing the detector instrument; testing and repairing the instrument; recording heartbeat cues with the Sound Forge 10 program and storing them in wav files; trimming the cues at the start and the end; and extracting/analysing the cues using MATLAB. The MATLAB program included a filter (a bandpass filter with a bandwidth between 0.01 and 110 Hz), segmentation of the cues with a Hamming window, computation of each segment via the Short Time Fourier Transform (STFT), and display of the results in a frequency spectrum graph. Keywords: frequency component extraction, heartbeat cues, Short Time Fourier Transform
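    A rough Python translation of the analysis chain just described (the original used MATLAB; the synthetic 2 Hz tone, sample rate, and frame sizes below are illustrative assumptions, not the recorded data):

```python
# Window the signal with a Hamming window, take the DFT of each frame
# (an STFT), and read off the dominant frequency per frame.

import cmath
import math

FS = 100.0  # sampling rate, Hz (assumed)

def hamming(n):
    return [0.54 - 0.46 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def dft_magnitudes(frame):
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame)))
            for k in range(n // 2)]

def stft_peak_freqs(signal, frame_len=200, hop=100):
    """Dominant frequency of each frame, in Hz."""
    win = hamming(frame_len)
    peaks = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = [signal[start + i] * win[i] for i in range(frame_len)]
        mags = dft_magnitudes(frame)
        k = max(range(1, len(mags)), key=mags.__getitem__)  # skip the DC bin
        peaks.append(k * FS / frame_len)
    return peaks

# Synthetic heartbeat-like tone: 2 Hz fundamental (120 beats per minute).
signal = [math.sin(2 * math.pi * 2.0 * i / FS) for i in range(600)]
peaks = stft_peak_freqs(signal)
# Every frame's spectral peak sits at the 2 Hz "heart rate".
```

A real pipeline would precede this with the record's 0.01–110 Hz bandpass filter and use an FFT rather than this O(n²) DFT.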

  19. Global cue inconsistency diminishes learning of cue validity

    Directory of Open Access Journals (Sweden)

    Tony Wang

    2016-11-01

    We present a novel two-stage probabilistic learning task that examines the participants’ ability to learn and utilize valid cues across several levels of probabilistic feedback. In the first stage, participants sample from one of three cues that gives predictive information about the outcome of the second stage. Participants are rewarded for correct prediction of the outcome in stage two. Only one of the three cues gives valid predictive information, and thus participants can maximise their reward by learning to sample from the valid cue. The validity of this predictive information, however, is reinforced across several levels of probabilistic feedback. A second manipulation involved changing the consistency between the predictive information in stage one and the outcome in stage two. The results show that participants, with higher probabilistic feedback, learned to utilise the valid cue. In inconsistent task conditions, however, participants were significantly less successful in utilising higher validity cues. We interpret this result as implying that learning in probabilistic categorization is based on developing a representation of the task that allows for goal-directed action.

  20. Sound of mind : electrophysiological and behavioural evidence for the role of context, variation and informativity in human speech processing

    NARCIS (Netherlands)

    Nixon, Jessie Sophia

    2014-01-01

    Spoken communication involves transmission of a message which takes physical form in acoustic waves. Within any given language, acoustic cues pattern in language-specific ways along language-specific acoustic dimensions to create speech sound contrasts. These cues are utilized by listeners to

  1. Phonetic Category Cues in Adult-Directed Speech: Evidence from Three Languages with Distinct Vowel Characteristics

    Science.gov (United States)

    Pons, Ferran; Biesanz, Jeremy C.; Kajikawa, Sachiyo; Fais, Laurel; Narayan, Chandan R.; Amano, Shigeaki; Werker, Janet F.

    2012-01-01

    Using an artificial language learning manipulation, Maye, Werker, and Gerken (2002) demonstrated that infants' speech sound categories change as a function of the distributional properties of the input. In a recent study, Werker et al. (2007) showed that Infant-directed Speech (IDS) input contains reliable acoustic cues that support distributional…

  2. Gaze Cueing by Pareidolia Faces

    Directory of Open Access Journals (Sweden)

    Kohske Takahashi

    2013-12-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  3. Gaze cueing by pareidolia faces.

    Science.gov (United States)

    Takahashi, Kohske; Watanabe, Katsumi

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cueing effect was comparable between the face-like objects and a cartoon face. However, the cueing effect was eliminated when the observer did not perceive the objects as faces. These results demonstrated that pareidolia faces do more than give the impression of the presence of faces; indeed, they trigger an additional face-specific attentional process.

  4. Evaluation of multimodal ground cues

    DEFF Research Database (Denmark)

    Nordahl, Rolf; Lecuyer, Anatole; Serafin, Stefania

    2012-01-01

    This chapter presents an array of results on the perception of ground surfaces via multiple sensory modalities, with special attention to non-visual perceptual cues, notably those arising from audition and haptics, as well as interactions between them. It also reviews approaches to combining synthetic multimodal cues, from vision, haptics, and audition, in order to realize virtual experiences of walking on simulated ground surfaces or other features.

  5. Visual form Cues, Biological Motions, Auditory Cues, and Even Olfactory Cues Interact to Affect Visual Sex Discriminations

    OpenAIRE

    Rick Van Der Zwan; Anna Brooks; Duncan Blair; Coralia Machatch; Graeme Hacker

    2011-01-01

    Johnson and Tassinary (2005) proposed that visually perceived sex is signalled by structural or form cues. They suggested also that biological motion cues signal sex, but do so indirectly. We previously have shown that auditory cues can mediate visual sex perceptions (van der Zwan et al., 2009). Here we demonstrate that structural cues to body shape are alone sufficient for visual sex discriminations but that biological motion cues alone are not. Interestingly, biological motions can resolve ...

  6. Sound wave transmission (image)

    Science.gov (United States)

    When sound waves reach the ear, they are translated into nerve impulses. These impulses then travel to the brain, where they are interpreted as sound. The hearing mechanisms within the inner ear can ...

  7. Making fictions sound real

    DEFF Research Database (Denmark)

    Langkjær, Birger

    2010-01-01

    This article examines the role that sound plays in making fictions perceptually real to film audiences, whether these fictions are realist or non-realist in content and narrative form. I will argue that some aspects of film sound practices and the kind of experiences they trigger are related to basic rules of human perception, whereas others are more properly explained in relation to how aesthetic devices, including sound, are used to characterise the fiction and thereby make it perceptually real to its audience. Finally, I will argue that not all genres can be defined by a simple taxonomy of sounds. Apart from an account of the kinds of sounds that typically appear in a specific genre, a genre analysis of sound may also benefit from a functionalist approach that focuses on how sounds can make both realist and non-realist aspects of genres sound real to audiences.

  8. Principles of underwater sound

    National Research Council Canada - National Science Library

    Urick, Robert J

    1983-01-01

    ... the immediately useful help they need for sonar problem solving. Its coverage is broad, ranging from the basic concepts of sound in the sea to making performance predictions in such applications as depth sounding, fish finding, and submarine detection...

  9. An Anthropologist of Sound

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2015-01-01

    PROFESSOR PORTRAIT: Sanne Krogh Groth met Holger Schulze, newly appointed professor in Musicology at the Department for Arts and Cultural Studies, University of Copenhagen, for a talk about the anthropology of sound, sound studies, musical canons and ideology.

  10. Broadcast sound technology

    CERN Document Server

    Talbot-Smith, Michael

    1990-01-01

    Broadcast Sound Technology provides an explanation of the underlying principles of modern audio technology. Organized into 21 chapters, the book first describes the basic sound; behavior of sound waves; aspects of hearing, harming, and charming the ear; room acoustics; reverberation; microphones; phantom power; loudspeakers; basic stereo; and monitoring of audio signal. Subsequent chapters explore the processing of audio signal, sockets, sound desks, and digital audio. Analogue and digital tape recording and reproduction, as well as noise reduction, are also explained.

  11. Propagation of sound

    DEFF Research Database (Denmark)

    Wahlberg, Magnus; Larsen, Ole Næsbye

    2017-01-01

    properties can be modified by sound absorption, refraction, and interference from multipaths caused by reflections. The path from the source to the receiver may be bent due to refraction. Besides geometrical attenuation, the ground effect and turbulence are the most important mechanisms influencing communication sounds for airborne acoustics, and bottom and surface effects for underwater sounds. Refraction becomes very important close to shadow zones. For echolocation signals, geometric attenuation and sound absorption have the largest effects on the signals.

  12. The Britannica Guide to Sound and Light

    CERN Document Server

    2010-01-01

    Audio and visual cues facilitate some of our most powerful sensory experiences and embed themselves deeply into our memories and subconscious. Sound and light waves interact with our ears and eyes, our biological interpreters, creating a textural experience and relationship with the world around us. This well-researched volume explores the science behind acoustics and optics and the broad application they have to everything from listening to music and watching television to ultrasonic and laser technologies that are crucial to the medical field.

  13. Compass cues used by a nocturnal bull ant, Myrmecia midas.

    Science.gov (United States)

    Freas, Cody A; Narendra, Ajay; Cheng, Ken

    2017-05-01

    Ants use both terrestrial landmarks and celestial cues to navigate to and from their nest location. These cues persist even as light levels drop during the twilight/night. Here, we determined the compass cues used by a nocturnal bull ant, Myrmecia midas , in which the majority of individuals begin foraging during the evening twilight period. Myrmecia midas foragers with vectors of ≤5   m when displaced to unfamiliar locations did not follow the home vector, but instead showed random heading directions. Foragers with larger home vectors (≥10   m) oriented towards the fictive nest, indicating a possible increase in cue strength with vector length. When the ants were displaced locally to create a conflict between the home direction indicated by the path integrator and terrestrial landmarks, foragers oriented using landmark information exclusively and ignored any accumulated home vector regardless of vector length. When the visual landmarks at the local displacement site were blocked, foragers were unable to orient to the nest direction and their heading directions were randomly distributed. Myrmecia midas ants typically nest at the base of the tree and some individuals forage on the same tree. Foragers collected on the nest tree during evening twilight were unable to orient towards the nest after small lateral displacements away from the nest. This suggests the possibility of high tree fidelity and an inability to extrapolate landmark compass cues from information collected on the tree and at the nest site to close displacement sites. © 2017. Published by The Company of Biologists Ltd.

  14. Sparse representation of Gravitational Sound

    Science.gov (United States)

    Rebollo-Neira, Laura; Plastino, A.

    2018-03-01

    Gravitational Sound clips produced by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and the Massachusetts Institute of Technology (MIT) are considered within the particular context of data reduction. We advance a procedure to this effect and show that these types of signals can be approximated with high quality using significantly fewer elementary components than those required within the standard orthogonal basis framework. Furthermore, a local sparsity measure is shown to render meaningful information about the variation of a signal along time, by generating a set of local sparsity values which is much smaller than the dimension of the signal. This point is further illustrated by recourse to a more complex signal, generated by Milde Science Communication to divulge Gravitational Sound in the form of a ring tone.
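    The kind of sparse approximation the record describes can be illustrated with generic matching pursuit over a cosine dictionary (an assumed setup for illustration, not the authors' procedure): a greedy loop repeatedly subtracts the best-matching atom, so a signal built from two atoms is captured by just two terms.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def make_dictionary(n, n_atoms):
    # Unit-norm cosine atoms of increasing frequency (here the DCT-II basis;
    # practical schemes use larger, redundant dictionaries).
    atoms = []
    for k in range(n_atoms):
        a = [math.cos(math.pi * k * (i + 0.5) / n) for i in range(n)]
        norm = math.sqrt(dot(a, a))
        atoms.append([x / norm for x in a])
    return atoms

def matching_pursuit(signal, atoms, max_terms=5, tol=1e-6):
    """Greedy sparse approximation: repeatedly subtract the best-matching atom."""
    residual = list(signal)
    terms = []  # (atom index, coefficient)
    for _ in range(max_terms):
        idx = max(range(len(atoms)), key=lambda k: abs(dot(residual, atoms[k])))
        coeff = dot(residual, atoms[idx])
        if abs(coeff) < tol:
            break
        terms.append((idx, coeff))
        residual = [r - coeff * a for r, a in zip(residual, atoms[idx])]
    return terms, residual

# A 64-sample signal built from two atoms is recovered with just two terms.
n = 64
atoms = make_dictionary(n, n)
signal = [3.0 * atoms[5][i] + 1.5 * atoms[19][i] for i in range(n)]
terms, residual = matching_pursuit(signal, atoms)
```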

  15. Abnormal sound detection device

    International Nuclear Information System (INIS)

    Yamada, Izumi; Matsui, Yuji.

    1995-01-01

    Only components synchronized with the rotation of pumps are sampled from the detected acoustic signals, to judge the presence or absence of abnormality based on the magnitude of the synchronized components. A synchronized-component sampling means can remove resonance sounds and other acoustic sounds that are not generated synchronously with the rotation, based on the knowledge that the acoustic components generated in a normal state are a sort of resonance sound and are not precisely synchronized with the rotation speed. On the other hand, abnormal sounds of a rotating body are often caused by forces accompanying the rotation, so the abnormal sounds can be detected by extracting only the rotation-synchronized components. Since the normal acoustic components currently being generated are discriminated from the detected sounds, attenuation of the abnormal sounds by the signal processing can be avoided and, as a result, abnormal-sound detection sensitivity can be improved. Further, since the occurrence of an abnormal sound is discriminated from the actually detected sounds, other frequency components which are forecast but not actually generated are not removed, which further improves detection sensitivity. (N.H.)
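
    One generic way to isolate rotation-synchronized components is time-synchronous averaging over whole rotation periods, sketched here on synthetic data (an illustration of the principle only, not the patent's implementation):

```python
import numpy as np

def synchronous_average(signal, samples_per_rev):
    """Average over whole rotation periods: components locked to the rotation
    add coherently, while asynchronous sounds average toward zero."""
    n_revs = len(signal) // samples_per_rev
    frames = signal[:n_revs * samples_per_rev].reshape(n_revs, samples_per_rev)
    return frames.mean(axis=0)

# Toy example: a tone at 4 cycles per revolution buried in white noise.
fs, rpm = 8000, 1200                      # sample rate (Hz), pump speed
spr = fs * 60 // rpm                      # samples per revolution = 400
rng = np.random.default_rng(1)
t = np.arange(200 * spr)                  # 200 revolutions of data
sync = np.sin(2 * np.pi * 4 * t / spr)    # rotation-synchronized component
noise = rng.standard_normal(t.size)       # asynchronous background
avg = synchronous_average(sync + noise, spr)
# `avg` closely matches one clean cycle of the synchronized tone; the noise
# power is reduced by roughly the number of averaged revolutions.
print(np.max(np.abs(avg - np.sin(2 * np.pi * 4 * np.arange(spr) / spr))))
```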

  16. Modelling Hyperboloid Sound Scattering

    DEFF Research Database (Denmark)

    Burry, Jane; Davis, Daniel; Peters, Brady

    2011-01-01

    The Responsive Acoustic Surfaces workshop project described here sought new understandings about the interaction between geometry and sound in the arena of sound scattering. This paper reports on the challenges associated with modelling, simulating, fabricating and measuring this phenomenon using both physical and digital models at three distinct scales. The results suggest hyperboloid geometry, while difficult to fabricate, facilitates sound scattering.

  17. Errors in Sounding of the Atmosphere Using Broadband Emission Radiometry (SABER) Kinetic Temperature Caused by Non-Local Thermodynamic Equilibrium Model Parameters

    Science.gov (United States)

    Garcia-Comas, Maya; Lopez-Puertas, M.; Funke, B.; Bermejo-Pantaleon, D.; Marshall, Benjamin T.; Mertens, Christopher J.; Remsberg, Ellis E.; Mlynczak, Martin G.; Gordley, L. L.; Russell, James M.

    2008-01-01

    The vast set of near-global and continuous atmospheric measurements made by the SABER instrument since 2002, including daytime and nighttime kinetic temperature (T(sub k)) from 20 to 105 km, is available to the scientific community. The temperature is retrieved from SABER measurements of the atmospheric 15 micron CO2 limb emission. This emission departs from local thermodynamic equilibrium (LTE) conditions in the rarefied mesosphere and thermosphere, making it necessary to consider the CO2 vibrational state non-LTE populations in the retrieval algorithm above 70 km. Those populations depend on kinetic parameters describing the rate at which energy exchange between atmospheric molecules takes place, but some of these collisional rates are not well known. We consider current uncertainties in the rates of quenching of CO2(v2) by N2, O2 and O, and the CO2(v2) vibrational-vibrational exchange to estimate their impact on SABER T(sub k) for different atmospheric conditions. The T(sub k) is more sensitive to the uncertainty in the latter two, and their effects depend on altitude. The T(sub k) combined systematic error due to non-LTE kinetic parameters does not exceed +/- 1.5 K below 95 km and +/- 4-5 K at 100 km for most latitudes and seasons (except for polar summer) if the T(sub k) profile does not have pronounced vertical structure. The error is +/- 3 K at 80 km, +/- 6 K at 84 km and +/- 18 K at 100 km under the less favourable polar summer conditions. For strong temperature inversion layers, the errors reach +/- 3 K at 82 km and +/- 8 K at 90 km. This particularly affects tide amplitude estimates, with errors of up to +/- 3 K.

  18. Analysis of environmental sounds

    Science.gov (United States)

    Lee, Keansub

    consumer videos in conjunction with user studies. We model the soundtrack of each video, regardless of its original duration, as a fixed-sized clip-level summary feature. For each concept, an SVM-based classifier is trained according to three distance measures (Kullback-Leibler, Bhattacharyya, and Mahalanobis distance). Detecting the time of occurrence of a local object (for instance, a cheering sound) embedded in a longer soundtrack is useful and important for applications such as search and retrieval in consumer video archives. We finally present a Markov-model-based clustering algorithm able to identify and segment consistent sets of temporal frames into regions associated with different ground-truth labels, and at the same time to exclude a set of uninformative frames shared in common by all clips. The labels are provided at the clip level, so this refinement of the time axis represents a variant of Multiple-Instance Learning (MIL). Quantitative evaluation shows that the performance of our proposed approaches, tested on 60 h of personal audio archives or 1,900 YouTube video clips, is significantly better than existing algorithms for detecting these useful concepts in real-world personal audio recordings.
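
    As a toy illustration of clip-level matching, the Bhattacharyya distance between two diagonal-Gaussian feature summaries can be computed as follows (feature dimensions and values are made up; the thesis's actual features and SVM stage are not reproduced):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, var1, mu2, var2):
    """Bhattacharyya distance between two diagonal-covariance Gaussians,
    one of the clip-to-clip distance measures named in the abstract."""
    v = (var1 + var2) / 2.0
    term_mean = np.sum((mu1 - mu2) ** 2 / v) / 8.0
    term_cov = 0.5 * np.sum(np.log(v / np.sqrt(var1 * var2)))
    return term_mean + term_cov

# Identical summaries give distance 0; separated means give a larger value.
mu = np.zeros(12)            # e.g. 12 MFCC-like feature dimensions
var = np.ones(12)
d_same = bhattacharyya_gaussian(mu, var, mu, var)
d_far = bhattacharyya_gaussian(mu, var, mu + 2.0, var)
print(d_same, d_far)         # 0.0 and 6.0
```

    Such a distance can then feed a kernel for an SVM classifier over clip summaries.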

  19. The influence of ski helmets on sound perception and sound localisation on the ski slope

    Directory of Open Access Journals (Sweden)

    Lana Ružić

    2015-04-01

    Objectives: The aim of the study was to investigate whether a ski helmet interferes with sound localization and the time of sound perception in the frontal plane. Material and Methods: Twenty-three participants (age 30.7±10.2) were tested on the slope in 2 conditions, with and without a ski helmet, using 6 spatially distributed sound stimuli per condition. Each subject had to react as soon as possible upon hearing a sound and to signal the correct side of its arrival. Results: The results showed a significant difference in the ability to localize the specific ski sounds: 72.5±15.6% of correct answers without a helmet vs. 61.3±16.2% with a helmet (p < 0.01). However, performance on this test did not depend on whether the subjects were used to wearing a helmet (p = 0.89). In identifying the moment at which the sound was first perceived, the results were also in favor of the subjects not wearing a helmet. The subjects reported hearing the ski sound cues at 73.4±5.56 m without a helmet vs. 60.29±6.34 m with a helmet (p < 0.001). In that case the results did depend on previous helmet use (p < 0.05), meaning that regular use of helmets might help to diminish the attenuation of sound identification that occurs because of the helmet. Conclusions: Ski helmets might limit the ability of a skier to localize the direction of sounds of danger and might delay the moment at which a sound is first heard.

  20. 78 FR 13869 - Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy...

    Science.gov (United States)

    2013-03-01

    ...-123-LNG; 12-128-NG; 12-148-NG; 12- 158-NG] Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; Puget Sound Energy, Inc.; CE FLNG, LLC; Consolidated...-NG Puget Sound Energy, Inc Order granting long- term authority to import/export natural gas from/to...

  1. The natural horn as an efficient sound radiating system ...

    African Journals Online (AJOL)

    Results obtained showed that the locally made horns are efficient sound-radiating systems and are therefore excellent for sound production in local musical renditions. These findings, in addition to the portability and low cost of the horns, qualify them to be highly recommended for use in music making and for other purposes ...

  2. Classification of Real and Imagined Sounds in Early Visual Cortex

    Directory of Open Access Journals (Sweden)

    Petra Vetter

    2011-10-01

    Early visual cortex has been thought to be mainly involved in the detection of low-level visual features. Here we show that complex natural sounds can be decoded from early visual cortex activity in the absence of visual stimulation, both when sounds are actually presented and when they are merely imagined. Blindfolded subjects listened to three complex natural sounds (bird singing, people talking, traffic noise; Exp. 1) or received word cues (“forest”, “people”, “traffic”; Exp. 2) to imagine the associated scene. fMRI BOLD activation patterns from retinotopically defined early visual areas were fed into a multivariate pattern classification algorithm (a linear support vector machine). Actual sounds were discriminated above chance in V2 and V3, and imagined sounds were decoded in V1. Cross-classification, i.e., training the classifier on real sounds and testing it on imagined sounds and vice versa, was also successful. Two further experiments showed that an orthogonal working memory task does not interfere with sound classification in early visual cortex (Exp. 3), whereas an orthogonal visuo-spatial imagery task does (Exp. 4). These results demonstrate that early visual cortex activity contains content-specific information from hearing and from imagery, challenging the view of a strict modality-specific function of early visual cortex.
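
    The cross-classification logic (train on real-sound patterns, test on imagined ones) can be sketched on synthetic data; a nearest-centroid classifier stands in for the study's linear support vector machine to keep the sketch dependency-free:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean patterns (a dependency-free stand-in for the study's
    linear support vector machine classifier)."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, centroids):
    # Assign each pattern to the class with the nearest centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[np.argmin(d, axis=1)]

# Synthetic "voxel patterns" for 3 sound categories; imagined-sound patterns
# are noisier versions of the real-sound patterns (a shared underlying code).
rng = np.random.default_rng(2)
protos = rng.standard_normal((3, 50))          # 3 categories x 50 voxels
y = np.repeat([0, 1, 2], 40)
X_real = protos[y] + 0.5 * rng.standard_normal((120, 50))
X_imag = protos[y] + 1.0 * rng.standard_normal((120, 50))

# Cross-classification: train on real sounds, test on imagined sounds.
classes, cents = fit_centroids(X_real, y)
acc = (predict(X_imag, classes, cents) == y).mean()
print(acc)   # well above the 1/3 chance level
```

    Above-chance cross-classification only works if real and imagined patterns share structure, which is the point of the manipulation.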

  3. Binaural Processing of Multiple Sound Sources

    Science.gov (United States)

    2016-08-18

    AFRL-AFOSR-VA-TR-2016-0298. Binaural Processing of Multiple Sound Sources. William Yost, Arizona State University, 660 S Mill Ave Ste 312, Tempe, AZ 85281. Report type: final performance report, dated 18-08-2016, covering 15 Jul 2012 to 14 Jul 2016. The three topics cited above are entirely within the scope of the AFOSR grant. Subject terms: binaural hearing, sound localization, interaural signal

  4. Optimal Prediction of Moving Sound Source Direction in the Owl.

    Directory of Open Access Journals (Sweden)

    Weston Cox

    2015-07-01

    Capturing nature's statistical structure in behavioral responses is at the core of the ability to function adaptively in the environment. Bayesian statistical inference describes how sensory and prior information can be combined optimally to guide behavior. An outstanding open question is how neural coding supports Bayesian inference, including how sensory cues are optimally integrated over time. Here we address what neural response properties allow a neural system to perform Bayesian prediction, i.e., predicting where a source will be in the near future given sensory information and prior assumptions. The work here shows that the population vector decoder will perform Bayesian prediction when the receptive fields of the neurons encode the target dynamics with shifting receptive fields. We test the model using the system that underlies sound localization in barn owls. Neurons in the owl's midbrain show shifting receptive fields for moving sources that are consistent with the predictions of the model. We predict that neural populations can be specialized to represent the statistics of dynamic stimuli to allow for a vector read-out of Bayes-optimal predictions.
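
    The population vector read-out used in this line of work can be sketched as follows (hypothetical tuning curves and parameter values, not the owl data):

```python
import numpy as np

def population_vector(preferred_deg, rates):
    """Decode direction as the angle of the rate-weighted sum of unit vectors
    pointing at each neuron's preferred direction."""
    ang = np.deg2rad(preferred_deg)
    return np.rad2deg(np.arctan2(np.sum(rates * np.sin(ang)),
                                 np.sum(rates * np.cos(ang))))

# Hypothetical map: 72 neurons with Gaussian tuning across azimuth.
prefs = np.linspace(-180, 175, 72)

def rates_for(source_deg, width=30.0):
    return np.exp(-0.5 * ((prefs - source_deg) / width) ** 2)

print(population_vector(prefs, rates_for(20.0)))
```

    In the paper's argument, shifting each neuron's receptive field in the direction of target motion biases exactly this read-out toward the source's predicted future position.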

  5. Externalization versus Internalization of Sound in Normal-hearing and Hearing-impaired Listeners

    DEFF Research Database (Denmark)

    Ohl, Björn; Laugesen, Søren; Buchholz, Jörg

    2010-01-01

    The externalization of sound, i.e. the perception of auditory events as being located outside of the head, is a natural phenomenon for normal-hearing listeners when perceiving sound coming from a distant physical sound source. It is potentially useful for hearing in background noise, but the relevant cues might be distorted by a hearing impairment and also by the processing of the incoming sound through hearing aids. In this project, two intuitive tests in natural real-life surroundings were developed, which capture the limits of the perception of externalization. For this purpose...

  6. Gaze Cueing by Pareidolia Faces

    OpenAIRE

    Kohske Takahashi; Katsumi Watanabe

    2013-01-01

    Visual images that are not faces are sometimes perceived as faces (the pareidolia phenomenon). While the pareidolia phenomenon provides people with a strong impression that a face is present, it is unclear how deeply pareidolia faces are processed as faces. In the present study, we examined whether a shift in spatial attention would be produced by gaze cueing of face-like objects. A robust cueing effect was observed when the face-like objects were perceived as faces. The magnitude of the cuei...

  7. Sound a very short introduction

    CERN Document Server

    Goldsmith, Mike

    2015-01-01

    Sound is integral to how we experience the world, in the form of noise as well as music. But what is sound? What is the physical basis of pitch and harmony? And how are sound waves exploited in musical instruments? Sound: A Very Short Introduction looks at the science of sound and the behaviour of sound waves with their different frequencies. It also explores sound in different contexts, covering the audible and inaudible, sound underground and underwater, acoustic and electronic sound, and hearing in humans and animals. It concludes with the problem of sound out of place—noise and its reduction.

  8. Developmental change in children's sensitivity to sound symbolism.

    Science.gov (United States)

    Tzeng, Christina Y; Nygaard, Lynne C; Namy, Laura L

    2017-08-01

    The current study examined developmental change in children's sensitivity to sound symbolism. Three-, five-, and seven-year-old children heard sound symbolic novel words and foreign words meaning round and pointy and chose which of two pictures (one round and one pointy) best corresponded to each word they heard. Task performance varied as a function of both word type and age group such that accuracy was greater for novel words than for foreign words, and task performance increased with age for both word types. For novel words, children in all age groups reliably chose the correct corresponding picture. For foreign words, 3-year-olds showed chance performance, whereas 5- and 7-year-olds showed reliably above-chance performance. Results suggest increased sensitivity to sound symbolic cues with development and imply that although sensitivity to sound symbolism may be available early and facilitate children's word-referent mappings, sensitivity to subtler sound symbolic cues requires greater language experience. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. The contribution of dynamic visual cues to audiovisual speech perception.

    Science.gov (United States)

    Jaekl, Philip; Pesquita, Ana; Alsius, Agnes; Munhall, Kevin; Soto-Faraco, Salvador

    2015-08-01

    Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or with added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to an auditory-only condition, demonstrating, for the first time, the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly towards the perception of articulatory gestures and the analysis of audiovisual speech. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Sound Insulation between Dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2011-01-01

    Regulatory sound insulation requirements for dwellings exist in more than 30 countries in Europe. In some countries, requirements have existed since the 1950s. Findings from comparative studies show that sound insulation descriptors and requirements represent a high degree of diversity … and initiate – where needed – improvement of sound insulation of new and existing dwellings in Europe to the benefit of the inhabitants and the society. A European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions" has been established and runs 2009-2013. The main objectives of TU0901 are to prepare proposals for harmonized sound insulation descriptors and for a European sound classification scheme with a number of quality classes for dwellings. Findings from the studies provide input for the discussions in COST TU0901. Data collected from 24...

  11. The velocity of sound

    International Nuclear Information System (INIS)

    Beyer, R.T.

    1985-01-01

    The paper reviews the work carried out on the velocity of sound in liquid alkali metals. The experimental methods to determine the velocity measurements are described. Tables are presented of reported data on the velocity of sound in lithium, sodium, potassium, rubidium and caesium. A formula is given for alkali metals, in which the sound velocity is a function of shear viscosity, atomic mass and atomic volume. (U.K.)

  12. Michael Jackson's Sound Stages

    OpenAIRE

    Morten Michelsen

    2012-01-01

    In order to discuss analytically spatial aspects of recorded sound William Moylan’s concept of ‘sound stage’ is developed within a musicological framework as part of a sound paradigm which includes timbre, texture and sound stage. Two Michael Jackson songs (‘The Lady in My Life’ from 1982 and ‘Scream’ from 1995) are used to: a) demonstrate the value of such a conceptualisation, and b) demonstrate that the model has its limits, as record producers in the 1990s began ignoring the conventions of...

  13. What is Sound?

    OpenAIRE

    Nelson, Peter

    2014-01-01

    What is sound? This question is posed in contradiction to the every-day understanding that sound is a phenomenon apart from us, to be heard, made, shaped and organised. Thinking through the history of computer music, and considering the current configuration of digital communications, sound is reconfigured as a type of network. This network is envisaged as non-hierarchical, in keeping with currents of thought that refuse to prioritise the human in the world. The relationship of sound to musi...

  14. Light and Sound

    CERN Document Server

    Karam, P Andrew

    2010-01-01

    Our world is largely defined by what we see and hear-but our uses for light and sound go far beyond simply seeing a photo or hearing a song. Lasers, concentrated beams of light, are powerful tools used in industry, research, and medicine, as well as in everyday electronics like DVD and CD players. Ultrasound, sound emitted at a high frequency, helps create images of a developing baby, cleans teeth, and much more. Light and Sound teaches how light and sound work, how they are used in our day-to-day lives, and how they can be used to learn about the universe at large.

  15. Early Sound Symbolism for Vowel Sounds

    Directory of Open Access Journals (Sweden)

    Ferrinne Spector

    2013-06-01

    Children and adults consistently match some words (e.g., kiki) to jagged shapes and other words (e.g., bouba) to rounded shapes, providing evidence for non-arbitrary sound–shape mapping. In this study, we investigated the influence of vowels on sound–shape matching in toddlers, using four contrasting pairs of nonsense words differing in vowel sound (/i/ as in feet vs. /o/ as in boat) and four rounded–jagged shape pairs. Crucially, we used reduplicated syllables (e.g., kiki vs. koko) rather than confounding vowel sound with consonant context and syllable variability (e.g., kiki vs. bouba). Toddlers consistently matched words with /o/ to rounded shapes and words with /i/ to jagged shapes (p < 0.01). The results suggest that there may be naturally biased correspondences between vowel sound and shape.

  16. 3-D Sound for Virtual Reality and Multimedia

    Science.gov (United States)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  17. Great cormorants (Phalacrocorax carbo) can detect auditory cues while diving

    Science.gov (United States)

    Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula; Larsen, Ole Næsbye; Wahlberg, Magnus

    2017-06-01

    In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater than expected, and the hearing thresholds are comparable to seals and toothed whales in the frequency band 1-4 kHz. This opens up the possibility of cormorants and other aquatic birds having special adaptations for underwater hearing and making use of underwater acoustic cues from, e.g., conspecifics, their surroundings, as well as prey and predators.

  18. Distributed acoustic cues for caller identity in macaque vocalization.

    Science.gov (United States)

    Fukushima, Makoto; Doyle, Alex M; Mullarkey, Matthew P; Mishkin, Mortimer; Averbeck, Bruno B

    2015-12-01

    Individual primates can be identified by the sound of their voice. Macaques have demonstrated an ability to discern conspecific identity from a harmonically structured 'coo' call. Voice recognition presumably requires the integrated perception of multiple acoustic features. However, it is unclear how this is achieved, given considerable variability across utterances. Specifically, the extent to which information about caller identity is distributed across multiple features remains elusive. We examined these issues by recording and analysing a large sample of calls from eight macaques. Single acoustic features, including fundamental frequency, duration and Wiener entropy, were informative but unreliable for the statistical classification of caller identity. A combination of multiple features, however, allowed for highly accurate caller identification. A regularized classifier that learned to identify callers from the modulation power spectrum of calls found that specific regions of spectral-temporal modulation were informative for caller identification. These ranges are related to acoustic features such as the call's fundamental frequency and FM sweep direction. We further found that the low-frequency spectrotemporal modulation component contained an indexical cue to the caller's body size. Thus, cues for caller identity are distributed across identifiable spectrotemporal components corresponding to laryngeal and supralaryngeal components of vocalizations, and the integration of those cues can enable highly reliable caller identification. Our results demonstrate a clear acoustic basis by which individual macaque vocalizations can be recognized.

  19. The effects of intervening interference on working memory for sound location as a function of inter-comparison interval.

    Science.gov (United States)

    Ries, Dennis T; Hamilton, Traci R; Grossmann, Aurora J

    2010-09-01

    This study examined the effects of inter-comparison interval duration and intervening interference on auditory working memory (AWM) for auditory location. Interaural phase differences were used to produce localization cues for tonal stimuli, and the difference limen for interaural phase difference (DL-IPD), specified as the equivalent angle of incidence between two sound sources, was measured in five different conditions. These conditions consisted of three different inter-comparison intervals [300 ms (short), 5000 ms (medium), and 15,000 ms (long)], the medium and long of which were presented both in the presence and absence of intervening tones. The presence of intervening stimuli within the medium and long inter-comparison intervals produced a significant increase in the DL-IPD compared to the same intervals without intervening tones. The result obtained in the condition with a short inter-comparison interval was roughly equivalent to that obtained for the medium inter-comparison interval without intervening tones. These results suggest that the ability to retain information about the location of a sound within AWM decays slowly; however, the presence of intervening sounds readily disrupts the retention process. Overall, the results suggest that the temporal decay of information within AWM regarding the location of a sound from a listener's environment is so gradual that it can be maintained in trace memory for tens of seconds in the absence of intervening acoustic signals. Conversely, the presence of intervening sounds within the retention interval may facilitate the use of context memory, even for shorter retention intervals, resulting in a less detailed, but relevant representation of the location that is resistant to further degradation. Copyright (c) 2010 Elsevier B.V. All rights reserved.
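
    The conversion of an interaural phase difference into an equivalent free-field angle of incidence can be sketched with the textbook plane-wave relations ITD = (d/c)·sin θ and IPD = 2πf·ITD (the head-width and frequency values below are illustrative, not those of the study):

```python
import numpy as np

def ipd_to_angle_deg(ipd_rad, freq_hz, head_width_m=0.18, c=343.0):
    """Equivalent free-field angle of incidence for a given interaural phase
    difference, via ITD = (d/c)*sin(theta) and IPD = 2*pi*f*ITD."""
    itd = ipd_rad / (2 * np.pi * freq_hz)      # phase -> time difference (s)
    s = itd * c / head_width_m                 # sin(theta)
    return np.degrees(np.arcsin(np.clip(s, -1.0, 1.0)))

# A 45-degree (pi/4) interaural phase lead at 500 Hz maps to roughly a
# 28-degree equivalent angle for an 18-cm head width.
print(ipd_to_angle_deg(np.pi / 4, 500.0))
```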

  20. The Influence of Cue Reliability and Cue Representation on Spatial Reorientation in Young Children

    Science.gov (United States)

    Lyons, Ian M.; Huttenlocher, Janellen; Ratliff, Kristin R.

    2014-01-01

    Previous studies of children's reorientation have focused on cue representation (e.g., whether cues are geometric) as a predictor of performance but have not addressed cue reliability (the regularity of the relation between a given cue and an outcome) as a predictor of performance. Here we address both factors within the same series of…

  1. InfoSound

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.; Gopinath, B.; Haberman, Gary O.

    1990-01-01

    The authors explore ways to enhance users' comprehension of complex applications using music and sound effects to present application-program events that are difficult to detect visually. A prototype system, Infosound, allows developers to create and store musical sequences and sound effects with...

  2. Breaking the Sound Barrier

    Science.gov (United States)

    Brown, Tom; Boehringer, Kim

    2007-01-01

    Students in a fourth-grade class participated in a series of dynamic sound learning centers followed by a dramatic capstone event--an exploration of the amazing Trashcan Whoosh Waves. It's a notoriously difficult subject to teach, but this hands-on, exploratory approach ignited student interest in sound, promoted language acquisition, and built…

  3. Sound propagation in cities

    NARCIS (Netherlands)

    Salomons, E.; Polinder, H.; Lohman, W.; Zhou, H.; Borst, H.

    2009-01-01

    A new engineering model for sound propagation in cities is presented. The model is based on numerical and experimental studies of sound propagation between street canyons. Multiple reflections in the source canyon and the receiver canyon are taken into account in an efficient way, while weak

  4. OMNIDIRECTIONAL SOUND SOURCE

    DEFF Research Database (Denmark)

    1996-01-01

    A sound source comprising a loudspeaker (6) and a hollow coupler (4) with an open inlet which communicates with and is closed by the loudspeaker (6) and an open outlet, said coupler (4) comprising rigid walls which cannot respond to the sound pressures produced by the loudspeaker (6). According...

  5. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  6. Poetry Pages. Sound Effects.

    Science.gov (United States)

    Fina, Allan de

    1992-01-01

    Explains how elementary teachers can help students understand onomatopoeia, suggesting that they define onomatopoeia, share examples of it, read poems and have students discuss onomatopoeic words, act out common household sounds, write about sound effects, and create choral readings of onomatopoeic poems. Two appropriate poems are included. (SM)

  7. Exploring Noise: Sound Pollution.

    Science.gov (United States)

    Rillo, Thomas J.

    1979-01-01

    Part one of a three-part series about noise pollution and its effects on humans. This section presents the background information for teachers who are preparing a unit on sound. The next issues will offer learning activities for measuring the effects of sound and some references. (SA)

  8. Waveform analysis of sound

    CERN Document Server

    Tohyama, Mikio

    2015-01-01

    What is this sound? What does that sound indicate? These are two questions frequently heard in daily conversation. Sound results from the vibrations of elastic media and in daily life provides informative signals of events happening in the surrounding environment. In interpreting auditory sensations, the human ear seems particularly good at extracting the signal signatures from sound waves. Although exploring auditory processing schemes may be beyond our capabilities, source signature analysis is a very attractive area in which signal-processing schemes can be developed using mathematical expressions. This book is inspired by such processing schemes and is oriented to signature analysis of waveforms. Most of the examples in the book are taken from data of sound and vibrations; however, the methods and theories are mostly formulated using mathematical expressions rather than by acoustical interpretation. This book might therefore be attractive and informative for scientists, engineers, researchers, and graduat...

  9. Sound classification of dwellings

    DEFF Research Database (Denmark)

    Rasmussen, Birgit

    2012-01-01

    National schemes for sound classification of dwellings exist in more than ten countries in Europe, typically published as national standards. The schemes define quality classes reflecting different levels of acoustical comfort. Main criteria concern airborne and impact sound insulation between...... dwellings, facade sound insulation and installation noise. The schemes have been developed, implemented and revised gradually since the early 1990s. However, due to lack of coordination between countries, there are significant discrepancies, and new standards and revisions continue to increase the diversity...... is needed, and a European COST Action TU0901 "Integrating and Harmonizing Sound Insulation Aspects in Sustainable Urban Housing Constructions", has been established and runs 2009-2013, one of the main objectives being to prepare a proposal for a European sound classification scheme with a number of quality...

  10. Auditory Verbal Cues Alter the Perceived Flavor of Beverages and Ease of Swallowing: A Psychometric and Electrophysiological Analysis

    Directory of Open Access Journals (Sweden)

    Aya Nakamura

    2013-01-01

    Full Text Available We investigated the possible effects of auditory verbal cues on flavor perception and swallow physiology in younger and elderly participants. Apple juice, aojiru (grass juice), and water were ingested with or without auditory verbal cues. Flavor perception and ease of swallowing were measured using a visual analog scale, and swallow physiology by surface electromyography and cervical auscultation. The auditory verbal cues had significant positive effects on flavor and ease of swallowing as well as on swallow physiology. The taste score and the ease of swallowing score significantly increased when the participant’s anticipation was primed by accurate auditory verbal cues. There was no significant effect of auditory verbal cues on distaste score. Regardless of age, the maximum suprahyoid muscle activity significantly decreased when a beverage was ingested without auditory verbal cues. The interval between the onset of swallowing sounds and the peak timing point of the infrahyoid muscle activity significantly shortened when the anticipation induced by the cue was contradicted in the elderly participant group. These results suggest that auditory verbal cues can improve the perceived flavor of beverages and swallow physiology.

  11. Behavioural Response Thresholds in New Zealand Crab Megalopae to Ambient Underwater Sound

    Science.gov (United States)

    Stanley, Jenni A.; Radford, Craig A.; Jeffs, Andrew G.

    2011-01-01

    A small number of studies have demonstrated that settlement stage decapod crustaceans are able to detect and exhibit swimming, settlement and metamorphosis responses to ambient underwater sound emanating from coastal reefs. However, the intensity of the acoustic cue required to initiate the settlement and metamorphosis response, and therefore the potential range over which this acoustic cue may operate, is not known. The current study determined the behavioural response thresholds of four species of New Zealand brachyuran crab megalopae by exposing them to different intensity levels of broadcast reef sound recorded from their preferred settlement habitat and from an unfavourable settlement habitat. Megalopae of the rocky-reef crab, Leptograpsus variegatus, exhibited the lowest behavioural response threshold (highest sensitivity), with a significant reduction in time to metamorphosis (TTM) when exposed to underwater reef sound with an intensity of 90 dB re 1 µPa and greater (100, 126 and 135 dB re 1 µPa). Megalopae of the mud crab, Austrohelice crassa, which settle in soft sediment habitats, exhibited no response to any of the underwater reef sound levels. All reef associated species exposed to sound levels from an unfavourable settlement habitat showed no significant change in TTM, even at intensities that were similar to their preferred reef sound for which reductions in TTM were observed. These results indicated that megalopae were able to discern and respond selectively to habitat-specific acoustic cues. The settlement and metamorphosis behavioural response thresholds to levels of underwater reef sound determined in the current study for four species of crabs enable preliminary estimation of the spatial range at which an acoustic settlement cue may be operating, from 5 m to 40 km depending on the species. Overall, these results indicate that underwater sound is likely to play a major role in influencing the spatial patterns of settlement of coastal crab species.
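The range estimates quoted above can be approximated by asking at what distance a reef's source level decays to a species' behavioural threshold. A minimal sketch, assuming simple spherical spreading (transmission loss of 20·log10(r/r0)) and an illustrative source level; real shallow-water propagation is considerably more complex than this model:

```python
def max_response_range(source_level_db, threshold_db, r0=1.0):
    """Range (m) at which the received level falls to the behavioural threshold,
    assuming spherical spreading: RL = SL - 20*log10(r / r0)."""
    return r0 * 10 ** ((source_level_db - threshold_db) / 20.0)

# Illustrative numbers: an assumed reef source level of 126 dB re 1 uPa at 1 m,
# against the 90 dB re 1 uPa threshold reported for L. variegatus.
r = max_response_range(126.0, 90.0)  # roughly 63 m under this model
```

Cylindrical spreading (10·log10) or measured site-specific attenuation would give very different ranges, which is one reason the paper's estimates span 5 m to 40 km.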

  12. Enhanced Excitatory Connectivity and Disturbed Sound Processing in the Auditory Brainstem of Fragile X Mice.

    Science.gov (United States)

    Garcia-Pino, Elisabet; Gessele, Nikodemus; Koch, Ursula

    2017-08-02

    interactions, contributing to their isolation. Here, a mouse model of FXS was used to investigate the auditory brainstem where basic sound information is first processed. Loss of the Fragile X mental retardation protein leads to excessive excitatory compared with inhibitory inputs in neurons extracting information about sound levels. Functionally, this elevated excitation results in increased firing rates, and abnormal coding of frequency and binaural sound localization cues. Imbalanced early-stage sound level processing could partially explain the auditory processing deficits in FXS. Copyright © 2017 the authors 0270-6474/17/377403-17$15.00/0.

  13. Visual cues for data mining

    Science.gov (United States)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.

  14. The sound manifesto

    Science.gov (United States)

    O'Donnell, Michael J.; Bisnovatyi, Ilia

    2000-11-01

    Computing practice today depends on visual output to drive almost all user interaction. Other senses, such as audition, may be totally neglected, or used tangentially, or used in highly restricted specialized ways. We have excellent audio rendering through D-A conversion, but we lack rich general facilities for modeling and manipulating sound comparable in quality and flexibility to graphics. We need coordinated research in several disciplines to improve the use of sound as an interactive information channel. Incremental and separate improvements in synthesis, analysis, speech processing, audiology, acoustics, music, etc. will not alone produce the radical progress that we seek in sonic practice. We also need to create a new central topic of study in digital audio research. The new topic will assimilate the contributions of different disciplines on a common foundation. The key central concept that we lack is sound as a general-purpose information channel. We must investigate the structure of this information channel, which is driven by the cooperative development of auditory perception and physical sound production. Particular audible encodings, such as speech and music, illuminate sonic information by example, but they are no more sufficient for a characterization than typography is sufficient for characterization of visual information. To develop this new conceptual topic of sonic information structure, we need to integrate insights from a number of different disciplines that deal with sound. In particular, we need to coordinate central and foundational studies of the representational models of sound with specific applications that illuminate the good and bad qualities of these models. Each natural or artificial process that generates informative sound, and each perceptual mechanism that derives information from sound, will teach us something about the right structure to attribute to the sound itself. The new Sound topic will combine the work of computer

  15. Eliciting nicotine craving with virtual smoking cues.

    Science.gov (United States)

    Gamito, Pedro; Oliveira, Jorge; Baptista, André; Morais, Diogo; Lopes, Paulo; Rosa, Pedro; Santos, Nuno; Brito, Rodrigo

    2014-08-01

    Craving is a strong desire to consume that emerges in every case of substance addiction. Previous studies have shown that eliciting craving with an exposure cues protocol can be a useful option for the treatment of nicotine dependence. Thus, the main goal of this study was to develop a virtual platform in order to induce craving in smokers. Fifty-five undergraduate students were randomly assigned to two different virtual environments: high arousal contextual cues and low arousal contextual cues scenarios (17 smokers with low nicotine dependency were excluded). An eye-tracker system was used to evaluate attention toward these cues. Eye fixation on smoking-related cues differed between smokers and nonsmokers, indicating that smokers focused more often on smoking-related cues than nonsmokers. Self-reports of craving are in agreement with these results and suggest a significant increase in craving after exposure to smoking cues. In sum, these data support the use of virtual environments for eliciting craving.

  16. Digitizing a sound archive

    DEFF Research Database (Denmark)

    Cone, Louise

    2017-01-01

    Danish and international artists. His methodology left us with a large collection of unique and inspirational time-based media sound artworks that have, until very recently, been inaccessible. Existing on an array of different media formats, such as open reel tapes, 8-track and 4 track cassettes, VHS......In 1990 an artist by the name of William Louis Sørensen was hired by the National Gallery of Denmark to collect important works of art – made from sound. His job was to acquire sound art, but also recordings that captured rare artistic occurrences, music, performances and happenings from both...

  17. The Effect of Tactile Cues on Auditory Stream Segregation Ability of Musicians and Nonmusicians

    DEFF Research Database (Denmark)

    Slater, Kyle D.; Marozeau, Jeremy

    2016-01-01

    Difficulty perceiving music is often cited as one of the main problems facing hearing-impaired listeners. It has been suggested that musical enjoyment could be enhanced if sound information absent due to impairment is transmitted via other sensory modalities such as vision or touch. In this study...... the random melody. Tactile cues were applied to the listener’s fingers on half of the blocks. Results showed that tactile cues can significantly improve the melodic segregation ability in both musician and nonmusician groups in challenging listening conditions. Overall, the musician group performance...... was always better; however, the magnitude of improvement with the introduction of tactile cues was similar in both groups. This study suggests that hearing-impaired listeners could potentially benefit from a system transmitting such information via a tactile modality...

  18. Estimating the relative weights of visual and auditory tau versus heuristic-based cues for time-to-contact judgments in realistic, familiar scenes by older and younger adults.

    Science.gov (United States)

    Keshavarz, Behrang; Campos, Jennifer L; DeLucia, Patricia R; Oberfeld, Daniel

    2017-04-01

    Estimating time to contact (TTC) involves multiple sensory systems, including vision and audition. Previous findings suggested that the ratio of an object's instantaneous optical size/sound intensity to its instantaneous rate of change in optical size/sound intensity (τ) drives TTC judgments. Other evidence has shown that heuristic-based cues are used, including final optical size or final sound pressure level. Most previous studies have used decontextualized and unfamiliar stimuli (e.g., geometric shapes on a blank background). Here we evaluated TTC estimates by using a traffic scene with an approaching vehicle to evaluate the weights of visual and auditory TTC cues under more realistic conditions. Younger (18-39 years) and older (65+ years) participants made TTC estimates in three sensory conditions: visual-only, auditory-only, and audio-visual. Stimuli were presented within an immersive virtual-reality environment, and cue weights were calculated for both visual cues (e.g., visual τ, final optical size) and auditory cues (e.g., auditory τ, final sound pressure level). The results demonstrated the use of visual τ as well as heuristic cues in the visual-only condition. TTC estimates in the auditory-only condition, however, were primarily based on an auditory heuristic cue (final sound pressure level), rather than on auditory τ. In the audio-visual condition, the visual cues dominated overall, with the highest weight being assigned to visual τ by younger adults, and a more equal weighting of visual τ and heuristic cues in older adults. Overall, better characterizing the effects of combined sensory inputs, stimulus characteristics, and age on the cues used to estimate TTC will provide important insights into how these factors may affect everyday behavior.
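The optical variable τ described above is the ratio of an object's instantaneous optical size to its rate of change, and for a constant-velocity approach it equals the true remaining time to contact. A minimal sketch; the object width, distance, and speed are illustrative values, not taken from the study:

```python
def tau_estimate(size, size_rate):
    """First-order time-to-contact estimate: optical size / its rate of change."""
    return size / size_rate

# Constant-velocity approach: distance d(t) = d0 - v*t; optical size theta ~ w / d(t).
w, d0, v = 2.0, 50.0, 10.0   # object width (m), initial distance (m), speed (m/s)
t, dt = 1.0, 1e-4

def theta(time):
    return w / (d0 - v * time)

theta_rate = (theta(t + dt) - theta(t - dt)) / (2 * dt)  # numerical derivative
ttc = tau_estimate(theta(t), theta_rate)                 # equals (d0 - v*t)/v = 4.0 s here
```

The same ratio applied to sound intensity gives the auditory τ that, per the results above, observers largely ignored in favor of the final-sound-pressure-level heuristic.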

  19. Sounds of Web Advertising

    DEFF Research Database (Denmark)

    Jessen, Iben Bredahl; Graakjær, Nicolai Jørgensgaard

    2010-01-01

    Sound seems to be a neglected issue in the study of web ads. Web advertising is predominantly regarded as visual phenomena–commercial messages, as for instance banner ads that we watch, read, and eventually click on–but only rarely as something that we listen to. The present chapter presents...... an overview of the auditory dimensions in web advertising: Which kinds of sounds do we hear in web ads? What are the conditions and functions of sound in web ads? Moreover, the chapter proposes a theoretical framework in order to analyse the communicative functions of sound in web advertising. The main...... argument is that an understanding of the auditory dimensions in web advertising must include a reflection on the hypertextual settings of the web ad as well as a perspective on how users engage with web content....

  20. Sound Art Situations

    DEFF Research Database (Denmark)

    Krogh Groth, Sanne; Samson, Kristine

    2017-01-01

    and combine theories from several fields. Aspects of sound art studies, performance studies and contemporary art studies are presented in order to theoretically explore the very diverse dimensions of the two sound art pieces: Visual, auditory, performative, social, spatial and durational dimensions become......This article is an analysis of two sound art performances that took place June 2015 in outdoor public spaces in the social housing area Urbanplanen in Copenhagen, Denmark. The two performances were On the production of a poor acoustics by Brandon LaBelle and Green Interactive Biofeedback...... Environments (GIBE) by Jeremy Woodruff. In order to investigate the complex situation that arises when sound art is staged in such contexts, the authors of this article suggest exploring the events through approaching them as ‘situations’ (Doherty 2009). With this approach it becomes possible to engage...

  1. Sound as Popular Culture

    DEFF Research Database (Denmark)

    The wide-ranging texts in this book take as their premise the idea that sound is a subject through which popular culture can be analyzed in an innovative way. From an infant’s gurgles over a baby monitor to the roar of the crowd in a stadium to the sub-bass frequencies produced by sound systems...... in the disco era, sound—not necessarily aestheticized as music—is inextricably part of the many domains of popular culture. Expanding the view taken by many scholars of cultural studies, the contributors consider cultural practices concerning sound not merely as semiotic or signifying processes but as material......, physical, perceptual, and sensory processes that integrate a multitude of cultural traditions and forms of knowledge. The chapters discuss conceptual issues as well as terminologies and research methods; analyze historical and contemporary case studies of listening in various sound cultures; and consider...

  2. It sounds good!

    CERN Multimedia

    CERN Bulletin

    2010-01-01

    Both the atmosphere and we ourselves are hit by hundreds of particles every second and yet nobody has ever heard a sound coming from these processes. Like cosmic rays, particles interacting inside the detectors at the LHC do not make any noise…unless you've decided to use the ‘sonification’ technique, in which case you might even hear the Higgs boson sound like music. Screenshot of the first page of the "LHC sound" site. A group of particle physicists, composers, software developers and artists recently got involved in the ‘LHC sound’ project to make the particles at the LHC produce music. Yes…music! The ‘sonification’ technique converts data into sound. “In this way, if you implement the right software you can get really nice music out of the particle tracks”, says Lily Asquith, a member of the ATLAS collaboration and one of the initiators of the project. The ‘LHC...

  3. Sound Visualization and Holography

    Science.gov (United States)

    Kock, Winston E.

    1975-01-01

    Describes liquid surface holograms including their application to medicine. Discusses interference and diffraction phenomena using sound wave scanning techniques. Compares focussing by zone plate to holographic image development. (GH)

  4. March 1964 Prince William Sound, USA Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Prince William Sound magnitude 9.2 Mw earthquake on March 28, 1964 at 03:36 GMT (March 27 at 5:36 pm local time), was the largest U.S. earthquake ever recorded...

  5. Sound Stuff? Naïve materialism in middle-school students' conceptions of sound

    Science.gov (United States)

    Eshach, Haim; Schwartz, Judah L.

    2006-06-01

    Few studies have dealt with students’ preconceptions of sounds. The current research employs Reiner et al.’s (2000) substance schema to reveal new insights about students’ difficulties in understanding this fundamental topic. It aims not only to detect whether the substance schema is present in middle school students’ thinking, but also to examine how students use the schema’s properties. It asks, moreover, whether the substance schema properties are used as islands of local consistency or whether one can identify more global coherent consistencies among the properties that the students use to explain the sound phenomena. In-depth standardized open-ended interviews were conducted with ten middle school students. Consistent with the substance schema, sound was perceived by our participants as being pushable, frictional, containable, or transitional. However, sound was also viewed as a substance different from the ordinary with respect to its stability, corpuscular nature, additive properties, and inertial characteristics. In other words, students’ conceptions of sound do not seem to fit Reiner et al.’s schema in all respects. Our results also indicate that students’ conceptualization of sound lacks internal consistency. Analyzing our results with respect to local and global coherence, we found that students’ conception of sound is close to diSessa’s “loosely connected, fragmented collection of ideas.” The notion that sound is perceived only as a “sort of a material,” we believe, requires some revision of the substance schema as it applies to sound. The article closes with a discussion concerning the implications of the results for instruction.

  6. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
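The two cues studied, overall volume attenuation and high-frequency loss, are commonly rendered in sound design as a distance-dependent gain plus a low-pass filter. A minimal sketch under assumptions of our own (1/d gain, a heuristic cutoff law, and a one-pole filter); the article does not specify its processing chain:

```python
import math

def apply_depth_cues(samples, distance, fs=44100, ref=1.0):
    """Crude auditory depth cues: 1/d gain attenuation plus a one-pole low-pass
    whose cutoff falls with distance (a proxy for high-frequency loss).
    The parameter choices are illustrative, not from the article."""
    gain = ref / max(distance, ref)
    cutoff = 16000.0 / max(distance / ref, 1.0)    # Hz, heuristic cutoff law
    alpha = math.exp(-2 * math.pi * cutoff / fs)   # one-pole filter coefficient
    out, y = [], 0.0
    for x in samples:
        y = (1 - alpha) * gain * x + alpha * y
        out.append(y)
    return out

far_signal = apply_depth_cues([1.0] * 5000, distance=2.0)  # settles near gain 0.5
```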

  7. The upper lithostratigraphic unit of ANDRILL AND-2A core (Southern McMurdo Sound, Antarctica): local Pleistocene volcanic sources, paleoenvironmental implications and subsidence in the southern Victoria Land Basin

    Science.gov (United States)

    Del Carlo, P.; Panter, K. S.; Bassett, K. N.; Bracciali, L.; di Vincenzo, G.; Rocchi, S.

    2009-12-01

    We report results from the study of the uppermost 37 meters of the Southern McMurdo Sound (SMS) AND-2A drillcore, corresponding to the lithostratigraphic unit 1 (LSU 1), the most volcanogenic unit within the core. Nearly all of LSU 1 consists of volcanic breccia and sandstone that is a mixture of near primary volcanic material dominated by lava and vitric clasts with minor exotic material derived from distal basement sources. Lava clasts and glass are mafic and range from strongly alkaline (basanite, tephrite) to moderately alkaline (alkali basalt, hawaiite) compositions that are similar to nearby land deposits. 40Ar-39Ar laser step-heating analyses on groundmass separated from lava clasts yield Pleistocene ages (692±38 and 793±63, ±2σ internal errors). Volcanoes of the Dailey Island group, located ~13 km SW of the drillsite, are a possible source for the volcanic materials based on their close proximity, similar composition and age. A basanite lava flow on Juergens Island yields a comparable Pleistocene age of 775±22 ka. Yet there is evidence to suggest that the volcanic source is much closer to the drillsite and that the sediments were deposited in much shallower water relative to the present-day water depth of 384 mbsl. Evidence for local volcanic activity is based in part on the common occurrence of delicate vitriclasts (e.g. glass shards and Pele’s hair) and a minimally reworked ~2 meter thick monomict breccia that is interpreted to have formed by autobrecciating lava. In addition, conical-shaped seamounts and high frequency magnetic anomalies encompass the drillsite and extend south including the volcanoes of the Dailey Islands. Sedimentary features and structures indicate shallow water sedimentation for the whole of LSU 1. Rippled asymmetric cross-laminated sands and hummocky cross-stratification occur intermittently throughout LSU 1 and indicate water depths shallower than 100 meters. The occurrence of ooliths and layers containing siderite and Fe

  8. Retrieval-induced forgetting and interference between cues: Training a cue-outcome association attenuates retrieval by alternative cues

    OpenAIRE

    Ortega-Castro, Nerea; Vadillo Nistal, Miguel

    2013-01-01

    Some researchers have attempted to determine whether situations in which a single cue is paired with several outcomes (A-B, A-C interference or interference between outcomes) involve the same learning and retrieval mechanisms as situations in which several cues are paired with a single outcome (A-B, C-B interference or interference between cues). Interestingly, current research on a related effect, which is known as retrieval-induced forgetting, can illuminate this debate. Most retrieval-indu...

  9. Dementias show differential physiological responses to salient sounds.

    Science.gov (United States)

    Fletcher, Phillip D; Nicholas, Jennifer M; Shakespeare, Timothy J; Downey, Laura E; Golden, Hannah L; Agustus, Jennifer L; Clark, Camilla N; Mummery, Catherine J; Schott, Jonathan M; Crutch, Sebastian J; Warren, Jason D

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching ("looming") or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  10. Dementias show differential physiological responses to salient sounds

    Directory of Open Access Journals (Sweden)

    Phillip David Fletcher

    2015-03-01

    Full Text Available Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (‘looming’) or less salient withdrawing sounds. Pupil dilatation responses and behavioural rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n=10; behavioural variant frontotemporal dementia, n=16; progressive non-fluent aphasia, n=12; amnestic Alzheimer’s disease, n=10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioural response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer’s disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases.

  11. Dementias show differential physiological responses to salient sounds

    Science.gov (United States)

    Fletcher, Phillip D.; Nicholas, Jennifer M.; Shakespeare, Timothy J.; Downey, Laura E.; Golden, Hannah L.; Agustus, Jennifer L.; Clark, Camilla N.; Mummery, Catherine J.; Schott, Jonathan M.; Crutch, Sebastian J.; Warren, Jason D.

    2015-01-01

    Abnormal responsiveness to salient sensory signals is often a prominent feature of dementia diseases, particularly the frontotemporal lobar degenerations, but has been little studied. Here we assessed processing of one important class of salient signals, looming sounds, in canonical dementia syndromes. We manipulated tones using intensity cues to create percepts of salient approaching (“looming”) or less salient withdrawing sounds. Pupil dilatation responses and behavioral rating responses to these stimuli were compared in patients fulfilling consensus criteria for dementia syndromes (semantic dementia, n = 10; behavioral variant frontotemporal dementia, n = 16, progressive nonfluent aphasia, n = 12; amnestic Alzheimer's disease, n = 10) and a cohort of 26 healthy age-matched individuals. Approaching sounds were rated as more salient than withdrawing sounds by healthy older individuals but this behavioral response to salience did not differentiate healthy individuals from patients with dementia syndromes. Pupil responses to approaching sounds were greater than responses to withdrawing sounds in healthy older individuals and in patients with semantic dementia: this differential pupil response was reduced in patients with progressive nonfluent aphasia and Alzheimer's disease relative both to the healthy control and semantic dementia groups, and did not correlate with nonverbal auditory semantic function. Autonomic responses to auditory salience are differentially affected by dementias and may constitute a novel biomarker of these diseases. PMID:25859194

  12. Design guidelines for the use of audio cues in computer interfaces

    Energy Technology Data Exchange (ETDEWEB)

    Sumikawa, D.A.; Blattner, M.M.; Joy, K.I.; Greenberg, R.M.

    1985-07-01

    A logical next step in the evolution of the computer-user interface is the incorporation of sound, thereby using our sense of hearing in our communication with the computer. This allows our visual and auditory capacities to work in unison, leading to a more effective and efficient interpretation of information received from the computer than by sight alone. In this paper we examine earcons, which are audio cues, used in the computer-user interface to provide information and feedback to the user about computer entities (these include messages and functions, as well as states and labels). The material in this paper is part of a larger study that recommends guidelines for the design and use of audio cues in the computer-user interface. The complete work examines the disciplines of music, psychology, communication theory, advertising, and psychoacoustics to discover how sound is utilized and analyzed in those areas. The resulting information is organized according to the theory of semiotics, the theory of signs, into the syntax, semantics, and pragmatics of communication by sound. Here we present design guidelines for the syntax of earcons. Earcons are constructed from motives, short sequences of notes with a specific rhythm and pitch, embellished by timbre, dynamics, and register. Compound earcons and family earcons are introduced. These are related motives that serve to identify a family of related cues. Examples of earcons are given.
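The motive/family construction described above can be illustrated as a short pitch-and-rhythm pattern from which related cues are derived by varying register and dynamics. A minimal sketch; the MIDI-style representation, function names, and the "error" mapping are our illustrative assumptions, not from the guidelines themselves:

```python
# A motive: a short sequence of (MIDI pitch, duration-in-beats) pairs.
BASE_MOTIVE = [(60, 0.25), (64, 0.25), (67, 0.5)]  # C-E-G, short-short-long

def family_variant(motive, octave_shift=0, velocity=80):
    """Derive a related earcon in the same family: identical rhythm and melodic
    contour, with register (octave) and dynamics (velocity) varied."""
    return [(pitch + 12 * octave_shift, dur, velocity) for pitch, dur in motive]

info_earcon = family_variant(BASE_MOTIVE)                                # neutral
error_earcon = family_variant(BASE_MOTIVE, octave_shift=-1, velocity=110)  # lower, louder
```

Because the rhythm and contour are shared, listeners can recognize the family while the register/dynamics variation distinguishes the individual cue.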

  13. Propagation of Sound in a Bose-Einstein Condensate

    International Nuclear Information System (INIS)

    Andrews, M.R.; Kurn, D.M.; Miesner, H.; Durfee, D.S.; Townsend, C.G.; Inouye, S.; Ketterle, W.

    1997-01-01

    Sound propagation has been studied in a magnetically trapped dilute Bose-Einstein condensate. Localized excitations were induced by suddenly modifying the trapping potential using the optical dipole force of a focused laser beam. The resulting propagation of sound was observed using a novel technique, rapid sequencing of nondestructive phase-contrast images. The speed of sound was determined as a function of density and found to be consistent with Bogoliubov theory. This method may generally be used to observe high-lying modes and perhaps second sound. copyright 1997 The American Physical Society

  14. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    Sound is a part of architecture, and sound is complex. Moreover, sound is invisible. How is it then possible to design visual objects that interact with sound? This paper addresses the problem of how to get access to the complexity of sound and how to make textile material reveal the form … geometry by analysing the sound pattern at a specific spot. This analysis is done theoretically with algorithmic systems and practically with waves in water. The paper describes the experiments and the findings, and explains how an analysis of sound can be captured in a textile form.

  15. Post-cueing deficits with maintained cueing benefits in patients with Parkinson's disease dementia

    Directory of Open Access Journals (Sweden)

    Susanne eGräber

    2014-11-01

    Full Text Available In Parkinson's disease (PD), internal cueing mechanisms are impaired, leading to symptoms such as hypokinesia. However, external cues can improve movement execution by using cortical resources. These cortical processes can be affected by cognitive decline in dementia. It is still unclear how dementia in PD influences external cueing. We investigated a group of 25 PD patients with dementia (PDD) and 25 non-demented PD patients (PDnD), matched by age, sex and disease duration, in a simple reaction time (SRT) task using an additional acoustic cue. PDD patients benefited from the additional cue in similar magnitude as did PDnD patients. However, withdrawal of the cue led to a significantly increased reaction time in the PDD group compared to the PDnD patients. Our results indicate that even PDD patients can benefit from strategies using external cue presentation, but the process of cognitive worsening can reduce the effect when cues are withdrawn.

  16. Cue-reactors: individual differences in cue-induced craving after food or smoking abstinence.

    Science.gov (United States)

    Mahler, Stephen V; de Wit, Harriet

    2010-11-10

    Pavlovian conditioning plays a critical role in both drug addiction and binge eating. Recent animal research suggests that certain individuals are highly sensitive to conditioned cues, whether they signal food or drugs. Are certain humans also more reactive to both food and drug cues? We examined cue-induced craving for both cigarettes and food, in the same individuals (n = 15 adult smokers). Subjects viewed smoking-related or food-related images after abstaining from either smoking or eating. Certain individuals reported strong cue-induced craving after both smoking and food cues. That is, subjects who reported strong cue-induced craving for cigarettes also rated stronger cue-induced food craving. In humans, like in nonhumans, there may be a "cue-reactive" phenotype, consisting of individuals who are highly sensitive to conditioned stimuli. This finding extends recent reports from nonhuman studies. Further understanding this subgroup of smokers may allow clinicians to individually tailor therapies for smoking cessation.

  17. Visible propagation from invisible exogenous cueing.

    Science.gov (United States)

    Lin, Zhicheng; Murray, Scott O

    2013-09-20

    Perception and performance are affected not just by what we see but also by what we do not see: inputs that escape our awareness. While conscious processing and unconscious processing have been assumed to be separate and independent, here we report the propagation of unconscious exogenous cueing as determined by conscious motion perception. In a paradigm combining masked exogenous cueing and apparent motion, we show that, when an onset cue was rendered invisible, the unconscious exogenous cueing effect traveled, manifesting at uncued locations (4° apart) in accordance with conscious perception of visual motion; the effect diminished when the cue-to-target distance was 8°. In contrast, conscious exogenous cueing manifested at both distances. Further evidence reveals that the unconscious and conscious nonretinotopic effects could not be explained by an attentional gradient, nor by bottom-up, energy-based motion mechanisms; rather, they were subserved by top-down, tracking-based motion mechanisms. We thus term these effects mobile cueing. Taken together, unconscious mobile cueing effects (a) demonstrate a previously unknown degree of flexibility of unconscious exogenous attention; (b) embody a simultaneous dissociation and association of attention and consciousness, in which exogenous attention can occur without cue awareness ("dissociation"), yet at the same time its effect is contingent on conscious motion tracking ("association"); and (c) underscore the interaction of conscious and unconscious processing, providing evidence for an unconscious effect that is not automatic but controlled.

  18. Oyster larvae settle in response to habitat-associated underwater sounds.

    Science.gov (United States)

    Lillis, Ashlee; Eggleston, David B; Bohnenstiehl, DelWayne R

    2013-01-01

    Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a role in driving …
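The reef/off-reef contrast in this abstract rests on comparing sound levels within the 1.5-20 kHz band. A minimal sketch of such a band-level measurement follows; this is not the authors' analysis code, and the synthetic signals and arbitrary reference level are assumptions for illustration.

```python
import numpy as np

def band_level_db(signal, fs, f_lo=1500.0, f_hi=20000.0):
    """Band-limited level in dB re an arbitrary unit reference,
    via Parseval's theorem on a one-sided FFT."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Mean-square amplitude contributed by the band
    power = 2.0 * np.sum(np.abs(spectrum[band]) ** 2) / len(signal) ** 2
    return 10.0 * np.log10(power + 1e-20)

fs = 48000
t = np.arange(fs) / fs
reef_like = np.sin(2 * np.pi * 5000 * t)      # energy inside the band
off_reef_like = np.sin(2 * np.pi * 200 * t)   # energy below the band
print(band_level_db(reef_like, fs) > band_level_db(off_reef_like, fs))  # True
```

A unit-amplitude sine has mean-square 0.5, so the in-band tone measures about -3 dB while the out-of-band tone contributes essentially nothing to the 1.5-20 kHz level.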

  19. Oyster larvae settle in response to habitat-associated underwater sounds.

    Directory of Open Access Journals (Sweden)

    Ashlee Lillis

    Full Text Available Following a planktonic dispersal period of days to months, the larvae of benthic marine organisms must locate suitable seafloor habitat in which to settle and metamorphose. For animals that are sessile or sedentary as adults, settlement onto substrates that are adequate for survival and reproduction is particularly critical, yet represents a challenge since patchily distributed settlement sites may be difficult to find along a coast or within an estuary. Recent studies have demonstrated that the underwater soundscape, the distinct sounds that emanate from habitats and contain information about their biological and physical characteristics, may serve as a broad-scale environmental cue for marine larvae to find satisfactory settlement sites. Here, we contrast the acoustic characteristics of oyster reef and off-reef soft bottoms, and investigate the effect of habitat-associated estuarine sound on the settlement patterns of an economically and ecologically important reef-building bivalve, the Eastern oyster (Crassostrea virginica). Subtidal oyster reefs in coastal North Carolina, USA show distinct acoustic signatures compared to adjacent off-reef soft bottom habitats, characterized by consistently higher levels of sound in the 1.5-20 kHz range. Manipulative laboratory playback experiments found increased settlement in larval oyster cultures exposed to oyster reef sound compared to unstructured soft bottom sound or no sound treatments. In field experiments, ambient reef sound produced higher levels of oyster settlement in larval cultures than did off-reef sound treatments. The results suggest that oyster larvae have the ability to respond to sounds indicative of optimal settlement sites, and this is the first evidence that habitat-related differences in estuarine sounds influence the settlement of a mollusk. Habitat-specific sound characteristics may represent an important settlement and habitat selection cue for estuarine invertebrates and could play a …

  20. Contextual cueing by global features

    Science.gov (United States)

    Kunar, Melina A.; Flusberg, Stephen J.; Wolfe, Jeremy M.

    2008-01-01

    In visual search tasks, attention can be guided to a target item, appearing amidst distractors, on the basis of simple features (e.g. find the red letter among green). Chun and Jiang’s (1998) “contextual cueing” effect shows that RTs are also speeded if the spatial configuration of items in a scene is repeated over time. In these studies we ask if global properties of the scene can speed search (e.g. if the display is mostly red, then the target is at location X). In Experiment 1a, the overall background color of the display predicted the target location. Here the predictive color could appear 0, 400 or 800 msec in advance of the search array. Mean RTs are faster in predictive than in non-predictive conditions. However, there is little improvement in search slopes. The global color cue did not improve search efficiency. Experiments 1b-1f replicate this effect using different predictive properties (e.g. background orientation/texture, stimulus color, etc.). The results show a strong RT effect of predictive background but (at best) only a weak improvement in search efficiency. A strong improvement in efficiency was found, however, when the informative background was presented 1500 msec prior to the onset of the search stimuli and when observers were given explicit instructions to use the cue (Experiment 2). PMID:17355043

  1. A configural dominant account of contextual cueing : configural cues are stronger than colour cues

    OpenAIRE

    Kunar, Melina A.; Johnston, Rebecca; Sweetman, Hollie

    2013-01-01

    Previous work has shown that reaction times to find a target in displays that have been repeated are faster than those for displays that have never been seen before. This learning effect, termed “contextual cueing” (CC), has been shown using contexts such as the configuration of the distractors in the display and the background colour. However, it is not clear how these two contexts interact to facilitate search. We investigated this here by comparing the strengths of these two cues when they...

  2. Sustained Magnetic Responses in Temporal Cortex Reflect Instantaneous Significance of Approaching and Receding Sounds.

    Directory of Open Access Journals (Sweden)

    Dominik R Bach

    Full Text Available Rising sound intensity often signals an approaching sound source and can serve as a powerful warning cue, eliciting phasic attention, perception biases and emotional responses. How the evaluation of approaching sounds unfolds over time remains elusive. Here, we capitalised on the temporal resolution of magnetoencephalography (MEG) to investigate in humans the dynamic encoding of approaching and receding sounds. We compared magnetic responses to intensity envelopes of complex sounds to those of white-noise sounds, in which intensity change is not perceived as approaching. Sustained magnetic fields over temporal sensors tracked intensity change in complex sounds in an approximately linear fashion, an effect not seen for intensity change in white-noise sounds, or for overall intensity. Hence, these fields are likely to track approach/recession, but not the apparent (instantaneous) distance of the sound source, or its intensity as such. As a likely source of this activity, the bilateral inferior temporal gyrus and right temporo-parietal junction emerged. Our results indicate that discrete temporal cortical areas parametrically encode behavioural significance in moving sound sources, with the signal unfolding in a manner reminiscent of evidence accumulation. This may help an understanding of how acoustic percepts are evaluated as behaviourally relevant, and our results highlight a crucial role of these cortical areas.
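The "approximately linear" tracking reported here can be illustrated with a toy least-squares check. Nothing below comes from the study's pipeline; the simulated envelope, response gain, noise level, and units are all assumptions.

```python
import numpy as np

# Toy check: does a simulated sustained response track a rising
# intensity envelope linearly? Quantify with slope and R^2.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 200)
intensity = 40.0 + 30.0 * t                  # rising envelope, dB-like units
response = 0.05 * intensity + rng.normal(0.0, 0.1, t.size)  # noisy tracking

slope, intercept = np.polyfit(intensity, response, 1)
pred = slope * intensity + intercept
r2 = 1.0 - (np.sum((response - pred) ** 2)
            / np.sum((response - response.mean()) ** 2))
print(slope, r2)  # positive slope, R^2 close to 1: a linear-tracking signature
```

A flat or saturating response would instead show up as a near-zero slope or a poor linear fit, which is the kind of contrast the abstract draws against white-noise intensity change.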

  3. Retrieval of bilingual autobiographical memories: effects of cue language and cue imageability.

    Science.gov (United States)

    Mortensen, Linda; Berntsen, Dorthe; Bohn, Ocke-Schwen

    2015-01-01

    An important issue in theories of bilingual autobiographical memory is whether linguistically encoded memories are represented in language-specific stores or in a common language-independent store. Previous research has found that autobiographical memory retrieval is facilitated when the language of the cue is the same as the language of encoding, consistent with language-specific memory stores. The present study examined whether this language congruency effect is influenced by cue imageability. Danish-English bilinguals retrieved autobiographical memories in response to Danish and English high- or low-imageability cues. Retrieval latencies were shorter to Danish than English cues and shorter to high- than low-imageability cues. Importantly, the cue language effect was stronger for low- than high-imageability cues. To examine the relationship between cue language and the language of internal retrieval, participants identified the language in which the memories were internally retrieved. More memories were retrieved when the cue language was the same as the internal language than when the cue was in the other language, and more memories were identified as being internally retrieved in Danish than English, regardless of the cue language. These results provide further evidence for language congruency effects in bilingual memory and suggest that this effect is influenced by cue imageability.

  4. Cue combination encoding via contextual modulation of V1 and V2 neurons

    Directory of Open Access Journals (Sweden)

    Zarella MD

    2016-10-01

    Full Text Available Mark D Zarella, Daniel Y Ts’o Department of Neurosurgery, SUNY Upstate Medical University, Syracuse, NY, USA Abstract: Neurons in early visual cortical areas encode the local properties of a stimulus in a number of different feature dimensions such as color, orientation, and motion. It has been shown, however, that stimuli presented well beyond the confines of the classical receptive field can augment these responses in a way that emphasizes these local attributes within the greater context of the visual scene. This mechanism imparts global information to cells that are otherwise considered local feature detectors and can potentially serve as an important foundation for surface segmentation, texture representation, and figure–ground segregation. The role of early visual cortex toward these functions remains somewhat of an enigma, as it is unclear how surface segmentation cues are integrated from multiple feature dimensions. We examined the impact of orientation- and motion-defined surface segmentation cues in V1 and V2 neurons using a stimulus in which the two features are completely separable. We find that, although some cells are modulated in a cue-invariant manner, many cells are influenced by only one cue or the other. Furthermore, cells that are modulated by both cues tend to be more strongly affected when both cues are presented together than when presented individually. These results demonstrate two mechanisms by which cue combinations can enhance salience. We find that feature-specific populations are more frequently encountered in V1, while cue additivity is more prominent in V2. These results highlight how two strongly interconnected areas at different stages in the cortical hierarchy can potentially contribute to scene segmentation. Keywords: striate, extrastriate, extraclassical, texture, segmentation

  5. Analysis of engagement behavior in children during dyadic interactions using prosodic cues.

    Science.gov (United States)

    Gupta, Rahul; Bone, Daniel; Lee, Sungbok; Narayanan, Shrikanth

    2016-05-01

    Child engagement is defined as the interaction of a child with his/her environment in a contextually appropriate manner. Engagement behavior in children is linked to socio-emotional and cognitive state assessment with enhanced engagement identified with improved skills. A vast majority of studies however rely solely, and often implicitly, on subjective perceptual measures of engagement. Access to automatic quantification could assist researchers/clinicians to objectively interpret engagement with respect to a target behavior or condition, and furthermore inform mechanisms for improving engagement in various settings. In this paper, we present an engagement prediction system based exclusively on vocal cues observed during structured interaction between a child and a psychologist involving several tasks. Specifically, we derive prosodic cues that capture engagement levels across the various tasks. Our experiments suggest that a child's engagement is reflected not only in the vocalizations, but also in the speech of the interacting psychologist. Moreover, we show that prosodic cues are informative of the engagement phenomena not only as characterized over the entire task (i.e., global cues), but also in short term patterns (i.e., local cues). We perform a classification experiment assigning the engagement of a child into three discrete levels achieving an unweighted average recall of 55.8% (chance is 33.3%). While the systems using global cues and local level cues are each statistically significant in predicting engagement, we obtain the best results after fusing these two components. We perform further analysis of the cues at local and global levels to achieve insights linking specific prosodic patterns to the engagement phenomenon. We observe that while the performance of our model varies with task setting and interacting psychologist, there exist universal prosodic patterns reflective of engagement.
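The headline figure of 55.8% unweighted average recall (UAR) against a 33.3% chance level is straightforward to reproduce as a metric. A minimal sketch, with made-up labels and predictions for the three engagement levels:

```python
import numpy as np

def unweighted_average_recall(y_true, y_pred, n_classes=3):
    """Mean of per-class recalls, so each class counts equally
    regardless of how many examples it has."""
    recalls = []
    for c in range(n_classes):
        mask = y_true == c
        if mask.any():
            recalls.append(np.mean(y_pred[mask] == c))
    return float(np.mean(recalls))

# Made-up example with three engagement levels (0, 1, 2).
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
# Per-class recalls: 1/2, 2/2, 1/2 -> UAR = 2/3
print(unweighted_average_recall(y_true, y_pred))
```

UAR is preferred over plain accuracy when classes are imbalanced, which is why chance here is exactly 1/3 for three levels.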

  6. The cue is key : design for real-life remembering

    NARCIS (Netherlands)

    Hoven, van den E.A.W.H.; Eggen, J.H.

    2014-01-01

    This paper aims to put the memory cue in the spotlight. We will show how memory cues are incorporated in the area of interaction design. The focus will be on external memory cues: cues that exist outside the human mind but have an internal effect on memory reconstruction. Examples of external cues

  7. Sound & The Society

    DEFF Research Database (Denmark)

    Schulze, Holger

    2014-01-01

    How are those sounds you hear right now socially constructed and evaluated, how are they architecturally conceptualized, and how dependent on urban planning, industrial developments and political decisions are they really? How is your ability to hear intertwined with social interactions and their professional design? And how are listening and sounding deeply social activities, constructing our way of living together in cities as well as in apartment houses? A radio feature with Nina Backmann, Jochen Bonz, Stefan Krebs, Esther Schelander & Holger Schulze.

  8. Urban Sound Interfaces

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2012-01-01

    This paper draws on the theories of Michel de Certeau and Gaston Bachelard to discuss how media architecture, in the form of urban sound interfaces, can help us perceive the complexity of the spaces we inhabit, by exploring the history and the narratives of the places in which we live. In this paper, three sound works are discussed in relation to the iPod, which is considered as a more private way to explore urban environments, and as a way to control the individual perception of urban spaces.

  9. Predicting outdoor sound

    CERN Document Server

    Attenborough, Keith; Horoshenkov, Kirill

    2014-01-01

    1. Introduction  2. The Propagation of Sound Near Ground Surfaces in a Homogeneous Medium  3. Predicting the Acoustical Properties of Outdoor Ground Surfaces  4. Measurements of the Acoustical Properties of Ground Surfaces and Comparisons with Models  5. Predicting Effects of Source Characteristics on Outdoor Sound  6. Predictions, Approximations and Empirical Results for Ground Effect Excluding Meteorological Effects  7. Influence of Source Motion on Ground Effect and Diffraction  8. Predicting Effects of Mixed Impedance Ground  9. Predicting the Performance of Outdoor Noise Barriers  10. Predicting Effects of Vegetation, Trees and Turbulence  11. Analytical Approximations including Ground Effect, Refraction and Turbulence  12. Prediction Schemes  13. Predicting Sound in an Urban Environment.

  10. Parents accidentally substitute similar sounding sibling names more often than dissimilar names.

    Directory of Open Access Journals (Sweden)

    Zenzi M Griffin

    Full Text Available When parents select similar sounding names for their children, do they set themselves up for more speech errors in the future? Questionnaire data from 334 respondents suggest that they do. Respondents whose names shared initial or final sounds with a sibling's name reported that their parents accidentally called them by the sibling's name more often than those without such name overlap. Having a sibling of the same gender, similar appearance, or similar age was also associated with more frequent name substitutions. Almost all other name substitutions by parents involved other family members, and over 5% of respondents reported a parent substituting the name of a pet, which suggests a strong role for social and situational cues in retrieving personal names for direct address. To the extent that retrieval cues are shared with other people or animals, other names become available and may substitute for the intended name, particularly when names sound similar.

  11. Sound & The Senses

    DEFF Research Database (Denmark)

    Schulze, Holger

    2012-01-01

    How are those sounds you hear right now technically generated and post-produced, how are they aesthetically conceptualized, and how culturally dependent are they really? How is your ability to hear intertwined with all the other senses and their cultural, biographical and technological construction over time? And how are listening and sounding deeply social activities, constructing our way of living together in cities as well as in apartment houses? A radio feature with Jonathan Sterne, AGF a.k.a. Antye Greie, Jens Gerrit Papenburg & Holger Schulze.

  12. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2013-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers. All audio topics are explored: if you work on anything related to audio you should not be without this book! The 4th edition of this trusted reference has been updated to reflect changes in the industry since the publication of the 3rd edition in 2002 -- including new technologies like software-based recording systems such as Pro Tools and Sound Forge; digital recording using MP3, wave files and others; mobile audio devices such as iPods and MP3 players. Over 40 topics …

  13. Sound for digital video

    CERN Document Server

    Holman, Tomlinson

    2013-01-01

    Achieve professional quality sound on a limited budget! Harness all new, Hollywood style audio techniques to bring your independent film and video productions to the next level.In Sound for Digital Video, Second Edition industry experts Tomlinson Holman and Arthur Baum give you the tools and knowledge to apply recent advances in audio capture, video recording, editing workflow, and mixing to your own film or video with stunning results. This fresh edition is chockfull of techniques, tricks, and workflow secrets that you can apply to your own projects from preproduction

  14. Beacons of Sound

    DEFF Research Database (Denmark)

    Knakkergaard, Martin

    2018-01-01

    The chapter discusses expectations and imaginations vis-à-vis the concert hall of the twenty-first century. It outlines some of the central historical implications of western culture’s haven for sounding music. Based on the author’s study of the Icelandic concert-house Harpa, the chapter considers how these implications, together with the prime mover’s visions, have been transformed as private investors and politicians took over. The chapter furthermore investigates the objectives regarding musical sound and the far-reaching demands concerning acoustics that modern concert halls are required to meet.

  15. Neuroplasticity beyond sounds

    DEFF Research Database (Denmark)

    Reybrouck, Mark; Brattico, Elvira

    2015-01-01

    Capitalizing on neuroscience knowledge about how individuals are affected by the sound environment, we propose to adopt a cybernetic and ecological point of view on the musical aesthetic experience, which includes subprocesses such as feature extraction and integration, early affective reactions and motor actions, style mastering and conceptualization, emotion and proprioception, and evaluation and preference. In this perspective, the role of the listener/composer/performer is seen as that of an active "agent" coping in highly individual ways with the sounds. The findings concerning the neural …

  16. Eliciting Sound Memories.

    Science.gov (United States)

    Harris, Anna

    2015-11-01

    Sensory experiences are often considered triggers of memory, most famously a little French cake dipped in lime blossom tea. Sense memory can also be evoked in public history research through techniques of elicitation. In this article I reflect on different social science methods for eliciting sound memories such as the use of sonic prompts, emplaced interviewing, and sound walks. I include examples from my research on medical listening. The article considers the relevance of this work for the conduct of oral histories, arguing that such methods "break the frame," allowing room for collaborative research connections and insights into the otherwise unarticulatable.

  17. SoleSound

    DEFF Research Database (Denmark)

    Zanotto, Damiano; Turchet, Luca; Boggs, Emily Marie

    2014-01-01

    This paper introduces the design of SoleSound, a wearable system designed to deliver ecological, audio-tactile, underfoot feedback. The device, which primarily targets clinical applications, uses an audio-tactile footstep synthesis engine informed by the readings of pressure and inertial sensors embedded in the footwear to integrate enhanced feedback modalities into the authors' previously developed instrumented footwear. The synthesis models currently implemented in SoleSound simulate different ground surface interactions. Unlike similar devices, the system presented here is fully portable …

  18. The Accuracy Enhancing Effect of Biasing Cues

    NARCIS (Netherlands)

    W. Vanhouche (Wouter); S.M.J. van Osselaer (Stijn)

    2009-01-01

    Extrinsic cues such as price and irrelevant attributes have been shown to bias consumers’ product judgments. Results in this article replicate those findings in pretrial judgments but show that such biasing cues can improve quality judgments at a later point in time. Initially biasing …

  19. Auditory Emotional Cues Enhance Visual Perception

    Science.gov (United States)

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  20. Cue Reliance in L2 Written Production

    Science.gov (United States)

    Wiechmann, Daniel; Kerz, Elma

    2014-01-01

    Second language learners reach expert levels in relative cue weighting only gradually. On the basis of ensemble machine learning models fit to naturalistic written productions of German advanced learners of English and expert writers, we set out to reverse engineer differences in the weighting of multiple cues in a clause linearization problem. We…

  1. Contextual Cueing Effects across the Lifespan

    Science.gov (United States)

    Merrill, Edward C.; Conners, Frances A.; Roskos, Beverly; Klinger, Mark R.; Klinger, Laura Grofer

    2013-01-01

    The authors evaluated age-related variations in contextual cueing, which reflects the extent to which visuospatial regularities can facilitate search for a target. Previous research produced inconsistent results regarding contextual cueing effects in young children and in older adults, and no study has investigated the phenomenon across the lifespan.

  2. Cues for haptic perception of compliance

    NARCIS (Netherlands)

    Bergmann Tiest, W.M.; Kappers, A.M.L.

    2009-01-01

    For the perception of the hardness of compliant materials, several cues are available. In this paper, the relative roles of force/displacement and surface deformation cues are investigated. We have measured discrimination thresholds with silicone rubber stimuli of differing thickness and compliance.

  3. How rats combine temporal cues.

    Science.gov (United States)

    Guilhardi, Paulo; Keen, Richard; MacInnis, Mika L M; Church, Russell M

    2005-05-31

    The procedures for classical and operant conditioning, and for many timing procedures, involve the delivery of reinforcers that may be related to the time of previous reinforcers and responses, and to the time of onsets and terminations of stimuli. The behavior resulting from such procedures can be described as bouts of responding that occur in some pattern at some rate. A packet theory of timing and conditioning is described that accounts for such behavior under a wide range of procedures. Applications include the food searching by rats in Skinner boxes under conditions of fixed and random reinforcement, brief and sustained stimuli, and several response-food contingencies. The approach is used to describe how multiple cues from reinforcers and stimuli combine to determine the rate and pattern of response bouts.

  4. Kin-informative recognition cues in ants

    DEFF Research Database (Denmark)

    Nehring, Volker; Evison, Sophie E F; Santorelli, Lorenzo A

    2011-01-01

    behaviour is thought to be rare in one of the classic examples of cooperation--social insect colonies--because the colony-level costs of individual selfishness select against cues that would allow workers to recognize their closest relatives. In accord with this, previous studies of wasps and ants have...... found little or no kin information in recognition cues. Here, we test the hypothesis that social insects do not have kin-informative recognition cues by investigating the recognition cues and relatedness of workers from four colonies of the ant Acromyrmex octospinosus. Contrary to the theoretical...... prediction, we show that the cuticular hydrocarbons of ant workers in all four colonies are informative enough to allow full-sisters to be distinguished from half-sisters with a high accuracy. These results contradict the hypothesis of non-heritable recognition cues and suggest that there is more potential...

  5. Multiscale Cues Drive Collective Cell Migration

    Science.gov (United States)

    Nam, Ki-Hwan; Kim, Peter; Wood, David K.; Kwon, Sunghoon; Provenzano, Paolo P.; Kim, Deok-Ho

    2016-07-01

To investigate complex biophysical relationships driving directed cell migration, we developed a biomimetic platform that allows perturbation of microscale geometric constraints with concomitant nanoscale contact guidance architectures. This permits us to elucidate the influence, and parse out the relative contribution, of multiscale features, and define how these physical inputs are jointly processed with oncogenic signaling. We demonstrate that collective cell migration is profoundly enhanced by the addition of contact guidance cues when not otherwise constrained. However, while nanoscale cues promoted migration in all cases, microscale directed migration cues are dominant as the geometric constraint narrows, a behavior that is well explained by stochastic diffusion anisotropy modeling. Further, oncogene activation (i.e., mutant PIK3CA) resulted in profoundly increased migration, where extracellular multiscale directed migration cues and intrinsic signaling synergistically conspire to greatly outperform normal cells or any extracellular guidance cues in isolation.

  6. Sound Symbolism in Basic Vocabulary

    Directory of Open Access Journals (Sweden)

    Søren Wichmann

    2010-04-01

Full Text Available The relationship between the meanings of words and their sound shapes is to a large extent arbitrary, but it is well known that languages exhibit sound symbolism effects that violate arbitrariness. Evidence for sound symbolism is typically anecdotal, however. Here we present a systematic approach. Using a selection of basic vocabulary in nearly one half of the world’s languages, we find commonalities among sound shapes for words referring to the same concepts. These are interpreted as due to sound symbolism. Studying the effects of sound symbolism cross-linguistically is of key importance for the understanding of language evolution.

  7. ABOUT SOUNDS IN VIDEO GAMES

    Directory of Open Access Journals (Sweden)

    Denikin Anton A.

    2012-12-01

Full Text Available The article considers the aesthetic and practical possibilities of sound (sound design) in video games and interactive applications. It outlines the key features of game sound, such as simulation, representativeness, interactivity, immersion, randomization, and audio-visuality. The author defines the basic terminology in the study of game audio and identifies significant aesthetic differences between film sound and sound in video game projects. The article attempts to determine techniques of art analysis suited to the study of video games, including the aesthetics of their sounds, and offers a range of research methods that consider video game scoring as a contemporary creative practice.

  8. Exploring Sound with Insects

    Science.gov (United States)

    Robertson, Laura; Meyer, John R.

    2010-01-01

    Differences in insect morphology and movement during singing provide a fascinating opportunity for students to investigate insects while learning about the characteristics of sound. In the activities described here, students use a free online computer software program to explore the songs of the major singing insects and experiment with making…

  9. Second sound tracking system

    Science.gov (United States)

    Yang, Jihee; Ihas, Gary G.; Ekdahl, Dan

    2017-10-01

It is common for a physical system to resonate at a particular frequency, one that depends on physical parameters which may change in time. Often, one would like to automatically track this signal as the frequency changes, measuring, for example, its amplitude. In scientific research, one would also like to use standard methods, such as lock-in amplifiers, to improve the signal-to-noise ratio. We present a complete He ii second sound system that uses positive feedback to generate a sinusoidal signal of constant amplitude via automatic gain control. This signal is used to produce temperature/entropy waves (second sound) in superfluid helium-4 (He ii). A lock-in amplifier limits the oscillation to a desirable frequency and demodulates the received sound signal. Using this tracking system, a second sound signal probed turbulent decay in He ii. We present results showing that the tracking system is more reliable than a conventional fixed-frequency method; there is less correlation with temperature (frequency) fluctuation when the tracking system is used.
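The demodulation step described here can be illustrated with a minimal lock-in sketch: mix the received signal with quadrature reference sinusoids at the tracked frequency, then low-pass the products (here simply averaged over the whole record). This is a generic illustration of lock-in demodulation, not the authors' instrument; the function name and parameters are assumptions.

```python
import numpy as np

def lockin_amplitude(signal, fs, f_ref):
    """Recover the amplitude of the component of `signal` at f_ref (Hz)
    by lock-in demodulation: multiply by in-phase and quadrature
    references, then low-pass by averaging over the record."""
    t = np.arange(len(signal)) / fs
    i = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # in-phase product
    q = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # quadrature product
    return 2 * np.hypot(i, q)  # amplitude, independent of signal phase
```

Using both quadratures makes the estimate independent of the unknown phase of the resonance; a real lock-in replaces the whole-record average with a configurable low-pass filter so the amplitude can be tracked as it evolves in time.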

  10. See This Sound

    DEFF Research Database (Denmark)

    Kristensen, Thomas Bjørnsten

    2009-01-01

Review of the exhibition See This Sound at Lentos Kunstmuseum Linz, Austria, which marks the provisional culmination of a collaboration between Lentos Kunstmuseum and the Ludwig Boltzmann Institute Media.Art.Research. Beyond the exhibition itself, the collaboration is conceived as an ambitious, interdisciplinary...

  11. Photoacoustic Sounds from Meteors.

    Energy Technology Data Exchange (ETDEWEB)

    Spalding, Richard E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tencer, John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sweatt, William C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hogan, Roy E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boslough, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Spurny, Pavel [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic)

    2015-03-01

    High-speed photometric observations of meteor fireballs have shown that they often produce high-amplitude light oscillations with frequency components in the kHz range, and in some cases exhibit strong millisecond flares. We built a light source with similar characteristics and illuminated various materials in the laboratory, generating audible sounds. Models suggest that light oscillations and pulses can radiatively heat dielectric materials, which in turn conductively heats the surrounding air on millisecond timescales. The sound waves can be heard if the illuminated material is sufficiently close to the observer’s ears. The mechanism described herein may explain many reports of meteors that appear to be audible while they are concurrently visible in the sky and too far away for sound to have propagated to the observer. This photoacoustic (PA) explanation provides an alternative to electrophonic (EP) sounds hypothesized to arise from electromagnetic coupling of plasma oscillation in the meteor wake to natural antennas in the vicinity of an observer.

  12. Sound of Stockholm

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh

    2013-01-01

With only four years behind it, Sound of Stockholm is a relative newcomer to the international festival landscape. The festival reportedly grew out of a greater or lesser frustration that the various associations and organizations of the Swedish experimental music scene were treading on one another's toes, and...

  13. Making Sense of Sound

    Science.gov (United States)

    Menon, Deepika; Lankford, Deanna

    2016-01-01

    From the earliest days of their lives, children are exposed to all kinds of sound, from soft, comforting voices to the frightening rumble of thunder. Consequently, children develop their own naïve explanations largely based upon their experiences with phenomena encountered every day. When new information does not support existing conceptions,…

  14. The Sounds of Metal

    DEFF Research Database (Denmark)

    Grund, Cynthia M.

    2015-01-01

    Two, I propose that this framework allows for at least a theoretical distinction between the way in which extreme metal – e.g. black metal, doom metal, funeral doom metal, death metal – relates to its sound as music and the way in which much other music may be conceived of as being constituted...

  15. The Universe of Sound

    CERN Multimedia

    CERN. Geneva

    2013-01-01

Sound sculptor Bill Fontana, the second winner of the Prix Ars Electronica Collide@CERN residency award, and his science inspiration partner, CERN cosmologist Subodh Patil, present their work in art and science at the CERN Globe of Science and Innovation on 4 July 2013 at 19:00.

  16. Urban Sound Ecologies

    DEFF Research Database (Denmark)

    Groth, Sanne Krogh; Samson, Kristine

    2013-01-01

    . The article concludes that the ways in which recent sound installations work with urban ecologies vary. While two of the examples blend into the urban environment, the other transfers the concert format and its mode of listening to urban space. Last, and in accordance with recent soundscape research, we point...

  17. Cue-induced craving among inhalant users: Development and preliminary validation of a visual cue paradigm.

    Science.gov (United States)

    Jain, Shobhit; Dhawan, Anju; Kumaran, S Senthil; Pattanayak, Raman Deep; Jain, Raka

    2017-12-01

Cue-induced craving is known to be associated with a higher risk of relapse: drug-specific cues become conditioned stimuli, eliciting conditioned responses. Cue-reactivity paradigms are important tools for studying psychological responses and functional neuroimaging changes. However, to date, there has been no specific study or validated paradigm for inhalant cue-induced craving research. This study aimed to develop and validate a visual cue stimulus for inhalant cue-associated craving. The first step (picture selection) involved screening and careful selection of 30 cue and 30 neutral pictures based on their relevance to naturalistic settings. In the second step (time optimization), a random selection of ten cue pictures each was presented for 4 s, 6 s, and 8 s to seven adolescent male inhalant users, and pre-post craving response was compared using a Visual Analogue Scale (VAS) for each picture and duration. In the third step (validation), craving responses for each of the 30 cue and 30 neutral pictures were analysed among 20 adolescent inhalant users. Findings revealed a significant difference between pre- and post-exposure craving responses for the cue pictures, but not the neutral pictures. Using a ROC curve, pictures were ranked by craving intensity. Finally, the 20 best cue and 20 neutral pictures were used to build a 480 s visual cue paradigm. This is the first study to systematically develop an inhalant cue picture paradigm that can be used as a tool to examine cue-induced craving in neurobiological studies. Further research, including validation in larger and more diverse samples, is required. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Retro-dimension-cue benefit in visual working memory

    OpenAIRE

    Ye, Chaoxiong; Hu, Zhonghua; Ristaniemi, Tapani; Gendron, Maria; Liu, Qiang

    2016-01-01

In visual working memory (VWM) tasks, participants' performance can be improved by a retro-object-cue. However, previous studies have not investigated whether participants' performance can also be improved by a retro-dimension-cue. Three experiments investigated this issue. We used a recall task with a retro-dimension-cue in all experiments. In Experiment 1, we found benefits from retro-dimension-cues compared to neutral cues. This retro-dimension-cue benefit is reflected in an increased prob...

  19. Contribution of self-motion perception to acoustic target localization.

    Science.gov (United States)

    Pettorossi, V E; Brosch, M; Panichi, R; Botti, F; Grassi, S; Troiani, D

    2005-05-01

The findings of this study suggest that acoustic spatial perception during head movement is achieved by the vestibular system, which is responsible for the correct dynamics of acoustic target pursuit. The ability to localize sounds in space during whole-body rotation relies on the auditory localization system, which recognizes the position of a sound in a head-related frame, and on the sensory systems that perceive head and body movement, namely the vestibular system. The aim of this study was to analyse the contribution of head motion cues to the spatial representation of acoustic targets in humans. Healthy subjects standing on a rotating platform in the dark were asked to pursue, with a laser pointer, an acoustic target that was horizontally rotated while the body was kept stationary, or that was held stationary while the whole body was rotated. The contribution of head motion to the spatial acoustic representation could be inferred by comparing the gains and phases of the pursuit in the two experimental conditions as the frequency was varied. During acoustic target rotation there was a reduction in the gain and an increase in the phase lag, while during whole-body rotations the gain tended to increase and the phase remained constant. The different contributions of the vestibular and acoustic systems were confirmed by analysing the acoustic pursuit during asymmetric body rotation. In this particular condition, in which self-motion perception gradually diminished, an increasing delay in target pursuit was observed.

  20. Parameterizing Sound: Design Considerations for an Environmental Sound Database

    Science.gov (United States)

    2015-04-01

associated with, or produced by, a physical event or human activity and 2) sound sources that are common in the environment. Reproductions or sound...

  1. Cross-modal cueing in audiovisual spatial attention

    DEFF Research Database (Denmark)

    Blurton, Steven Paul; Greenlee, Mark W.; Gondan, Matthias

    2015-01-01

    effects have been reported for endogenous visual cues while exogenous cues seem to be mostly ineffective. In three experiments, we investigated cueing effects on the processing of audiovisual signals. In Experiment 1 we used endogenous cues to investigate their effect on the detection of auditory, visual......, and audiovisual targets presented with onset asynchrony. Consistent cueing effects were found in all target conditions. In Experiment 2 we used exogenous cues and found cueing effects only for visual target detection, but not auditory target detection. In Experiment 3 we used predictive exogenous cues to examine...

  2. Task-irrelevant novel sounds improve attentional performance in children with and without ADHD

    Directory of Open Access Journals (Sweden)

    Jana eTegelbeckers

    2016-01-01

Full Text Available Task-irrelevant salient stimuli involuntarily capture attention and can lead to distraction from an ongoing task, especially in children with ADHD. However, there has been tentative evidence that the presentation of novel sounds can have beneficial effects on cognitive performance. In the present study, we aimed to investigate the influence of novel sounds, compared to no sound and a repeatedly presented standard sound, on attentional performance in children and adolescents with and without ADHD. We therefore had 32 patients with ADHD and 32 typically developing children and adolescents (8 to 13 years) execute a flanker task in which each trial was preceded either by a repeatedly presented standard sound (33%), an unrepeated novel sound (33%), or no auditory stimulation (33%). Task-irrelevant novel sounds facilitated attentional performance similarly in children with and without ADHD, as indicated by reduced omission error rates, reaction times, and reaction time variability, without compromising performance accuracy. By contrast, standard sounds, while also reducing omission error rates and reaction times, led to increased commission error rates. Therefore, the beneficial effect of novel sounds exceeds cueing of the target display, potentially through increased alerting and/or enhanced behavioral control.

  3. Action experience changes attention to kinematic cues

    Directory of Open Access Journals (Sweden)

    Courtney eFilippi

    2016-02-01

Full Text Available The current study used remote corneal reflection eye-tracking to examine the relationship between motor experience and action anticipation in 13-month-old infants. To measure online anticipation of actions, infants watched videos in which the actor's hand provided kinematic information (in its orientation) about the type of object that the actor was going to reach for. The actor's hand orientation either matched the orientation of a rod (congruent cue) or did not match the orientation of the rod (incongruent cue). To examine relations between motor experience and action anticipation, we used a 2 (reach first vs. observe first) x 2 (congruent kinematic cue vs. incongruent kinematic cue) between-subjects design. We show that 13-month-old infants in the observe first condition spontaneously generate rapid online visual predictions to congruent hand orientation cues and do not visually anticipate when presented with incongruent cues. We further demonstrate that the speed with which these infants generate predictions to congruent motor cues is correlated with their own ability to pre-shape their hands. Finally, we demonstrate that following reaching experience, infants generate rapid predictions to both congruent and incongruent hand shape cues, suggesting that short-term experience changes attention to kinematics.

  4. Cues of maternal condition influence offspring selfishness.

    Science.gov (United States)

    Wong, Janine W Y; Lucas, Christophe; Kölliker, Mathias

    2014-01-01

    The evolution of parent-offspring communication was mostly studied from the perspective of parents responding to begging signals conveying information about offspring condition. Parents should respond to begging because of the differential fitness returns obtained from their investment in offspring that differ in condition. For analogous reasons, offspring should adjust their behavior to cues/signals of parental condition: parents that differ in condition pay differential costs of care and, hence, should provide different amounts of food. In this study, we experimentally tested in the European earwig (Forficula auricularia) if cues of maternal condition affect offspring behavior in terms of sibling cannibalism. We experimentally manipulated female condition by providing them with different amounts of food, kept nymph condition constant, allowed for nymph exposure to chemical maternal cues over extended time, quantified nymph survival (deaths being due to cannibalism) and extracted and analyzed the females' cuticular hydrocarbons (CHC). Nymph survival was significantly affected by chemical cues of maternal condition, and this effect depended on the timing of breeding. Cues of poor maternal condition enhanced nymph survival in early broods, but reduced nymph survival in late broods, and vice versa for cues of good condition. Furthermore, female condition affected the quantitative composition of their CHC profile which in turn predicted nymph survival patterns. Thus, earwig offspring are sensitive to chemical cues of maternal condition and nymphs from early and late broods show opposite reactions to the same chemical cues. Together with former evidence on maternal sensitivities to condition-dependent nymph chemical cues, our study shows context-dependent reciprocal information exchange about condition between earwig mothers and their offspring, potentially mediated by cuticular hydrocarbons.

  6. Brain response to prosodic boundary cues depends on boundary position

    Directory of Open Access Journals (Sweden)

    Julia eHolzgrefe

    2013-07-01

Full Text Available Prosodic information is crucial for spoken language comprehension and especially for syntactic parsing, because prosodic cues guide the hearer's syntactic analysis. The time course and mechanisms of this interplay of prosody and syntax are not yet well understood. In particular, there is an ongoing debate whether local prosodic cues are taken into account automatically or whether they are processed in relation to the global prosodic context in which they appear. The present study explores whether the perception of a prosodic boundary is affected by its position within an utterance. In an event-related potential (ERP) study we tested whether the brain response evoked by a prosodic boundary differs when the boundary occurs early in a list of three names connected by conjunctions (i.e., after the first name) as compared to later in the utterance (i.e., after the second name). A closure positive shift (CPS), marking the processing of a prosodic phrase boundary, was elicited only for stimuli with a late boundary, but not for stimuli with an early boundary. This result is further evidence for an immediate integration of prosodic information into the parsing of an utterance. In addition, it shows that the processing of prosodic boundary cues depends on the previously processed information from the preceding prosodic context.

  7. Brain mechanisms underlying the tracking and localization of dynamic cues

    OpenAIRE

    López Pigozzi, Diego

    2013-01-01

Correctly localizing and tracking the dynamic cues present in the environment is a crucial task for the individual. Fundamental behaviors such as hunting, mating, or escape require accurate identification of the position of prey, conspecifics, and predators. The brain system in charge of localizing the subject itself within the environment is known to reside in the hippocampal formation, after various studies have dem...

  8. Verbal cues affect detection but not localization responses

    NARCIS (Netherlands)

    Mortier, K.; van Zoest, W.; Meeter, M.; Theeuwes, J.

    2010-01-01

    Many theories assume that preknowledge of an upcoming target helps visual selection. In those theories, a top-down set can alter the salience of the target, such that attention can be deployed to the target more efficiently and responses are faster. Evidence for this account stems from visual search

  9. Product sounds : Fundamentals and application

    NARCIS (Netherlands)

    Ozcan-Vieira, E.

    2008-01-01

    Products are ubiquitous, so are the sounds emitted by products. Product sounds influence our reasoning, emotional state, purchase decisions, preference, and expectations regarding the product and the product's performance. Thus, auditory experience elicited by product sounds may not be just about

  10. Sonic mediations: body, sound, technology

    NARCIS (Netherlands)

    Birdsall, C.; Enns, A.

    2008-01-01

    Sonic Mediations: Body, Sound, Technology is a collection of original essays that represents an invaluable contribution to the burgeoning field of sound studies. While sound is often posited as having a bridging function, as a passive in-between, this volume invites readers to rethink the concept of

  11. System for actively reducing sound

    NARCIS (Netherlands)

    Berkhoff, Arthur P.

    2005-01-01

    A system for actively reducing sound from a primary noise source, such as traffic noise, comprising: a loudspeaker connector for connecting to at least one loudspeaker for generating anti-sound for reducing said noisy sound; a microphone connector for connecting to at least a first microphone placed

  12. Wood for sound.

    Science.gov (United States)

    Wegst, Ulrike G K

    2006-10-01

The unique mechanical and acoustical properties of wood and its aesthetic appeal still make it the material of choice for musical instruments and the interiors of concert halls. Worldwide, several hundred wood species are available for making wind, string, or percussion instruments. Over generations, first by trial and error and more recently by scientific approach, the most appropriate species were found for each instrument and application. Using material property charts, on which acoustic properties such as the speed of sound, the characteristic impedance, the sound radiation coefficient, and the loss coefficient are plotted against one another for woods, we analyze and explain why spruce is the preferred choice for soundboards, why tropical species are favored for xylophone bars and woodwind instruments, why violinists still prefer pernambuco over other species as a bow material, and why hornbeam and birch are used in piano actions.
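The acoustic figures of merit named in the abstract all follow from Young's modulus E and density ρ: the speed of sound c = √(E/ρ), the characteristic impedance z = √(Eρ), and the sound radiation coefficient R = √(E/ρ³). A small sketch computing them; the spruce values plugged in are rough, assumed figures for illustration only, not data from the paper.

```python
import math

def acoustic_properties(E, rho):
    """Acoustic figures of merit used to compare tonewoods.

    E   : Young's modulus along the grain, Pa
    rho : density, kg/m^3
    """
    c = math.sqrt(E / rho)        # speed of sound, m/s
    z = math.sqrt(E * rho)        # characteristic impedance, Pa*s/m^3
    R = math.sqrt(E / rho**3)     # sound radiation coefficient
    return c, z, R

# Illustrative (assumed) values for spruce along the grain
c, z, R = acoustic_properties(E=10e9, rho=400.0)
```

A stiff, low-density wood maximizes R, which is consistent with the abstract's point that spruce, light yet stiff along the grain, is the preferred soundboard material.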

  13. Sounds in context

    DEFF Research Database (Denmark)

    Weed, Ethan

    A sound is never just a sound. It is becoming increasingly clear that auditory processing is best thought of not as a one-way afferent stream, but rather as an ongoing interaction between interior processes and the environment. Even the earliest stages of auditory processing in the nervous system...... time-course of contextual influence on auditory processing in three different paradigms: a simple mismatch negativity paradigm with tones of differing pitch, a multi-feature mismatch negativity paradigm in which tones were embedded in a complex musical context, and a cross-modal paradigm, in which...... auditory processing of emotional speech was modulated by an accompanying visual context. I then discuss these results in terms of their implication for how we conceive of the auditory processing stream....

  14. Sound for Health

    CERN Multimedia

    CERN. Geneva

    2016-01-01

From astronomy to biomedical sciences: music and sound as tools for scientific investigation. Music and science are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge. Science and technology have revolutionised the way artists work, interact, and create. The impact of innovative materials, new communication media, more powerful computers, and faster networks on the creative process is evident: we can all become artists in the digital era. What is less known is that the arts, and music in particular, are having a profound impact on the way scientists operate and think. From the early experiments by Kepler to modern data sonification applications in medicine, sound and music are playing an increasingly crucial role in supporting science and driving innovation. In this talk, Dr. Domenico Vicinanza will highlight the complementarity and the natural synergy between music and science, with specific reference to biomedical sciences. Dr. Vicinanza will take t...

  15. Sound in Ergonomics

    Directory of Open Access Journals (Sweden)

    Jebreil Seraji

    1999-03-01

Full Text Available The word "Ergonomics" is composed of two separate parts, "ergo" and "nomos", and denotes human factors engineering. Indeed, ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data, and methods to design in order to optimize human well-being and overall system performance. It draws on different sciences, such as anatomy and physiology, anthropometry, engineering, psychology, biophysics, and biochemistry, for different ergonomic purposes. Sound, when it becomes noise pollution, can disturb this balance in human life. Industrial noise caused by factories, traffic jams, media, and modern human activity can affect the health of society. Here we discuss sound from an ergonomic point of view.

  16. Pitch Based Sound Classification

    DEFF Research Database (Denmark)

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

A sound classification model is presented that can classify signals into music, noise, and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a softmax output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
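The harmonic product spectrum estimates pitch by compressing the magnitude spectrum by integer factors and multiplying, so that the harmonics of a periodic sound all line up on the fundamental's bin. A minimal sketch of the technique (a generic illustration, not the authors' implementation; the parameters are assumptions):

```python
import numpy as np

def hps_pitch(signal, fs, n_harmonics=3):
    """Estimate the fundamental frequency (Hz) of `signal` via the
    harmonic product spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        dec = spectrum[::h]                   # spectrum compressed by factor h
        hps[:len(dec)] *= dec                 # harmonics align on the f0 bin
    hps = hps[:len(spectrum) // n_harmonics]  # keep bins where all factors exist
    peak = np.argmax(hps[1:]) + 1             # skip the DC bin
    return peak * fs / len(signal)            # bin index -> frequency
```

For a harmonic-rich signal the product peaks sharply at the fundamental, while for noise no bin dominates; the sharpness of that peak is the kind of pitch-error measure a classifier can use to separate music and speech from noise.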

  17. Sound source location in cavitating tip vortices

    International Nuclear Information System (INIS)

    Higuchi, H.; Taghavi, R.; Arndt, R.E.A.

    1985-01-01

Utilizing an array of three hydrophones, individual cavitation bursts in a tip vortex could be located. Theoretically, four hydrophones are necessary, so the data from three hydrophones are supplemented with photographic observation of the cavitating tip vortex. The cavitation sound sources are found to be localized to within one base chord length of the hydrofoil tip. This appears to correspond to the region of initial tip vortex roll-up. A more extensive study with a four-sensor array is now in progress.
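Hydrophone-array localization of this kind rests on arrival-time differences: each sensor pair yields one delay, which (times the speed of sound) constrains the source to a hyperboloid, and fixing a point in 3-D needs the constraints from four sensors, which is why three hydrophones alone are theoretically insufficient. The delay for one pair is commonly read off the peak of the cross-correlation; a minimal sketch (a generic illustration, not the study's processing chain):

```python
import numpy as np

def estimate_delay(sig_a, sig_b, fs):
    """Arrival-time difference (s) of sig_b relative to sig_a, taken
    from the peak of their full cross-correlation."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)  # offset from the zero-lag index
    return lag / fs
```

A measured delay of d seconds for a pair places the source on the surface where the path-length difference equals d times the speed of sound in water (roughly 1480 m/s).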

  18. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion.

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally "happy") pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers "trade off" cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music, widely recognized for its artistic significance, complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.
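The reported cue sizes are easy to put in concrete terms. As a quick arithmetic check (equal temperament assumed):

```python
# A major second is two semitones, i.e., a frequency ratio of 2**(2/12);
# the tempo difference is the reported 29%.
pitch_ratio = 2 ** (2 / 12)   # ~1.122: major-key pieces ~12% higher in frequency
tempo_ratio = 1.29            # major-key pieces ~29% faster
print(round(pitch_ratio, 3))
```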

  19. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Directory of Open Access Journals (Sweden)

    Michael Schutz

    2017-11-01

    Full Text Available Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech.

  20. Acoustic Constraints and Musical Consequences: Exploring Composers' Use of Cues for Musical Emotion

    Science.gov (United States)

    Schutz, Michael

    2017-01-01

    Emotional communication in music is based in part on the use of pitch and timing, two cues effective in emotional speech. Corpus analyses of natural speech illustrate that happy utterances tend to be higher and faster than sad. Although manipulations altering melodies show that passages changed to be higher and faster sound happier, corpus analyses of unaltered music paralleling those of natural speech have proven challenging. This partly reflects the importance of modality (i.e., major/minor), a powerful musical cue whose use is decidedly imbalanced in Western music. This imbalance poses challenges for creating musical corpora analogous to existing speech corpora for purposes of analyzing emotion. However, a novel examination of music by Bach and Chopin balanced in modality illustrates that, consistent with predictions from speech, their major key (nominally “happy”) pieces are approximately a major second higher and 29% faster than their minor key pieces (Poon and Schutz, 2015). Although this provides useful evidence for parallels in use of emotional cues between these domains, it raises questions about how composers “trade off” cue differentiation in music, suggesting interesting new potential research directions. This Focused Review places those results in a broader context, highlighting their connections with previous work on the natural use of cues for musical emotion. Together, these observational findings based on unaltered music—widely recognized for its artistic significance—complement previous experimental work systematically manipulating specific parameters. In doing so, they also provide a useful musical counterpart to fruitful studies of the acoustic cues for emotion found in natural speech. PMID:29249997

  1. Airspace: Antarctic Sound Transmission

    OpenAIRE

    Polli, Andrea

    2009-01-01

    This paper investigates how sound transmission can contribute to the public understanding of climate change within the context of the Poles. How have such transmission-based projects developed specifically in the Arctic and Antarctic, and how do these works create alternative pathways in order to help audiences better understand climate change? The author has created the media project Sonic Antarctica from a personal experience of the Antarctic. The work combines soundscape recordings and son...

  2. Design and Calibration Tests of an Active Sound Intensity Probe

    Directory of Open Access Journals (Sweden)

    Thomas Kletschkowski

    2008-01-01

    Full Text Available The paper presents an active sound intensity probe that can be used for sound source localization in standing wave fields. The probe consists of a sound hard tube that is terminated by a loudspeaker and an integrated pair of microphones. The microphones are used to decompose the standing wave field inside the tube into its incident and reflected part. The latter is cancelled by an adaptive controller that calculates proper driving signals for the loudspeaker. If the open end of the actively controlled tube is placed close to a vibrating surface, the radiated sound intensity can be determined by measuring the cross spectral density between the two microphones. A one-dimensional free field can be realized effectively, as first experiments performed on a simplified test bed have shown. Further tests proved that a prototype of the novel sound intensity probe can be calibrated.
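The intensity measurement the probe performs rests on the classic two-microphone principle: pressure is averaged between the microphones, particle velocity comes from the finite-difference pressure gradient, and their time-averaged product gives intensity. The cross-spectral formulation used in the paper is, for a single tone, equivalent to this time-domain sketch; all numeric values below are assumptions:

```python
import numpy as np

rho, c = 1.21, 343.0          # air density (kg/m^3), speed of sound (m/s)
d = 0.012                     # microphone spacing (m), assumed
f = 500.0                     # test tone (Hz)
fs = 48000
t = np.arange(fs) / fs        # one second of signal
A = 1.0                       # pressure amplitude (Pa)
k = 2 * np.pi * f / c

p1 = A * np.sin(2 * np.pi * f * t)            # mic 1 at x = 0
p2 = A * np.sin(2 * np.pi * f * t - k * d)    # mic 2 at x = d (wave travels +x)

p = 0.5 * (p1 + p2)                           # pressure midway between the mics
u = -np.cumsum(p2 - p1) / (rho * d * fs)      # velocity from the pressure gradient
intensity = np.mean(p * u)                    # time-averaged I = <p u>

plane_wave = A**2 / (2 * rho * c)             # analytic plane-wave intensity
```

For this propagating wave the estimate matches the analytic value to within the finite-difference error; in a standing wave the active part would shrink toward zero, which is what the probe's adaptive cancellation of the reflected field is designed to avoid.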

  3. An investigation of the roles of geomagnetic and acoustic cues in whale navigation and orientation

    Science.gov (United States)

    Allen, Ann Nichole

    Many species of whales migrate annually between high-latitude feeding grounds and low-latitude breeding grounds. Yet, very little is known about how these animals navigate during these migrations. This thesis takes a first look at the roles of geomagnetic and acoustic cues in humpback whale navigation and orientation, in addition to documenting some effects of human-produced sound on beaked whales. The tracks of satellite-tagged humpback whales migrating from Hawaii to Alaska were found to have systematic deviations from the most direct route to their destination. For each whale, a migration track was modeled using only geomagnetic inclination and intensity as navigation cues. The directions in which the observed and modeled tracks deviated from the direct route were compared and found to match for 7 out of 9 tracks, which suggests that migrating humpback whales may use geomagnetic cues for navigation. Additionally, in all cases the observed tracks followed a more direct route to the destination than the modeled tracks, indicating that the whales are likely using additional navigational cues to improve their routes. There is a significant amount of sound available in the ocean to aid in navigation and orientation of a migrating whale. This research investigates the possibility that humpback whales migrating near-shore listen to sounds of snapping shrimp to detect the presence of obstacles, such as rocky islands. A visual tracking study was used, together with hydrophone recordings near a rocky island, to determine whether the whales initiated an avoidance reaction at distances that varied with the acoustic detection range of the island. No avoidance reaction was found. Propagation modeling of the snapping shrimp sounds suggested that the detection range of the island was beyond the visual limit of the survey, indicating that snapping shrimp sounds may be suited as a long-range indicator of a rocky island. Lastly, this thesis identifies a prolonged avoidance

  4. Binaural Sound Reduces Reaction Time in a Virtual Reality Search Task

    DEFF Research Database (Denmark)

    Høeg, Emil Rosenlund; Gerry, Lynda; Thomsen, Lui Albæk

    2017-01-01

    Salient features in a visual search task can direct attention and increase competency on these tasks. Simple cues, such as a color change in a salient feature, called the "pop-out effect", can increase task-solving efficiency [6]. Previous work has shown that nonspatial auditory signals temporally synched with a pop-out effect can improve reaction time in a visual search task, called the "pip and pop effect" [14]. This paper describes a within-group study on the effect of audiospatial attention in virtual reality given a 360-degree visual search. Three cue conditions were compared (no sound, stereo...

  5. Effects of self-relevant cues and cue valence on autobiographical memory specificity in dysphoria.

    Science.gov (United States)

    Matsumoto, Noboru; Mochizuki, Satoshi

    2017-04-01

    Reduced autobiographical memory specificity (rAMS) is a characteristic memory bias observed in depression. To corroborate the capture hypothesis in the CaRFAX (capture and rumination, functional avoidance, executive capacity and control) model, we investigated the effects of self-relevant cues and cue valence on rAMS using an adapted Autobiographical Memory Test conducted with a nonclinical population. Hierarchical linear modelling indicated that the main effects of depression and self-relevant cues elicited rAMS. Moreover, the three-way interaction among valence, self-relevance, and depression scores was significant. A simple slope test revealed that dysphoric participants experienced rAMS in response to highly self-relevant positive cues and low self-relevant negative cues. These results partially supported the capture hypothesis in nonclinical dysphoria. It is important to consider cue valence in future studies examining the capture hypothesis.

  6. Spatial aspects of sound quality - subjective assessment of sound reproduced by stereo and by multichannel systems

    DEFF Research Database (Denmark)

    Choisel, Sylvain

    To evaluate the fidelity with which sound reproduction systems can re-create the desired stereo image, a laser pointing technique was developed to accurately collect subjects' responses in a localization task. This method is subsequently applied in an investigation of the effects of loudspeaker directivity on the perceived direction of panned sources. The second part of the thesis addresses the identification of auditory attributes which play a role in the perception of sound reproduced by multichannel systems. Short musical excerpts were presented in mono, stereo and several multichannel formats to evoke various...

  7. 46 CFR 7.20 - Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and...

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 1 2010-10-01 2010-10-01 false Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island Sound and easterly entrance to Long Island Sound, NY. 7.20 Section 7.20... Atlantic Coast § 7.20 Nantucket Sound, Vineyard Sound, Buzzards Bay, Narragansett Bay, MA, Block Island...

  8. Working memory load and the retro-cue effect: A diffusion model account.

    Science.gov (United States)

    Shepherdson, Peter; Oberauer, Klaus; Souza, Alessandra S

    2018-02-01

    Retro-cues (i.e., cues presented between the offset of a memory array and the onset of a probe) have consistently been found to enhance performance in working memory tasks, sometimes ameliorating the deleterious effects of increased memory load. However, the mechanism by which retro-cues exert their influence remains a matter of debate. To inform this debate, we applied a hierarchical diffusion model to data from 4 change detection experiments using single item, location-specific probes (i.e., a local recognition task) with either visual or verbal memory stimuli. Results showed that retro-cues enhanced the quality of information entering the decision process, especially for visual stimuli, and decreased the time spent on nondecisional processes. Further, cues interacted with memory load primarily on nondecision time, decreasing or abolishing load effects. To explain these findings, we propose an account whereby retro-cues act primarily to reduce the time taken to access the relevant representation in memory upon probe presentation, and in addition protect cued representations from visual interference. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
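The diffusion-model decomposition described above can be illustrated with a crude simulation. This is a generic drift-diffusion sketch with invented parameter values, not the hierarchical model fitted in the paper; a retro-cue is modeled as higher drift (better evidence quality) plus shorter nondecision time:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift, ter, a=1.0, dt=0.001, n_trials=500):
    """Simulate simple diffusion-model trials; returns (mean RT, accuracy).

    drift : quality of evidence entering the decision process
    ter   : nondecision time in seconds
    a     : boundary separation; the process starts unbiased at a/2
    """
    sd = dt ** 0.5
    rts, hits = [], []
    for _ in range(n_trials):
        x, t = a / 2, 0.0
        while 0.0 < x < a:
            x += drift * dt + rng.normal(0.0, sd)   # noisy evidence accumulation
            t += dt
        rts.append(t + ter)
        hits.append(x >= a)                          # upper boundary = correct
    return float(np.mean(rts)), float(np.mean(hits))

no_cue = simulate_ddm(drift=1.0, ter=0.40)
cued = simulate_ddm(drift=2.0, ter=0.30)             # faster and more accurate
```

The two parameter changes reproduce the qualitative pattern reported: cued trials come out both faster (via nondecision time and drift) and more accurate (via drift alone).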

  9. Zebra finches can use positional and transitional cues to distinguish vocal element strings.

    Science.gov (United States)

    Chen, Jiani; Ten Cate, Carel

    2015-08-01

    Learning sequences is of great importance to humans and non-human animals. Many motor and mental actions, such as singing in birds and speech processing in humans, rely on sequential learning. At least two mechanisms are considered to be involved in such learning. The chaining theory proposes that learning of sequences relies on memorizing the transitions between adjacent items, while the positional theory suggests that learners encode the items according to their ordinal position in the sequence. Positional learning is assumed to dominate sequential learning. However, human infants exposed to a string of speech sounds can learn transitional (chaining) cues. So far, it is not clear whether birds, an increasingly important model for examining vocal processing, can do this. In this study we use a Go-Nogo design to examine whether zebra finches can use transitional cues to distinguish artificially constructed strings of song elements. Zebra finches were trained with sequences differing in transitional and positional information and next tested with novel strings sharing positional and transitional similarities with the training strings. The results show that they can attend to both transitional and positional cues and that their sequential coding strategies can be biased toward transitional cues depending on the learning context. This article is part of a Special Issue entitled: In Honor of Jerry Hogan. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Dynamic Mechanical and Nanofibrous Topological Combinatory Cues Designed for Periodontal Ligament Engineering.

    Science.gov (United States)

    Kim, Joong-Hyun; Kang, Min Sil; Eltohamy, Mohamed; Kim, Tae-Hyun; Kim, Hae-Won

    2016-01-01

    Complete reconstruction of damaged periodontal pockets, particularly regeneration of periodontal ligament (PDL), has been a significant challenge in dentistry. Tissue engineering approaches utilizing PDL stem cells and scaffolding matrices offer great opportunity here, and applying physical and mechanical cues mimicking native tissue conditions is of special importance. Here we approach the regeneration of periodontal tissues by engineering PDL cells supported on a nanofibrous scaffold under a mechanically stressed condition. PDL stem cells isolated from rats were seeded on an electrospun polycaprolactone/gelatin directionally-oriented nanofiber membrane and dynamic mechanical stress was applied to the cell/nanofiber construct, providing combined nanotopological and mechanical cues. Cells recognized the nanofiber orientation, aligning in parallel, and the mechanical stress increased the cell alignment. Importantly, the cells cultured on the oriented nanofiber combined with the mechanical stress produced significantly stimulated PDL specific markers, including periostin and tenascin, with simultaneous down-regulation of osteogenesis, demonstrating the roles of topological and mechanical cues in altering phenotypic change in PDL cells. Tissue compatibility of the tissue-engineered constructs was confirmed in rat subcutaneous sites. Furthermore, in vivo regeneration of PDL and alveolar bone tissues was examined under the rat premaxillary periodontal defect models. The cell/nanofiber constructs engineered under mechanical stress showed sound integration into tissue defects and the regenerated bone volume and area were significantly improved. This study provides an effective tissue engineering approach for periodontal regeneration: culturing PDL stem cells with combinatory cues of oriented nanotopology and dynamic mechanical stretch.

  11. Role of Speaker Cues in Attention Inference

    OpenAIRE

    Jin Joo Lee; Cynthia Breazeal; David DeSteno

    2017-01-01

    Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference...

  12. The challenge of localizing vehicle backup alarms: Effects of passive and electronic hearing protectors, ambient noise level, and backup alarm spectral content

    Directory of Open Access Journals (Sweden)

    Khaled A Alali

    2011-01-01

    Full Text Available A human factors experiment employed a hemi-anechoic sound field in which listeners were required to localize a vehicular backup alarm warning signal (both a standard and a frequency-augmented alarm) in 360-degrees azimuth in pink noise of 60 dBA and 90 dBA. Measures of localization performance included: (1) percentage correct localization, (2) percentage of right-left localization errors, (3) percentage of front-rear localization errors, and (4) localization absolute deviation in degrees from the alarm's actual location. In summary, the data demonstrated that, with some exceptions, normal-hearing listeners' ability to localize the backup alarm in 360-degrees azimuth did not improve when wearing augmented hearing protectors (including dichotic sound transmission earmuffs, flat attenuation earplugs, and level-dependent earplugs) as compared to when wearing conventional passive earmuffs or earplugs of the foam or flanged types. Exceptions were that in the 90 dBA pink noise, the flat attenuation earplug yielded significantly better accuracy than the polyurethane foam earplug and both the dichotic and the custom-made diotic electronic sound transmission earmuffs. However, the flat attenuation earplug showed no benefit over the standard pre-molded earplug, the arc earplug, and the passive earmuff. Confusions of front-rear alarm directions were most significant in the 90 dBA noise condition, wherein two types of triple-flanged earplugs exhibited significantly fewer front-rear confusions than either of the electronic muffs. On all measures, the diotic sound transmission earmuff resulted in the poorest localization of any of the protectors due to the fact that its single-microphone design did not enable interaural cues to be heard. Localization was consistently more degraded in the 90 dBA pink noise as compared with the relatively quiet condition of the 60 dBA pink noise. A frequency-augmented backup alarm, which incorporated 400 Hz and 4000 Hz components...

  13. Role of Speaker Cues in Attention Inference

    Directory of Open Access Journals (Sweden)

    Jin Joo Lee

    2017-10-01

    Full Text Available Current state-of-the-art approaches to emotion recognition primarily focus on modeling the nonverbal expressions of the sole individual without reference to contextual elements such as the co-presence of the partner. In this paper, we demonstrate that the accurate inference of listeners’ social-emotional state of attention depends on accounting for the nonverbal behaviors of their storytelling partner, namely their speaker cues. To gain a deeper understanding of the role of speaker cues in attention inference, we conduct investigations into real-world interactions of children (5–6 years old) storytelling with their peers. Through in-depth analysis of human–human interaction data, we first identify nonverbal speaker cues (i.e., backchannel-inviting cues) and listener responses (i.e., backchannel feedback). We then demonstrate how speaker cues can modify the interpretation of attention-related backchannels as well as serve as a means to regulate the responsiveness of listeners. We discuss the design implications of our findings toward our primary goal of developing attention recognition models for storytelling robots, and we argue that social robots can proactively use speaker cues to form more accurate inferences about the attentive state of their human partners.

  14. Spontaneous Hedonic Reactions to Social Media Cues.

    Science.gov (United States)

    van Koningsbruggen, Guido M; Hartmann, Tilo; Eden, Allison; Veling, Harm

    2017-05-01

    Why is it so difficult to resist the desire to use social media? One possibility is that frequent social media users possess strong and spontaneous hedonic reactions to social media cues, which, in turn, makes it difficult to resist social media temptations. In two studies (total N = 200), we investigated less-frequent and frequent social media users' spontaneous hedonic reactions to social media cues using the Affect Misattribution Procedure, an implicit measure of affective reactions. Results demonstrated that frequent social media users showed more favorable affective reactions in response to social media (vs. control) cues, whereas less-frequent social media users' affective reactions did not differ between social media and control cues (Studies 1 and 2). Moreover, the spontaneous hedonic reactions to social media (vs. control) cues were related to self-reported cravings to use social media and partially accounted for the link between social media use and social media cravings (Study 2). These findings suggest that frequent social media users' spontaneous hedonic reactions in response to social media cues might contribute to their difficulties in resisting desires to use social media.

  15. Mobile phone conversations, listening to music and quiet (electric) cars : are traffic sounds important for safe cycling?

    NARCIS (Netherlands)

    Stelling-Konczak, A., van Wee, G.P., Commandeur, J.J.F., & Hagenzieker, M.P.

    2017-01-01

    Listening to music or talking on the phone while cycling as well as the growing number of quiet (electric) cars on the road can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling.

  16. Mobile phone conversations, listening to music and quiet (electric) cars : Are traffic sounds important for safe cycling?

    NARCIS (Netherlands)

    Stelling-Konczak, A.; van Wee, G. P.; Commandeur, J. J.F.; Hagenzieker, M.

    2017-01-01

    Listening to music or talking on the phone while cycling as well as the growing number of quiet (electric) cars on the road can make the use of auditory cues challenging for cyclists. The present study examined to what extent and in which traffic situations traffic sounds are important for safe cycling.

  17. Active sound reduction system and method

    NARCIS (Netherlands)

    2016-01-01

    The present invention refers to an active sound reduction system and method for attenuation of sound emitted by a primary sound source, especially for attenuation of snoring sounds emitted by a human being. This system comprises a primary sound source and at least one speaker as a secondary sound source.

  18. Sound exposure during outdoor music festivals

    Directory of Open Access Journals (Sweden)

    Tron V Tronstad

    2016-01-01

    Full Text Available Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival’s duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization’s recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization’s recommendations. The results also show that front-of-house measurements reliably predict participant exposure.

  19. Sound Exposure During Outdoor Music Festivals

    Science.gov (United States)

    Tronstad, Tron V.; Gelderblom, Femke B.

    2016-01-01

    Most countries have guidelines to regulate sound exposure at concerts and music festivals. These guidelines limit the allowed sound pressure levels and the concert/festival's duration. In Norway, where there is such a guideline, it is up to the local authorities to impose the regulations. The need to prevent hearing-loss among festival participants is self-explanatory, but knowledge of the actual dose received by visitors is extremely scarce. This study looks at two Norwegian music festivals where only one was regulated by the Norwegian guideline for concerts and music festivals. At each festival the sound exposure of four participants was monitored with noise dose meters. This study compared the exposures experienced at the two festivals, and tested them against the Norwegian guideline and the World Health Organization's recommendations. Sound levels during the concerts were higher at the festival not regulated by any guideline, and levels there exceeded both the national and the World Health Organization's recommendations. The results also show that front-of-house measurements reliably predict participant exposure. PMID:27569410
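The dose comparisons above rest on energy averaging: an equivalent continuous level (LAeq) averages sound energy, not decibels, so loud intervals dominate the dose. A minimal sketch with invented hourly levels:

```python
import numpy as np

# Hypothetical per-hour levels measured by a dose meter, dB(A).
levels_db = np.array([95.0, 98.0, 102.0, 99.0, 96.0])

# Convert to energy, average, convert back to decibels.
laeq = 10 * np.log10(np.mean(10 ** (levels_db / 10)))
# laeq comes out above the arithmetic mean of 98.0 dB(A): the single
# 102 dB hour carries disproportionate energy.
```

A guideline comparison would then test this LAeq (and the exposure duration) against the applicable limit.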

  20. Broadband sound blocking in phononic crystals with rotationally symmetric inclusions.

    Science.gov (United States)

    Lee, Joong Seok; Yoo, Sungmin; Ahn, Young Kwan; Kim, Yoon Young

    2015-09-01

    This paper investigates the feasibility of broadband sound blocking with rotationally symmetric extensible inclusions introduced in phononic crystals. By varying the size of four equally shaped inclusions gradually, the phononic crystal experiences remarkable changes in its band-stop properties, such as shifting/widening of multiple Bragg bandgaps and evolution to resonance gaps. Necessary extensions of the inclusions to block sound effectively can be determined for given incident frequencies by evaluating power transmission characteristics. By arraying finite dissimilar unit cells, the resulting phononic crystal exhibits broadband sound blocking from combinational effects of multiple Bragg scattering and local resonances even with small-numbered cells.

  1. Great cormorants (Phalacrocorax carbo) can detect auditory cues while diving

    DEFF Research Database (Denmark)

    Hansen, Kirstin Anderson; Maxwell, Alyssa; Siebert, Ursula

    2017-01-01

    In-air hearing in birds has been thoroughly investigated. Sound provides birds with auditory information for species and individual recognition from their complex vocalizations, as well as cues while foraging and for avoiding predators. Some 10% of existing species of birds obtain their food under the water surface. Whether some of these birds make use of acoustic cues while underwater is unknown. An interesting species in this respect is the great cormorant (Phalacrocorax carbo), being one of the most effective marine predators and relying on the aquatic environment for food year round. Here, its underwater hearing abilities were investigated using psychophysics, where the bird learned to detect the presence or absence of a tone while submerged. The greatest sensitivity was found at 2 kHz, with an underwater hearing threshold of 71 dB re 1 μPa rms. The great cormorant is better at hearing underwater...

  2. Reproduction of nearby sources by imposing true interaural differences on a sound field control approach

    DEFF Research Database (Denmark)

    Badajoz, Javier; Chang, Ji-ho; Agerkvist, Finn T.

    2015-01-01

    In anechoic conditions, the Interaural Level Difference (ILD) is the most significant auditory cue to judge the distance to a sound source located within 1 m of the listener's head. This is due to the unique characteristics of a point source in its near field, which result in exceptionally high, distance-dependent ILDs. When reproducing the sound field of sources located near the head with line or circular arrays of loudspeakers, the reproduced ILDs are generally lower than expected, due to physical limitations. This study presents an approach that combines a sound field reproduction method, known as Pressure Matching (PM), and a binaural control technique. While PM aims at reproducing the incident sound field, the objective of the binaural control technique is to ensure a correct reproduction of interaural differences. The combination of these two approaches gives rise to the following features: (i...
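Why nearby sources yield large ILDs follows from simple geometry: for an ideal point source, level falls as 1/r, so the ILD tracks the ratio of the two ear distances. A back-of-the-envelope sketch, with the head modeled as two points 18 cm apart (an assumption) and head shadowing, which increases ILD further, ignored:

```python
import numpy as np

ear_l = np.array([-0.09, 0.0])               # left ear position (m)
ear_r = np.array([0.09, 0.0])                # right ear position (m)

def ild_db(source):
    """ILD from 1/r spreading alone; > 0 means louder at the right ear."""
    r_l = np.linalg.norm(source - ear_l)
    r_r = np.linalg.norm(source - ear_r)
    return 20 * np.log10(r_l / r_r)

near = ild_db(np.array([0.25, 0.0]))         # 25 cm to the right: several dB
far = ild_db(np.array([2.00, 0.0]))          # same direction at 2 m: under 1 dB
```

The near-field ILD is several times the far-field value in the same direction, which is exactly the distance cue a distant loudspeaker array struggles to reproduce.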

  3. Reproduction of nearby sound sources using higher-order ambisonics with practical loudspeaker arrays

    DEFF Research Database (Denmark)

    Favrot, Sylvain Emmanuel; Buchholz, Jörg

    2012-01-01

    In order to reproduce nearby sound sources with distant loudspeakers to a single listener, the near field compensated (NFC) method for higher-order Ambisonics (HOA) has been previously proposed. In practical realization, this method requires the use of regularization functions. This study analyzes the impact of two existing and a newly proposed regularization function on the reproduced sound fields and on the main auditory cue for nearby sound sources outside the median plane, i.e., low-frequency interaural level differences (ILDs). The proposed regularization function led to a better reproduction of point source sound fields compared to existing regularization functions for NFC-HOA. Measurements in realistic playback environments showed that, for very close sources, significant ILDs for frequencies above about 250 Hz can be reproduced with NFC-HOA and the proposed regularization function whereas...

  4. Magnetospheric radio sounding

    International Nuclear Information System (INIS)

    Ondoh, Tadanori; Nakamura, Yoshikatsu; Koseki, Teruo; Watanabe, Sigeaki; Murakami, Toshimitsu

    1977-01-01

    Radio sounding of the plasmapause from a geostationary satellite has been investigated to observe time variations of the plasmapause structure and effects of the plasma convection. In the equatorial plane, the plasmapause is located, on average, at 4 R_E (R_E: Earth radius), and the plasma density drops outwards from 10²-10³/cm³ to 1-10/cm³ over a plasmapause width of about 600 km. Plasmagrams, showing the relation between virtual range and sounding frequency, are computed by ray tracing of LF-VLF waves transmitted from a geostationary satellite, using model distributions of the electron density in the vicinity of the plasmapause. The general features of the plasmagrams are similar to topside ionograms. The plasmagram has no penetration frequency such as f₀F₂, but its virtual range increases rapidly with frequency above 100 kHz, since the distance between the satellite and the wave reflection point increases rapidly with increasing electron density inside the plasmapause. The plasmapause sounder on a geostationary satellite has been designed by taking account of an average propagation distance of 2 x 2.6 R_E between the satellite (6.6 R_E) and the plasmapause (4.0 R_E), background noise, range resolution, power consumption, and a receiver S/N of 10 dB. The 13-bit Barker-coded pulses of 0.5 ms baud length should be transmitted parallel to the orbital plane at frequencies of 10 kHz-2 MHz at a pulse interval of 0.5 s. Transmitter peak powers of 70 W and 700 W are required, respectively, in geomagnetically quiet and disturbed (strong nonthermal continuum emissions) conditions for a 400 m cylindrical dipole of 1.2 cm diameter on the geostationary satellite. This technique will open a new area of radio sounding in the magnetosphere. (auth.)
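A quick timing check shows how the quoted figures fit together (the 2.6 R_E one-way distance and the 0.5 s pulse interval are from the abstract; the Earth radius value is a standard assumption):

```python
R_E = 6371e3          # mean Earth radius in m (assumed standard value)
c = 299_792_458.0     # speed of light in m/s

one_way = 2.6 * R_E               # satellite (6.6 R_E) to plasmapause (4.0 R_E)
round_trip_s = 2 * one_way / c    # echo delay for a sounder pulse

# ~0.11 s: comfortably inside the 0.5 s pulse interval,
# so each echo returns well before the next pulse is sent
print(round(round_trip_s, 3))
```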

  5. The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds

    Science.gov (United States)

    Młynarski, Wiktor

    2015-01-01

    In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match well the tuning characteristics of neurons in the mammalian auditory cortex. This study connects neuronal coding of the auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding. PMID:25996373

  6. An Eye Tracking Comparison of External Pointing Cues and Internal Continuous Cues in Learning with Complex Animations

    Science.gov (United States)

    Boucheix, Jean-Michel; Lowe, Richard K.

    2010-01-01

    Two experiments used eye tracking to investigate a novel cueing approach for directing learner attention to low salience, high relevance aspects of a complex animation. In the first experiment, comprehension of a piano mechanism animation containing spreading-colour cues was compared with comprehension obtained with arrow cues or no cues. Eye…

  7. Handbook for sound engineers

    CERN Document Server

    Ballou, Glen

    2015-01-01

    Handbook for Sound Engineers is the most comprehensive reference available for audio engineers, and is a must-read for all who work in audio. With contributions from many of the top professionals in the field, including Glen Ballou on interpretation systems, intercoms, assistive listening, and fundamentals and units of measurement; David Miles Huber on MIDI; Bill Whitlock on audio transformers and preamplifiers; Steve Dove on consoles, DAWs, and computers; Pat Brown on fundamentals, gain structures, and test and measurement; and Ray Rayburn on virtual systems, digital interfacing, and preamplifiers …

  8. Facing Sound - Voicing Art

    DEFF Research Database (Denmark)

    Lønstrup, Ansa

    2013-01-01

    This article is based on examples of contemporary audiovisual art, with a special focus on the Tony Oursler exhibition Face to Face at Aarhus Art Museum ARoS in Denmark in March-July 2012. My investigation involves a combination of qualitative interviews with visitors, observations of the audience's interactions with the exhibition and the artwork in the museum space, and short analyses of individual works of art based on reception aesthetics and phenomenology and inspired by newer writings on sound, voice and listening.

  9. JINGLE: THE SOUNDING SYMBOL

    Directory of Open Access Journals (Sweden)

    Bysko Maxim V.

    2013-12-01

    The article considers the role of jingles in the industrial era, from the emergence of regular radio broadcasting, sound film and television up to modern video games, audio and video podcasts, online broadcasts, and mobile communications. Jingles are examined from the point of view of the theory of symbols: a forward motion is detected in the development of jingles from social symbols (radio callsigns) to individual signs-images (ringtones). The role of technical progress in the formation of jingles as important cultural audio elements of modern digital civilization is also considered.

  10. Developmental Changes in Locating Voice and Sound in Space

    Science.gov (United States)

    Kezuka, Emiko; Amano, Sachiko; Reddy, Vasudevi

    2017-01-01

    We know little about how infants locate voice and sound in a complex multi-modal space. Using a naturalistic laboratory experiment, the present study tested 35 infants at three ages: 4 months (15 infants), 5 months (12 infants), and 7 months (8 infants). While they were engaged frontally with one experimenter, infants were presented with (a) a second experimenter’s voice and (b) castanet sounds from three different locations (left, right, and behind). There were clear increases with age in the successful localization of sounds from all directions, and a decrease in the number of repetitions required for success. Nonetheless, even at 4 months two-thirds of the infants attempted to search for the voice or sound. At all ages localizing sounds from behind was more difficult, and this ability was clearly present only at 7 months. Perseverative errors (looking at the last location) were present at all ages and appeared to be task specific (present only in the 7-month-olds for the behind location). Spontaneous attention shifts by the infants between the two experimenters, evident at 7 months, suggest early evidence for infant initiation of triadic attentional engagements. There was no advantage found for voice over castanet sounds in this study. Auditory localization is a complex and contextual process emerging gradually in the first half of the first year. PMID:28979220

  11. The Encoding of Sound Source Elevation in the Human Auditory Cortex.

    Science.gov (United States)

    Trapeau, Régis; Schönwiesner, Marc

    2018-03-28

    Spatial hearing is a crucial capacity of the auditory system. While the encoding of horizontal sound direction has been extensively studied, very little is known about the representation of vertical sound direction in the auditory cortex. Using high-resolution fMRI, we measured voxelwise sound elevation tuning curves in human auditory cortex and show that sound elevation is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. We changed the ear shape of participants (male and female) with silicone molds for several days. This manipulation reduced or abolished the ability to discriminate sound elevation and flattened cortical tuning curves. Tuning curves recovered their original shape as participants adapted to the modified ears and regained elevation perception over time. These findings suggest that the elevation tuning observed in low-level auditory cortex did not arise from the physical features of the stimuli but is contingent on experience with spectral cues and covaries with the change in perception. One explanation for this observation may be that the tuning in low-level auditory cortex underlies the subjective perception of sound elevation. SIGNIFICANCE STATEMENT This study addresses two fundamental questions about the brain representation of sensory stimuli: how the vertical spatial axis of auditory space is represented in the auditory cortex and whether low-level sensory cortex represents physical stimulus features or subjective perceptual attributes. Using high-resolution fMRI, we show that vertical sound direction is represented by broad tuning functions preferring lower elevations as well as secondary narrow tuning functions preferring individual elevation directions. In addition, we demonstrate that the shape of these tuning functions is contingent on experience with spectral cues and covaries with the change in perception, which may indicate that the …

  12. Restrictions of frequent frames as cues to categories: the case of Dutch

    NARCIS (Netherlands)

    Erkelens, M.A.; Chan, H.; Kapia, E.; Jacob, H.

    2008-01-01

    Why do Dutch 12-month-old infants not use frequent frames in early categorization? Mintz (2003) proposes that very local distributional contexts of words in the input, so-called 'frequent frames', function as reliable cues for categories corresponding to the adult verb and noun categories. He shows that …

  13. Improving Robustness against Environmental Sounds for Directing Attention of Social Robots

    DEFF Research Database (Denmark)

    Thomsen, Nicolai Bæk; Tan, Zheng-Hua; Lindberg, Børge

    2015-01-01

    This paper presents a multi-modal system for finding out where to direct the attention of a social robot in a dialog scenario, which is robust against environmental sounds (door slamming, phone ringing, etc.) and short speech segments. The method is based on combining voice activity detection (VAD) and sound source localization (SSL), and furthermore applies post-processing to SSL to filter out short sounds. The system is tested against a baseline system in four different real-world experiments, where different sounds are used as interfering sounds. The results are promising and show a clear improvement.

  14. What Does a Cue Do? Comparing Phonological and Semantic Cues for Picture Naming in Aphasia

    Science.gov (United States)

    Meteyard, Lotte; Bose, Arpita

    2018-01-01

    Purpose: Impaired naming is one of the most common symptoms in aphasia, often treated with cued picture naming paradigms. It has been argued that semantic cues facilitate the reliable categorization of the picture, and phonological cues facilitate the retrieval of target phonology. To test these hypotheses, we compared the effectiveness of…

  15. Cue-reactors: individual differences in cue-induced craving after food or smoking abstinence.

    Directory of Open Access Journals (Sweden)

    Stephen V Mahler

    BACKGROUND: Pavlovian conditioning plays a critical role in both drug addiction and binge eating. Recent animal research suggests that certain individuals are highly sensitive to conditioned cues, whether they signal food or drugs. Are certain humans also more reactive to both food and drug cues? METHODS: We examined cue-induced craving for both cigarettes and food in the same individuals (n = 15 adult smokers). Subjects viewed smoking-related or food-related images after abstaining from either smoking or eating. RESULTS: Certain individuals reported strong cue-induced craving after both smoking and food cues. That is, subjects who reported strong cue-induced craving for cigarettes also rated cue-induced food craving as stronger. CONCLUSIONS: In humans, as in nonhumans, there may be a "cue-reactive" phenotype, consisting of individuals who are highly sensitive to conditioned stimuli. This finding extends recent reports from nonhuman studies. Further understanding this subgroup of smokers may allow clinicians to individually tailor therapies for smoking cessation.

  16. Sound Velocity in Soap Foams

    International Nuclear Information System (INIS)

    Wu Gong-Tao; Lü Yong-Jun; Liu Peng-Fei; Li Yi-Ning; Shi Qing-Fan

    2012-01-01

    The velocity of sound in soap foams at high gas volume fractions is experimentally studied by using the time-difference method. It is found that the sound velocity increases with increasing bubble diameter and asymptotically approaches the value in air when the diameter is larger than 12.5 mm. We propose a simple theoretical model for sound propagation in a disordered foam. In this model, the attenuation of a sound wave due to scattering at the bubble walls is equivalently described as the effect of an additional path length. This simplification reasonably reproduces the sound velocity in foams, and the predicted results are in good agreement with the experiments. Further measurements indicate that increasing the frequency markedly slows down the sound velocity, whereas the velocity does not display a strong dependence on the solution concentration.
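One way to encode the additional-length picture described above (the functional form and the wall-crossing estimate are illustrative assumptions, not the paper's fitted model): if sound crossing a foam of thickness L traverses roughly L/d bubble walls of diameter d, and each wall contributes an extra effective path delta, the measured velocity becomes c_air / (1 + delta/d), which rises with bubble diameter and asymptotically approaches the free-air value:

```python
C_AIR = 343.0  # speed of sound in air, m/s (room temperature)

def foam_sound_velocity(d_mm, delta_mm=1.5):
    """Toy additional-length model: each of the ~L/d bubble walls adds
    an effective extra path delta, so velocity = c_air / (1 + delta/d).
    delta_mm is a hypothetical fitting parameter, not from the paper."""
    return C_AIR / (1 + delta_mm / d_mm)

# Velocity rises with bubble diameter and approaches the free-air value
for d in (2, 5, 12.5, 50):
    print(f"{d} mm: {foam_sound_velocity(d):.1f} m/s")
```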

  17. Sounds like Team Spirit

    Science.gov (United States)

    Hoffman, Edward

    2002-01-01

    I recently accompanied my son Dan to one of his guitar lessons. As I sat in a separate room, I focused on the music he was playing and the beautiful, robust sound that comes from a well-played guitar. Later that night, I woke up around 3 am. I tend to have my best thoughts at this hour. The trouble is I usually roll over and fall back asleep. This time I was still awake an hour later, so I got up and jotted some notes down in my study. I was thinking about the pure, honest sound of a well-played instrument. From there my mind wandered into the realm of high-performance teams and successful projects. (I know this sounds weird, but this is the sort of thing I think about at 3 am. Maybe you have your own weird thoughts around that time.) Consider a team in relation to music. It seems to me that a crack team can achieve a beautiful, perfect unity in the same way that a band of brilliant musicians can when they're in harmony with one another. With more than a little satisfaction I have to admit, I started to think about the great work performed for you by the Knowledge Sharing team, including this magazine you are reading. Over the past two years I personally have received some of my greatest pleasures as the APPL Director from the Knowledge Sharing activities - the Masters Forums, NASA Center visits, ASK Magazine. The Knowledge Sharing team expresses such passion for their work, just like great musicians convey their passion in the music they play. In the case of Knowledge Sharing, there are many factors that have made this so enjoyable (and hopefully worthwhile for NASA). Three ingredients come to mind -- ingredients that have produced a signature sound. First, through the crazy, passionate playing of Alex Laufer, Michelle Collins, Denise Lee, and Todd Post, I always know that something startling and original is going to come out of their activities. This team has consistently done things that are unique and innovative. For me, best of all is that they are always

  18. Cue integration vs. exemplar-based reasoning in multi-attribute decisions from memory: A matter of cue representation

    OpenAIRE

    Arndt Broeder; Ben R. Newell; Christine Platzer

    2010-01-01

    Inferences about target variables can be achieved by deliberate integration of probabilistic cues or by retrieving similar cue patterns (exemplars) from memory. In tasks with cue information presented in on-screen displays, rule-based strategies tend to dominate unless the abstraction of cue-target relations is unfeasible. This dominance has also been demonstrated, surprisingly, in experiments that demanded the retrieval of cue values from memory (M. Persson & J. Rieskamp, 2009). In th...

  19. Mercury in Long Island Sound sediments

    Science.gov (United States)

    Varekamp, J.C.; Buchholtz ten Brink, Marilyn R.; Mecray, E.I.; Kreulen, B.

    2000-01-01

    Mercury (Hg) concentrations were measured in 394 surface and core samples from Long Island Sound (LIS). The surface sediment Hg concentration data show a wide spread, reaching 600 ppb Hg in westernmost LIS. Part of the observed range is related to variations in the bottom sedimentary environments, with higher Hg concentrations in the muddy depositional areas of central and western LIS. A strong residual trend of higher Hg values to the west remains when the data are normalized to grain size. Relationships between a tracer for sewage effluents (C. perfringens) and Hg concentrations indicate that between 0 and 50% of the Hg is derived from sewage sources for most samples from the western and central basins. A higher percentage of sewage-derived Hg is found in samples from the westernmost section of LIS and in some local spots near urban centers. The remainder of the Hg is carried into the Sound with contaminated sediments from the watersheds, and a small fraction enters the Sound as in situ atmospheric deposition. The Hg-depth profiles of several cores have well-defined contamination profiles that extend to pre-industrial background values. These data indicate that Hg levels in the Sound have increased by a factor of 5-6 over the last few centuries, but Hg levels in LIS sediments have declined in modern times by up to 30%. The concentrations of C. perfringens increased exponentially in the top core sections, which had declining Hg concentrations, suggesting a recent decline in Hg fluxes that is unrelated to sewage effluents. The observed spatial and historical trends show Hg fluxes to LIS from sewage effluents, contaminated sediment input from the Connecticut River, point source inputs of strongly contaminated sediment from the Housatonic River, variations in the abundance of Hg carrier phases such as TOC and Fe, and focusing of sediment-bound Hg in association with westward sediment transport within the Sound.

  20. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias

    2016-01-01

    In echoic conditions, sound sources are not perceived as point sources but appear to be expanded. The expansion in the horizontal dimension is referred to as apparent source width (ASW). To elicit this perception, the auditory system has access to fluctuations of binaural cues, the interaural time differences (ITDs) and interaural level differences (ILDs). … The model employs a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model's performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeaker-based experiments. A robust model prediction of ASW was achieved using a cross…
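A minimal sketch of a percentile-based binaural statistic of the kind the abstract describes, assuming nothing about the actual model beyond "inter-percentile range of short-time ILDs" (frame length and percentile choices here are illustrative): identical channels yield near-zero ILD fluctuation, while decorrelated channels, as in a diffuse, "wide" source, yield a larger spread:

```python
import numpy as np

def ild_fluctuation_width(left, right, fs, frame_ms=20, lo=10, hi=90):
    """Inter-percentile range (dB) of short-time ILDs: a crude proxy
    for the percentile-based ASW statistic sketched in the abstract."""
    n = max(1, int(fs * frame_ms / 1000))
    ilds = []
    for i in range(0, min(len(left), len(right)) - n + 1, n):
        pl = np.mean(left[i:i + n] ** 2) + 1e-12   # left frame power
        pr = np.mean(right[i:i + n] ** 2) + 1e-12  # right frame power
        ilds.append(10 * np.log10(pl / pr))
    return float(np.percentile(ilds, hi) - np.percentile(ilds, lo))

rng = np.random.default_rng(0)
mono = rng.standard_normal(48000)
narrow = ild_fluctuation_width(mono, mono, 48000)   # identical channels
wide = ild_fluctuation_width(mono, rng.standard_normal(48000), 48000)
```

Here `wide` comes out larger than `narrow`, consistent with the idea that binaural-cue fluctuation drives perceived width.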

  1. The Good, The Bad, and The Distant: Soundscape Cues for Larval Fish.

    Science.gov (United States)

    Piercy, Julius J B; Smith, David J; Codling, Edward A; Hill, Adam J; Simpson, Stephen D

    2016-01-01

    Coral reef noise is an important navigation cue for settling reef fish larvae and can thus potentially affect reef population dynamics. Recent evidence has shown that fish are able to discriminate between the soundscapes of different types of habitat (e.g., mangrove and reef). In this study, we investigated whether discernible acoustic differences were present between sites within the same coral reef system. Differences in sound intensity and transient content were found between sites, but site-dependent temporal variation was also present. We discuss the implications of these findings for settling fish larvae.

  2. Sound therapies for tinnitus management.

    Science.gov (United States)

    Jastreboff, Margaret M

    2007-01-01

    Many people with bothersome (suffering) tinnitus notice that their tinnitus changes in different acoustic surroundings: it is more intrusive in silence and less pronounced in sound-enriched environments. This observation led to the development of treatment methods for tinnitus utilizing sound. Many of these methods are still under investigation with respect to their specific protocols and effectiveness, and only some have been objectively evaluated in clinical trials. This chapter reviews therapies for tinnitus using sound stimulation.

  3. Sound [signal] noise

    DEFF Research Database (Denmark)

    Bjørnsten, Thomas

    2012-01-01

    The article discusses the intricate relationship between sound and signification through notions of noise. The emergence of new fields of sonic artistic practices has generated several questions of how to approach sound as aesthetic form and material. During the past decade, increased attention has been paid to, for instance, a category such as ‘sound art’, together with an equally strengthened interest in phenomena and concepts that fall outside the accepted aesthetic procedures and constructions of what we would traditionally term musical sound, a recurring example being ‘noise’.

  4. Musical Sound, Instruments, and Equipment

    Science.gov (United States)

    Photinos, Panos

    2017-12-01

    'Musical Sound, Instruments, and Equipment' offers a basic understanding of sound, musical instruments and music equipment, geared towards a general audience and non-science majors. The book begins with an introduction of the fundamental properties of sound waves, and the perception of the characteristics of sound. The relation between intensity and loudness, and the relation between frequency and pitch are discussed. The basics of propagation of sound waves, and the interaction of sound waves with objects and structures of various sizes are introduced. Standing waves, harmonics and resonance are explained in simple terms, using graphics that provide a visual understanding. The development is focused on musical instruments and acoustics. The construction of musical scales and the frequency relations are reviewed and applied in the description of musical instruments. The frequency spectrum of selected instruments is explored using freely available sound analysis software. Sound amplification and sound recording, including analog and digital approaches, are discussed in two separate chapters. The book concludes with a chapter on acoustics, the physical factors that affect the quality of the music experience, and practical ways to improve the acoustics at home or small recording studios. A brief technical section is provided at the end of each chapter, where the interested reader can find the relevant physics and sample calculations. These quantitative sections can be skipped without affecting the comprehension of the basic material. Questions are provided to test the reader's understanding of the material. Answers are given in the appendix.

  5. Sounding out the logo shot

    OpenAIRE

    Nicolai Jørgensgaard Graakjær

    2013-01-01

    This article focuses on how sound in combination with visuals (i.e. ‘branding by’) may affect the signifying potentials (i.e. ‘branding effect’) of products and corporate brands (i.e. ‘branding of’) during logo shots in television commercials (i.e. ‘branding through’). This particular focus adds both to the understanding of sound in television commercials and to the understanding of sound brands. The article firstly presents a typology of sounds. Secondly, this typology is applied...

  6. Transfer of memory retrieval cues in rats.

    Science.gov (United States)

    Briggs, James F; Fitz, Kelly I; Riccio, David C

    2007-06-01

    Two experiments using rats were conducted to determine whether the retrieval of a memory could be brought under the control of new contextual cues that had not been present at the time of training. In Experiment 1, rats were trained in one context and then exposed to different contextual cues immediately, 60 min, or 120 min after training. When tested in the shifted context, rats that had been exposed shortly after training treated the shifted context as if it were the original context. The control that the previously neutral context had over retrieval disappeared with longer posttraining delays, suggesting the importance of an active memory representation during exposure. Experiment 2 replicated the basic finding and demonstrated that the transfer of retrieval cues was specific to the contextual cues present during exposure. These findings with rats are consistent with findings from infant research (see, e.g., Boller & Rovee-Collier, 1992) that have shown that a neutral context can come to serve as a retrieval cue for an episode experienced elsewhere.

  7. Cueing spatial attention through timing and probability.

    Science.gov (United States)

    Girardi, Giovanna; Antonucci, Gabriella; Nico, Daniele

    2013-01-01

    Even when focused on an effortful task, we retain the ability to detect salient environmental information, and even irrelevant visual stimuli can be detected automatically. However, the extent to which unattended information affects attentional control is not fully understood. Here we provide evidence of how the brain spontaneously organizes its cognitive resources by shifting attention between a selective-attending and a stimulus-driven modality within a single task. Using a spatial cueing paradigm, we investigated the effect of cue-target asynchronies as a function of their probabilities of occurrence (i.e., relative frequency). Results show that this accessory information modulates attentional shifts. A valid spatial cue improved participants' performance compared to an invalid one only in trials in which target onset was highly predictable because of its more frequent occurrence. Conversely, cueing proved ineffective when spatial cue and target were associated according to a less frequent asynchrony. These patterns of response depended on the asynchronies' probability and not on their duration. Our findings clearly demonstrate that, through fine-grained decision-making performed trial-by-trial, the brain uses implicit information to decide whether or not to voluntarily shift spatial attention. As if following a cost-planning strategy, the cognitive effort of shifting attention in response to the cue is made only when the expected advantages are high. In a trade-off competition for cognitive resources, voluntary/automatic attending may thus be a more complex process than expected. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Sound Is Sound: Film Sound Techniques and Infrasound Data Array Processing

    Science.gov (United States)

    Perttu, A. B.; Williams, R.; Taisne, B.; Tailpied, D.

    2017-12-01

    A multidisciplinary collaboration between earth scientists and a sound designer/composer was established to explore the possibilities of audification analysis of infrasound array data. Through the process of audification of the infrasound, we began to experiment with techniques and processes borrowed from cinema to manipulate the noise content of the signal. This work posed the question: "Would the accuracy of infrasound data array processing be enhanced by employing these techniques?". Thus a new area of research was born from this collaboration, highlighting the value of such interactions and the unintended paths that can arise from them. Using a reference event database, infrasound data were processed using these new techniques and the results were compared with existing techniques to assess whether there was any improvement in the detection capability of the array. With just under one thousand volcanoes, and a high probability of eruption, Southeast Asia offers a unique opportunity to develop and test techniques for regional monitoring of volcanoes with different technologies. While these volcanoes are monitored locally (e.g. seismometer, infrasound, geodetic and geochemistry networks) and remotely (e.g. satellite and infrasound), there are challenges and limitations to the current monitoring capability. Not only is there a high fraction of cloud cover in the region, making plume observation more difficult via satellite, but there have also been examples of local monitoring networks and telemetry being destroyed early in an eruptive sequence. The success of local infrasound studies in identifying explosions at volcanoes, and calculating plume heights from these signals, has led to an interest in retrieving source parameters for the purpose of ash modeling with a regional network independent of cloud cover.

  9. Sounding the Alarm: An Introduction to Ecological Sound Art

    Directory of Open Access Journals (Sweden)

    Jonathan Gilmurray

    2016-12-01

    In recent years, a number of sound artists have begun engaging with ecological issues through their work, forming a growing movement of "ecological sound art". This paper traces its development, examines its influences, and provides examples of the artists whose work is currently defining this important and timely new field.

  10. Spectro-temporal cues enhance modulation sensitivity in cochlear implant users.

    Science.gov (United States)

    Zheng, Yi; Escabí, Monty; Litovsky, Ruth Y

    2017-08-01

    Although speech understanding is highly variable amongst cochlear implant (CI) subjects, the remarkably high speech recognition performance of many CI users is unexpected and not well understood. Numerous factors, including neural health and degradation of the spectral information in the speech signal of CIs, likely contribute to speech understanding. We studied the ability to use spectro-temporal modulations, which may be critical for speech understanding and discrimination, and hypothesized that CI users adopt a different perceptual strategy than normal-hearing (NH) individuals, whereby they rely more heavily on joint spectro-temporal cues to enhance detection of auditory cues. Modulation detection sensitivity was studied in CI users and NH subjects using broadband "ripple" stimuli that were modulated spectrally, temporally, or jointly, i.e., spectro-temporally. The spectro-temporal modulation transfer functions of CI users and NH subjects were decomposed into spectral and temporal dimensions and compared to those subjects' spectral-only and temporal-only modulation transfer functions. In CI users, the joint spectro-temporal sensitivity was better than that predicted by spectral-only and temporal-only sensitivity, indicating a heightened spectro-temporal sensitivity. Such an enhancement through the combined integration of spectral and temporal cues was not observed in NH subjects. The unique use of spectro-temporal cues by CI patients can yield benefits for use of cues that are important for speech understanding. This finding has implications for developing sound processing strategies that may rely on joint spectro-temporal modulations to improve speech comprehension of CI users, and the findings of this study may be valuable for developing clinical assessment tools to optimize CI processor performance. Copyright © 2017 Elsevier B.V. All rights reserved.
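The "ripple" stimuli mentioned above are broadband sounds whose spectral envelope drifts over time. A minimal sketch of such a spectro-temporal ripple envelope (parameter names and values are illustrative, not the study's): E(t, x) = 1 + m*sin(2*pi*(w*t + Omega*x)), where x is spectral position in octaves, w the temporal rate in Hz, and Omega the spectral density in cycles/octave:

```python
import numpy as np

def ripple_envelope(dur_s=1.0, n_t=200, n_oct=5.0, n_f=64,
                    rate_hz=4.0, density_cpo=1.0, depth=0.9):
    """Spectro-temporal ripple envelope: a sinusoidal spectral profile
    (density_cpo cycles/octave) drifting at rate_hz. rate_hz=0 gives a
    spectral-only ripple; density_cpo=0 gives a temporal-only one."""
    t = np.linspace(0, dur_s, n_t, endpoint=False)   # time axis, s
    x = np.linspace(0, n_oct, n_f, endpoint=False)   # octaves above base
    T, X = np.meshgrid(t, x, indexing="ij")
    return 1 + depth * np.sin(2 * np.pi * (rate_hz * T + density_cpo * X))

env = ripple_envelope()  # shape (time frames, frequency channels)
```

Applying such an envelope to the spectrogram of broadband noise yields the jointly modulated condition; zeroing either parameter recovers the spectral-only or temporal-only conditions.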

  11. Expectations in Culturally Unfamiliar Music: Influences of Proximal and Distal Cues and Timbral Characteristics

    Directory of Open Access Journals (Sweden)

    Catherine J Stevens

    2013-11-01

    Listeners' musical perception is influenced by cues that can be stored in short-term memory (e.g., within the same musical piece) or long-term memory (e.g., based on one's own musical culture). The present study tested how these cues (referred to as, respectively, proximal and distal cues) influence the perception of music from an unfamiliar culture. Western listeners who were naïve to Gamelan music judged completeness and coherence for newly constructed melodies in the Balinese gamelan tradition. In these melodies, we manipulated the final tone with three possibilities: the original gong tone, an in-scale tone replacement or an out-of-scale tone replacement. We also manipulated the musical timbre employed in Gamelan pieces. We hypothesized that novice listeners are sensitive to out-of-scale changes, but not in-scale changes, and that this might be influenced by the more unfamiliar timbre created by Gamelan "sister" instruments whose harmonics beat with the harmonics of the other instrument, creating a timbrally "shimmering" sound. The results showed: (1) out-of-scale endings were judged less complete than original gong and in-scale endings; (2) for melodies played with "sister" instruments, in-scale endings were judged as less complete than original endings. Furthermore, melodies using the original scale tones were judged more coherent than melodies containing few or multiple tone replacements; melodies played on single instruments were judged more coherent than the same melodies played on sister instruments. Additionally, there is some indication of within-session statistical learning, with expectations for the initially-novel materials developing during the course of the experiment. The data suggest the influence of both distal cues (e.g., previously unfamiliar timbres) and proximal cues (within the same sequence and over the experimental session) on the perception of melodies from other cultural systems based on unfamiliar tunings and scale systems.

  14. Counterbalancing in smoking cue research: a critical analysis.

    Science.gov (United States)

    Sayette, Michael A; Griffin, Kasey M; Sayers, W Michael

    2010-11-01

    Cue exposure research has been used to examine key issues in smoking research, such as predicting relapse, testing new medications, investigating the neurobiology of nicotine dependence, and examining reactivity among smokers with comorbid psychopathologies. Determining the order that cues are presented is one of the most critical steps in the design of these investigations. It is widely assumed that cue exposure studies should counterbalance the order in which smoking and control (neutral) cues are presented. This article examines the premises underlying the use of counterbalancing in experimental research, and it evaluates the degree to which counterbalancing is appropriate in smoking cue exposure studies. We reviewed the available literature on the use of counterbalancing techniques in human smoking cue exposure research. Many studies counterbalancing order of cues have not provided critical analyses to determine whether this approach was appropriate. Studies that have reported relevant data, however, suggest that order of cue presentation interacts with type of cue (smoking vs. control), which raises concerns about the utility of counterbalancing. Primarily, this concern arises from potential carryover effects, in which exposure to smoking cues affects subsequent responding to neutral cues. Cue type by order of cue interactions may compromise the utility of counterbalancing. Unfortunately, there is no obvious alternative that is optimal across studies. Strengths and limitations of several alternative designs are considered, and key questions are identified to advance understanding of the optimal conditions for conducting smoking cue exposure studies.

  16. Development of Prediction Tool for Sound Absorption and Sound Insulation for Sound Proof Properties

    OpenAIRE

    Yoshio Kurosawa; Takao Yamaguchi

    2015-01-01

    High frequency automotive interior noise above 500 Hz considerably affects automotive passenger comfort. To reduce this noise, sound insulation material is often laminated on body panels or interior trim panels. For more effective noise reduction, the sound reduction properties of this laminated structure need to be estimated. We have developed a new calculation tool that can roughly estimate the sound absorption and insulation properties of a laminate structure and handy ...

  17. Sound, memory and interruption

    DEFF Research Database (Denmark)

    Pinder, David

    2016-01-01

    This chapter considers how art can interrupt the times and spaces of urban development so they might be imagined, experienced and understood differently. It focuses on the construction of the M11 Link Road through north-east London during the 1990s that demolished hundreds of homes and displaced around a thousand people. The highway was strongly resisted and it became the site of one of the country's longest and largest anti-road struggles. The chapter addresses specifically Graeme Miller's sound walk LINKED (2003), which for more than a decade has been broadcasting memories and stories of people who were violently displaced by the road as well as those who actively sought to halt it. Attention is given to the walk's interruption of senses of the given and inevitable in two main ways. The first is in relation to the pace of the work and its deployment of slowness and arrest in a context...

  18. The sounds of science

    Science.gov (United States)

    Carlowicz, Michael

    As scientists carefully study some aspects of the ocean environment, are they unintentionally distressing others? That is a question to be answered by Robert Benson and his colleagues in the Center for Bioacoustics at Texas A&M University.With help from a 3-year, $316,000 grant from the U.S. Office of Naval Research, Benson will study how underwater noise produced by naval operations and other sources may affect marine mammals. In Benson's study, researchers will generate random sequences of low-frequency, high-intensity (180-decibel) sounds in the Gulf of Mexico, working at an approximate distance of 1 km from sperm whale herds. Using an array of hydrophones, the scientists will listen to the characteristic clicks and whistles of the sperm whales to detect changes in the animals' direction, speed, and depth, as derived from fluctuations in their calls.

  19. Sound of proteins

    DEFF Research Database (Denmark)

    2007-01-01

    In my group we work with Molecular Dynamics to model several different proteins and protein systems. We submit our modelled molecules to changes in temperature, changes in solvent composition and even external pulling forces. To analyze our simulation results we have so far used visual inspection and statistical analysis of the resulting molecular trajectories (as everybody else!). However, recently I started assigning a particular sound frequency to each amino acid in the protein, and by setting the amplitude of each frequency according to the movement amplitude we can "hear" whenever two amino acids ... example of sound file was obtained from using Steered Molecular Dynamics for stretching the neck region of the scallop myosin molecule (in rigor, PDB-id: 1SR6), in such a way as to cause a rotation of the myosin head. Myosin is the molecule responsible for producing the force during muscle contraction...

  20. Perceptions of Sexual Orientation From Minimal Cues.

    Science.gov (United States)

    Rule, Nicholas O

    2017-01-01

    People derive considerable amounts of information about each other from minimal nonverbal cues. Apart from characteristics typically regarded as obvious when encountering another person (e.g., age, race, and sex), perceivers can identify many other qualities about a person that are typically rather subtle. One such feature is sexual orientation. Here, I review the literature documenting the accurate perception of sexual orientation from nonverbal cues related to one's adornment, acoustics, actions, and appearance. In addition to chronicling studies that have demonstrated how people express and extract sexual orientation in each of these domains, I discuss some of the basic cognitive and perceptual processes that support these judgments, including how cues to sexual orientation manifest in behavioral (e.g., clothing choices) and structural (e.g., facial morphology) signals. Finally, I attend to boundary conditions in the accurate perception of sexual orientation, such as the states, traits, and group memberships that moderate individuals' ability to reliably decipher others' sexual orientation.

  1. Hunger, taste, and normative cues in predictions about food intake.

    Science.gov (United States)

    Vartanian, Lenny R; Reily, Natalie M; Spanos, Samantha; McGuirk, Lucy C; Herman, C Peter; Polivy, Janet

    2017-09-01

    Normative eating cues (portion size, social factors) have a powerful impact on people's food intake, but people often fail to acknowledge the influence of these cues, instead explaining their food intake in terms of internal (hunger) or sensory (taste) cues. This study examined whether the same biases apply when making predictions about how much food a person would eat. Participants (n = 364) read a series of vignettes describing an eating scenario and predicted how much food the target person would eat in each situation. Some scenarios consisted of a single eating cue (hunger, taste, or a normative cue) that would be expected to increase intake (e.g., high hunger) or decrease intake (e.g., a companion who eats very little). Other scenarios combined two cues that were in conflict with one another (e.g., high hunger + a companion who eats very little). In the cue-conflict scenarios involving an inhibitory internal/sensory cue (e.g., low hunger) with an augmenting normative cue (e.g., a companion who eats a lot), participants predicted a low level of food intake, suggesting a bias toward the internal/sensory cue. For scenarios involving an augmenting internal/sensory cue (e.g., high hunger) and an inhibitory normative cue (e.g., a companion who eats very little), participants predicted an intermediate level of food intake, suggesting that they were influenced by both the internal/sensory and normative cue. Overall, predictions about food intake tend to reflect a general bias toward internal/sensory cues, but also include normative cues when those cues are inhibitory. If people are systematically biased toward internal, sensory, and inhibitory cues, then they may underestimate how much food they or other people will eat in many situations, particularly when normative cues promoting eating are present. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Interference from retrieval cues in Parkinson's disease.

    Science.gov (United States)

    Crescentini, Cristiano; Marin, Dario; Del Missier, Fabio; Biasutti, Emanuele; Shallice, Tim

    2011-11-01

    Existing studies on memory interference in Parkinson's disease (PD) patients have provided mixed results, and it is unknown whether PD patients have problems in overcoming interference from retrieval cues. We investigated this issue by using a part-list cuing paradigm. In this paradigm, after the study of a list of items, the presentation of some of these items as retrieval cues hinders the recall of the remaining ones. We tested PD patients' (n = 19) and control participants' (n = 16) episodic memory in the presence and absence of part-list cues, using initial-letter probes, and following either weak or strong serial associative encoding of list items. Both PD patients and control participants showed a comparable and significant part-list cuing effect after weak associative encoding (13% vs. 12% decrease in retrieval in the part-list cuing vs. no-part-list-cuing (control) conditions in PD patients and control participants, respectively), denoting a similar effect of cue-driven interference in the two populations when a serial retrieval strategy is hard to develop. However, only PD patients showed a significant part-list cuing effect after strong associative encoding (20% vs. 5% decrease in retrieval in patients and controls, respectively). When encoding promotes the development of an effective serial retrieval strategy, the presentation of part-list cues has a specifically disruptive effect in PD patients. This indicates problems in strategic retrieval, probably related to PD patients' increased tendency to rely on external cues. Findings in control conditions suggest that less effective encoding may have contributed to PD patients' memory performance.

  3. Meninges-derived cues control axon guidance.

    Science.gov (United States)

    Suter, Tracey A C S; DeLoughery, Zachary J; Jaworski, Alexander

    2017-10-01

    The axons of developing neurons travel long distances along stereotyped pathways under the direction of extracellular cues sensed by the axonal growth cone. Guidance cues are either secreted proteins that diffuse freely or bind the extracellular matrix, or membrane-anchored proteins. Different populations of axons express distinct sets of receptors for guidance cues, which results in differential responses to specific ligands. The full repertoire of axon guidance cues and receptors and the identity of the tissues producing these cues remain to be elucidated. The meninges are connective tissue layers enveloping the vertebrate brain and spinal cord that serve to protect the central nervous system (CNS). The meninges also instruct nervous system development by regulating the generation and migration of neural progenitors, but it has not been determined whether they help guide axons to their targets. Here, we investigate a possible role for the meninges in neuronal wiring. Using mouse neural tissue explants, we show that developing spinal cord meninges produce secreted attractive and repulsive cues that can guide multiple types of axons in vitro. We find that motor and sensory neurons, which project axons across the CNS-peripheral nervous system (PNS) boundary, are attracted by meninges. Conversely, axons of both ipsi- and contralaterally projecting dorsal spinal cord interneurons are repelled by meninges. The responses of these axonal populations to the meninges are consistent with their trajectories relative to meninges in vivo, suggesting that meningeal guidance factors contribute to nervous system wiring and control which axons are able to traverse the CNS-PNS boundary. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Interactive Sound Propagation using Precomputation and Statistical Approximations

    Science.gov (United States)

    Antani, Lakulish

    Acoustic phenomena such as early reflections, diffraction, and reverberation have been shown to improve the user experience in interactive virtual environments and video games. These effects arise due to repeated interactions between sound waves and objects in the environment. In interactive applications, these effects must be simulated within a prescribed time budget. We present two complementary approaches for computing such acoustic effects in real time, with plausible variation in the sound field throughout the scene. The first approach, Precomputed Acoustic Radiance Transfer, precomputes a matrix that accounts for multiple acoustic interactions between all scene objects. The matrix is used at run time to provide sound propagation effects that vary smoothly as sources and listeners move. The second approach couples two techniques---Ambient Reverberance, and Aural Proxies---to provide approximate sound propagation effects in real time, based on only the portion of the environment immediately visible to the listener. These approaches lie at different ends of a space of interactive sound propagation techniques for modeling sound propagation effects in interactive applications. The first approach emphasizes accuracy by modeling acoustic interactions between all parts of the scene; the second approach emphasizes efficiency by only taking the local environment of the listener into account. These methods have been used to efficiently generate acoustic walkthroughs of architectural models. They have also been integrated into a modern game engine, and can enable realistic, interactive sound propagation on commodity desktop PCs.
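    The precomputed-matrix idea is analogous to classical radiosity: if a matrix T captures how much acoustic energy each surface patch passes to every other patch per reflection bounce, the infinite series of bounces collapses into a single linear solve at run time. The toy sketch below uses invented matrix values and is a radiosity-style analogy, not the dissertation's actual formulation.

```python
import numpy as np

# Illustrative per-bounce energy-transfer matrix between three surface
# patches: T[i, j] is the fraction of energy patch j passes to patch i.
T = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.3],
              [0.1, 0.3, 0.0]])

def steady_state_energy(direct):
    """Total energy E satisfies E = direct + T @ E, so the infinite
    bounce series sum(T^k @ direct) collapses to one linear solve."""
    n = T.shape[0]
    return np.linalg.solve(np.eye(n) - T, direct)

# Energy arriving at each patch when only patch 0 is directly insonified
E = steady_state_energy(np.array([1.0, 0.0, 0.0]))
```

    Because the solve depends only on T and the current source distribution, T can be precomputed once per scene and reused as sources and listeners move, which is the run-time saving the abstract describes.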

  5. Measurement of sound velocity profiles in fluids for process monitoring

    International Nuclear Information System (INIS)

    Wolf, M; Kühnicke, E; Lenz, M; Bock, M

    2012-01-01

    In ultrasonic measurements, the time of flight to the object interface is often the only information that is analysed. Conventionally it is only possible to determine distances or sound velocities if the other value is known. The current paper deals with a novel method to measure the sound propagation path length and the sound velocity in media with moving scattering particles simultaneously. Since the focal position also depends on sound velocity, it can be used as a second parameter. Via calibration curves it is possible to determine the focal position and sound velocity from the measured time of flight to the focus, which is correlated to the maximum of averaged echo signal amplitude. To move focal position along the acoustic axis, an annular array is used. This allows measuring sound velocity locally resolved without any previous knowledge of the acoustic media and without a reference reflector. In previous publications the functional efficiency of this method was shown for media with constant velocities. In this work the accuracy of these measurements is improved. Furthermore first measurements and simulations are introduced for non-homogeneous media. Therefore an experimental set-up was created to generate a linear temperature gradient, which also causes a gradient of sound velocity.
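    The disambiguation step described above can be sketched as a calibration lookup: the measured time of flight to the echo-amplitude maximum (the focus) indexes a precomputed curve relating that time to sound velocity, after which distance follows from the round-trip relation d = v·t/2. The calibration numbers below are invented for illustration, not measured values from the paper.

```python
import numpy as np

# Hypothetical calibration curve: for each candidate sound velocity,
# the time of flight at which the annular array's focus produces the
# maximum averaged echo amplitude (illustrative values only).
cal_velocity = np.array([1400.0, 1450.0, 1500.0, 1550.0, 1600.0])  # m/s
cal_tof_us   = np.array([42.9, 41.4, 40.0, 38.7, 37.5])            # microseconds

def velocity_from_focus_tof(tof_us):
    """Look up sound velocity from the measured time of flight to the
    focus, interpolating along the calibration curve."""
    order = np.argsort(cal_tof_us)  # np.interp needs increasing x
    return float(np.interp(tof_us, cal_tof_us[order], cal_velocity[order]))

v = velocity_from_focus_tof(40.0)        # -> sound velocity in m/s
depth_mm = v * 40.0e-6 / 2 * 1000        # round trip: d = v * t / 2
```

    This is why neither the velocity nor the distance needs to be known in advance: the focal position supplies the second observable that a plain pulse-echo measurement lacks.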

  6. How male sound pressure level influences phonotaxis in virgin female Jamaican field crickets (Gryllus assimilis)

    Directory of Open Access Journals (Sweden)

    Karen Pacheco

    2014-06-01

    Understanding female mate preference is important for determining the strength and direction of sexual trait evolution. The sound pressure level (SPL) acoustic signalers use is often an important predictor of mating success because higher sound pressure levels are detectable at greater distances. If females are more attracted to signals produced at higher sound pressure levels, then the potential fitness impacts of signalling at higher sound pressure levels should be elevated beyond what would be expected from detection distance alone. Here we manipulated the sound pressure level of cricket mate attraction signals to determine how female phonotaxis was influenced. We examined female phonotaxis using two common experimental methods: spherical treadmills and open arenas. Both methods showed similar results, with females exhibiting greatest phonotaxis towards loud sound pressure levels relative to the standard signal (69 vs. 60 dB SPL) but showing reduced phonotaxis towards very loud sound pressure level signals relative to the standard (77 vs. 60 dB SPL). Reduced female phonotaxis towards supernormal stimuli may signify an acoustic startle response, an absence of other required sensory cues, or perceived increases in predation risk.
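    The link between SPL and detection distance that motivates the study can be illustrated with simple spherical spreading (6 dB loss per doubling of distance), ignoring excess attenuation; this is a textbook approximation, not the paper's model.

```python
import math

def spl_at_distance(spl_ref_db, r_ref_m, r_m):
    """SPL at range r_m given a reference level at r_ref_m,
    assuming spherical spreading (6 dB per distance doubling)."""
    return spl_ref_db - 20.0 * math.log10(r_m / r_ref_m)

def max_range(spl_ref_db, r_ref_m, threshold_db):
    """Distance at which the signal falls to a hearing threshold."""
    return r_ref_m * 10 ** ((spl_ref_db - threshold_db) / 20.0)
```

    Under this approximation a 9 dB louder call (69 vs. 60 dB SPL) is audible out to roughly 2.8 times the distance, which is why fitness effects beyond detection distance alone are the interesting question.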

  7. Root phonotropism: Early signalling events following sound perception in Arabidopsis roots.

    Science.gov (United States)

    Rodrigo-Moreno, Ana; Bazihizina, Nadia; Azzarello, Elisa; Masi, Elisa; Tran, Daniel; Bouteau, François; Baluska, Frantisek; Mancuso, Stefano

    2017-11-01

    Sound is a fundamental form of energy and it has been suggested that plants can make use of acoustic cues to obtain information regarding their environments and alter and fine-tune their growth and development. Despite an increasing body of evidence indicating that it can influence plant growth and physiology, many questions concerning the effect of sound waves on plant growth and the underlying signalling mechanisms remain unanswered. Here we show that in Arabidopsis thaliana, exposure to sound waves (200 Hz) for 2 weeks induced positive phonotropism in roots, which grew towards the sound source. We found that sound waves very quickly (within minutes) triggered an increase in cytosolic Ca2+, possibly mediated by an influx through the plasma membrane and a release from internal stores. Sound waves likewise elicited rapid reactive oxygen species (ROS) production and K+ efflux. Taken together, these results suggest that changes in ion fluxes (Ca2+ and K+) and an increase in superoxide production are involved in sound perception in plants, as previously established in animals. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Counterconditioning reduces cue-induced craving and actual cue-elicited consumption.

    Science.gov (United States)

    Van Gucht, Dinska; Baeyens, Frank; Vansteenwegen, Debora; Hermans, Dirk; Beckers, Tom

    2010-10-01

    Cue-induced craving is not easily reduced by an extinction or exposure procedure and may constitute an important route toward relapse in addictive behavior after treatment. In the present study, we investigated the effectiveness of counterconditioning as an alternative procedure to reduce cue-induced craving, in a nonclinical population. We found that a cue, initially paired with chocolate consumption, did not cease to elicit craving for chocolate after extinction (repeated presentation of the cue without chocolate consumption), but did so after counterconditioning (repeated pairing of the cue with consumption of a highly disliked liquid, Polysorbate 20). This effect persisted after 1 week. Counterconditioning moreover was more effective than extinction in disrupting reported expectancy to get to eat chocolate, and also appeared to be more effective in reducing actual cue-elicited chocolate consumption. These results suggest that counterconditioning may be more promising than cue exposure for the prevention of relapse in addictive behavior. (PsycINFO Database Record (c) 2010 APA, all rights reserved).

  9. Audibility of individual reflections in a complete sound field, III

    DEFF Research Database (Denmark)

    Bech, Søren

    1996-01-01

    This paper reports on the influence of individual reflections on the auditory localization of a loudspeaker in a small room. The sound field produced by a single loudspeaker positioned in a normal listening room has been simulated using an electroacoustic setup. The setup models the direct sound ...-independent absorption coefficients of the room surfaces, and (2) a loudspeaker with directivity according to a standard two-way system and absorption coefficients according to real materials. The results have shown that subjects can distinguish reliably between timbre and localization, that the spectrum level above 2 k...

  10. Designing a Sound Reducing Wall

    Science.gov (United States)

    Erk, Kendra; Lumkes, John; Shambach, Jill; Braile, Larry; Brickler, Anne; Matthys, Anna

    2015-01-01

    Acoustical engineers use their knowledge of sound to design quiet environments (e.g., classrooms and libraries) as well as to design environments that are supposed to be loud (e.g., concert halls and football stadiums). They also design sound barriers, such as the walls along busy roadways that decrease the traffic noise heard by people in…

  11. Thinking The City Through Sound

    DEFF Research Database (Denmark)

    Kreutzfeldt, Jacob

    2011-01-01

    In Acoustic Territories: Sound Culture and Everyday Life, Brandon LaBelle sets out to chart an urban topology through sound. Working his way through six acoustic territories (underground, home, sidewalk, street, shopping mall and sky/radio), LaBelle investigates tensions and potentials inherent in mo...

  12. The Textile Form of Sound

    DEFF Research Database (Denmark)

    Bendixen, Cecilie

    2010-01-01

    The aim of this article is to shed light on a small part of the research taking place in the textile field. The article describes an ongoing PhD research project on textiles and sound and outlines the project's two main questions: how sound can be shaped by textiles and conversely how textiles can...

  13. Basic semantics of product sounds

    NARCIS (Netherlands)

    Özcan Vieira, E.; Van Egmond, R.

    2012-01-01

    Product experience is a result of sensory and semantic experiences with product properties. In this paper, we focus on the semantic attributes of product sounds and explore the basic components of product sound related semantics using a semantic differential paradigm and factor analysis. With two

  14. Measuring the 'complexity' of sound

    Indian Academy of Sciences (India)

    ...cate that specialized regions of the brain analyse different types of sounds [1]. Music, ... The left panel of figure 1 shows examples of sound-pressure waveforms from the nat... which is shown in the right panels in the spectrographic representation using a 45 Hz ... Plot of SFM(t) vs. time for different environmental sounds.
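    The SFM(t) plotted in the fragment above plausibly denotes the spectral flatness measure: the ratio of the geometric to the arithmetic mean of the power spectrum, near 1 for noise-like sound and near 0 for tonal sound. A minimal sketch under that assumption (frame length and hop are illustrative choices):

```python
import numpy as np

def spectral_flatness(frame, eps=1e-12):
    """Spectral flatness: geometric mean / arithmetic mean of the
    power spectrum. Near 1 for noise-like frames, near 0 for tonal."""
    p = np.abs(np.fft.rfft(frame)) ** 2 + eps  # eps avoids log(0)
    return np.exp(np.mean(np.log(p))) / np.mean(p)

def sfm_over_time(signal, frame_len=1024, hop=512):
    """SFM(t): flatness of successive overlapping frames."""
    starts = range(0, len(signal) - frame_len + 1, hop)
    return np.array([spectral_flatness(signal[s:s + frame_len])
                     for s in starts])
```

    Plotting `sfm_over_time` against time reproduces the kind of SFM(t)-vs.-time comparison across environmental sounds that the fragment refers to.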

  15. Effects of cue-exposure treatment on neural cue reactivity in alcohol dependence: a randomized trial.

    Science.gov (United States)

    Vollstädt-Klein, Sabine; Loeber, Sabine; Kirsch, Martina; Bach, Patrick; Richter, Anne; Bühler, Mira; von der Goltz, Christoph; Hermann, Derik; Mann, Karl; Kiefer, Falk

    2011-06-01

    In alcohol-dependent patients, alcohol-associated cues elicit brain activation in mesocorticolimbic networks involved in relapse mechanisms. Cue-exposure based extinction training (CET) has been shown to be efficacious in the treatment of alcoholism; however, it has remained unexplored whether CET mediates its therapeutic effects via changes of activity in mesolimbic networks in response to alcohol cues. In this study, we assessed CET treatment effects on cue-induced responses using functional magnetic resonance imaging (fMRI). In a randomized controlled trial, abstinent alcohol-dependent patients were randomly assigned to a CET group (n = 15) or a control group (n = 15). All patients underwent an extended detoxification treatment comprising medically supervised detoxification, health education, and supportive therapy. The CET patients additionally received nine CET sessions over 3 weeks, exposing the patient to his/her preferred alcoholic beverage. Cue-induced fMRI activation to alcohol cues was measured at pretreatment and posttreatment. Compared with pretreatment, the fMRI cue-reactivity reduction was greater in the CET group than in the control group, especially in the anterior cingulate gyrus and the insula, as well as in limbic and frontal regions. Before treatment, increased cue-induced fMRI activation was found in limbic and reward-related brain regions and in visual areas. After treatment, the CET group showed less activation than the control group in the left ventral striatum. The study provides the first evidence that an exposure-based psychotherapeutic intervention in the treatment of alcoholism acts on brain areas relevant to addiction memory and attentional focus on alcohol-associated cues, and affects mesocorticolimbic reward pathways suggested to be pathophysiologically involved in addiction. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  16. Heart sound segmentation of pediatric auscultations using wavelet analysis.

    Science.gov (United States)

    Castro, Ana; Vinhoza, Tiago T V; Mattos, Sandra S; Coimbra, Miguel T

    2013-01-01

    Auscultation is widely applied in clinical activity; nonetheless, sound interpretation is dependent on clinician training and experience. Heart sound features such as spatial loudness, relative amplitude, murmurs, and localization of each component may be indicative of pathology. In this study we propose a segmentation algorithm to extract heart sound components (S1 and S2) based on their time and frequency characteristics. This algorithm takes advantage of knowledge of the heart cycle times (systolic and diastolic periods) and of the spectral characteristics of each component, through wavelet analysis. Data collected in a clinical environment and annotated by a clinician were used to assess the algorithm's performance. Heart sound components were correctly identified in 99.5% of the annotated events. S1 and S2 detection rates were 90.9% and 93.3%, respectively. The median difference between annotated and detected events was 33.9 ms.
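The pipeline the abstract describes (a smoothed time-frequency envelope followed by event picking constrained by heart-cycle timing) can be sketched roughly as below. This is not the authors' algorithm: a crude repeated Haar approximation stands in for a proper wavelet decomposition, and the synthetic signal, rates and threshold are invented for the example:

```python
import numpy as np

def haar_approximation(x: np.ndarray, levels: int = 4) -> np.ndarray:
    """Crude wavelet-style low-pass: repeated Haar approximation
    (pairwise averages), decimating by 2 at each level."""
    for _ in range(levels):
        if len(x) % 2:
            x = x[:-1]
        x = (x[::2] + x[1::2]) / np.sqrt(2)
    return x

def detect_events(x: np.ndarray, fs: float, threshold_ratio=0.5, min_gap_s=0.15):
    """Indices (at the decimated rate) where the smoothed energy envelope
    exceeds a fraction of its maximum, at least min_gap_s apart --
    a stand-in for S1/S2 candidate detection."""
    env = haar_approximation(x ** 2, levels=4)
    fs_dec = fs / 2 ** 4
    thresh = threshold_ratio * env.max()
    events, last = [], -np.inf
    for i, e in enumerate(env):
        if e > thresh and (i - last) / fs_dec > min_gap_s:
            events.append(i)
            last = i
    return events, fs_dec

# Synthetic phonocardiogram: three 0.8 s cycles, a burst for S1 and S2 each.
fs = 2000
t_burst = np.arange(int(0.05 * fs)) / fs
burst = np.hanning(len(t_burst)) * np.sin(2 * np.pi * 100 * t_burst)
pcg = np.zeros(int(3 * 0.8 * fs))
for cycle_start in (0.0, 0.8, 1.6):
    for onset in (0.0, 0.3):               # S1 at cycle start, S2 0.3 s later
        i = int((cycle_start + onset) * fs)
        pcg[i:i + len(burst)] += burst
events, fs_dec = detect_events(pcg, fs)
print(len(events))  # 6: S1 and S2 in each of the three cycles
```

The real algorithm additionally classifies each candidate as S1 or S2 using the asymmetry of systolic versus diastolic intervals; that step is omitted here.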

  17. The effect of looming and receding sounds on the perceived in-depth orientation of depth-ambiguous biological motion figures.

    Directory of Open Access Journals (Sweden)

    Ben Schouten

    BACKGROUND: The focus in research on biological motion perception has traditionally been restricted to the visual modality. Recent neurophysiological and behavioural evidence, however, supports the idea that actions are not represented merely visually but rather audiovisually. The goal of the present study was to test whether the perceived in-depth orientation of depth-ambiguous point-light walkers (plws) is affected by the presentation of looming or receding sounds synchronized with the footsteps. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1 orthographic frontal/back projections of plws were presented either without sound or with sounds whose intensity level was rising (looming), falling (receding) or stationary. Despite instructions to ignore the sounds and to only report the visually perceived in-depth orientation, plws accompanied by looming sounds were more often judged to be facing the viewer whereas plws paired with receding sounds were more often judged to be facing away from the viewer. To test whether the effects observed in Experiment 1 act at a perceptual rather than a decisional level, in Experiment 2 observers perceptually compared orthographic plws without sound or paired with either looming or receding sounds to plws without sound but with perspective cues making them objectively either facing towards or facing away from the viewer. Judging whether an orthographic plw or a plw with looming (receding) perspective cues is visually most looming becomes harder (easier) when the orthographic plw is paired with looming sounds. CONCLUSIONS/SIGNIFICANCE: The present results suggest that looming and receding sounds alter judgements of the in-depth orientation of depth-ambiguous point-light walkers. 
While looming sounds are demonstrated to act at a perceptual level and make plws look more looming, it remains a challenge for future research to clarify at what level in the processing hierarchy receding sounds

  18. The Benefits of Targeted Memory Reactivation for Consolidation in Sleep are Contingent on Memory Accuracy and Direct Cue-Memory Associations.

    Science.gov (United States)

    Cairney, Scott A; Lindsay, Shane; Sobczak, Justyna M; Paller, Ken A; Gaskell, M Gareth

    2016-05-01

    To investigate how the effects of targeted memory reactivation (TMR) are influenced by memory accuracy prior to sleep and the presence or absence of direct cue-memory associations. 30 participants associated each of 50 pictures with an unrelated word and then with a screen location in two separate tasks. During picture-location training, each picture was also presented with a semantically related sound. The sounds were therefore directly associated with the picture locations but indirectly associated with the words. During a subsequent nap, half of the sounds were replayed in slow wave sleep (SWS). The effect of TMR on memory for the picture locations (direct cue-memory associations) and picture-word pairs (indirect cue-memory associations) was then examined. TMR reduced overall memory decay for recall of picture locations. Further analyses revealed a benefit of TMR for picture locations recalled with a low degree of accuracy prior to sleep, but not those recalled with a high degree of accuracy. The benefit of TMR for low accuracy memories was predicted by time spent in SWS. There was no benefit of TMR for memory of the picture-word pairs, irrespective of memory accuracy prior to sleep. TMR provides the greatest benefit to memories recalled with a low degree of accuracy prior to sleep. The memory benefits of TMR may also be contingent on direct cue-memory associations. © 2016 Associated Professional Sleep Societies, LLC.

  19. The Aesthetic Experience of Sound

    DEFF Research Database (Denmark)

    Breinbjerg, Morten

    2005-01-01

    The use of sound in (3D) computer games basically falls in two. Sound is used as an element in the design of the set and as a narrative. As set design, sound stages the nature of the environment and brings it to life. As a narrative, it brings us information that we can choose to, or perhaps need to, react on. In an ecological understanding of hearing, our detection of audible information affords us ways of responding to our environment. In my paper I will address both these ways of using sound in relation to computer games. Since a game player is responsible for the unfolding of the game, his exploration of the virtual space laid out before him is pertinent. In this mood of exploration, sound is important and contributes heavily to the aesthetic of the experience.

  20. Controlling sound with acoustic metamaterials

    DEFF Research Database (Denmark)

    Cummer, Steven A.; Christensen, Johan; Alù, Andrea

    2016-01-01

    Acoustic metamaterials can manipulate and control sound waves in ways that are not possible in conventional materials. Metamaterials with zero, or even negative, refractive index for sound offer new possibilities for acoustic imaging and for the control of sound at subwavelength scales. The combination of transformation acoustics theory and highly anisotropic acoustic metamaterials enables precise control over the deformation of sound fields, which can be used, for example, to hide or cloak objects from incident acoustic energy. Active acoustic metamaterials use external control to create ... -scale metamaterial structures and converting laboratory experiments into useful devices. In this Review, we outline the designs and properties of materials with unusual acoustic parameters (for example, negative refractive index), discuss examples of extreme manipulation of sound and, finally, provide an overview ...

  1. Three-month-old human infants use vocal cues of body size.

    Science.gov (United States)

    Pietraszewski, David; Wertz, Annie E; Bryant, Gregory A; Wynn, Karen

    2017-06-14

    Differences in vocal fundamental (F0) and average formant (Fn) frequencies covary with body size in most terrestrial mammals, such that larger organisms tend to produce lower frequency sounds than smaller organisms, both between species and also across different sex and life-stage morphs within species. Here we examined whether three-month-old human infants are sensitive to the relationship between body size and sound frequencies. Using a violation-of-expectation paradigm, we found that infants looked longer at stimuli inconsistent with the relationship (that is, a smaller organism producing lower frequency sounds, and a larger organism producing higher frequency sounds) than at stimuli that were consistent with it. This effect was stronger for fundamental frequency than for average formant frequency. These results suggest that by three months of age, human infants are already sensitive to the biologically relevant covariation between vocalization frequencies and visual cues to body size. This ability may be a consequence of developmental adaptations for building a phenotype capable of identifying and representing an organism's size, sex and life-stage. © 2017 The Author(s).

  2. Intensive treatment with ultrasound visual feedback for speech sound errors in childhood apraxia

    Directory of Open Access Journals (Sweden)

    Jonathan L Preston

    2016-08-01

    Ultrasound imaging is an adjunct to traditional speech therapy that has been shown to be beneficial in the remediation of speech sound errors. Ultrasound biofeedback can be utilized during therapy to provide clients with additional knowledge about their tongue shapes when attempting to produce sounds that are in error. The additional feedback may assist children with childhood apraxia of speech in stabilizing motor patterns, thereby facilitating more consistent and accurate productions of sounds and syllables. However, due to its specialized nature, ultrasound visual feedback is a technology that is not widely available to clients. Short-term intensive treatment programs are one option that can be utilized to expand access to ultrasound biofeedback. Schema-based motor learning theory suggests that short-term intensive treatment programs (massed practice) may assist children in acquiring more accurate motor patterns. In this case series, three participants aged 10-14 diagnosed with childhood apraxia of speech attended 16 hours of speech therapy over a two-week period to address residual speech sound errors. Two participants had distortions on rhotic sounds, while the third participant demonstrated lateralization of sibilant sounds. During therapy, cues were provided to assist participants in obtaining a tongue shape that facilitated a correct production of the erred sound. Additional practice without ultrasound was also included. Results suggested that all participants showed signs of acquisition of sounds in error. Generalization and retention results were mixed. One participant showed generalization and retention of sounds that were treated; one showed generalization but limited retention; and the third showed no evidence of generalization or retention. Individual characteristics that may facilitate generalization are discussed. Short-term intensive treatment programs using ultrasound biofeedback may result in the acquisition of more accurate motor

  3. Dominance dynamics of competition between intrinsic and extrinsic grouping cues.

    Science.gov (United States)

    Luna, Dolores; Villalba-García, Cristina; Montoro, Pedro R; Hinojosa, José A

    2016-10-01

    In the present study we examined the dominance dynamics of perceptual grouping cues. We used a paradigm in which participants selectively attended to perceptual groups based on several grouping cues in different blocks of trials. In each block, single and competing grouping cues were presented under different exposure durations (50, 150 or 350 ms). Using this procedure, intrinsic vs. intrinsic cues (i.e. proximity and shape similarity) were compared in Experiment 1; extrinsic vs. extrinsic cues (i.e. common region and connectedness) in Experiment 2; and intrinsic vs. extrinsic cues (i.e. common region and shape similarity) in Experiment 3. The results showed that in Experiment 1, no dominance of any grouping cue was found: shape similarity and proximity grouping cues showed similar reaction times (RTs) and interference effects. In contrast, in Experiments 2 and 3, common region dominated processing: (i) RTs to common region were shorter than those to connectedness (Exp. 2) or shape similarity (Exp. 3); and (ii) when the grouping cues competed, common region interfered with connectedness (Exp. 2) and shape similarity (Exp. 3) more than vice versa. The results showed that the exposure duration of stimuli only affected the connectedness grouping cue. An important result of our experiments indicates that when two grouping cues compete, both the non-attended intrinsic cue in Experiment 1, and the non-dominant extrinsic cue in Experiments 2 and 3, are still perceived and are not completely lost. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Cue reactivity in virtual reality: the role of context.

    Science.gov (United States)

    Paris, Megan M; Carter, Brian L; Traylor, Amy C; Bordnick, Patrick S; Day, Susan X; Armsworth, Mary W; Cinciripini, Paul M

    2011-07-01

    Cigarette smokers in laboratory experiments readily respond to smoking stimuli with increased craving. As an alternative to traditional cue-reactivity methods (e.g., exposure to cigarette photos), virtual reality (VR) has been shown to be a viable cue presentation method for eliciting and assessing cigarette craving within complex virtual environments. However, it remains poorly understood whether contextual cues from the environment contribute to craving increases in addition to specific cues, like cigarettes. This study examined the role of contextual cues in a VR environment in evoking craving. Smokers were exposed to a virtual convenience store devoid of any specific cigarette cues, followed by exposure to the same convenience store with specific cigarette cues added. Smokers reported increased craving following exposure to the virtual convenience store without specific cues, and significantly greater craving following the convenience store with cigarette cues added. However, the increased craving recorded after the second convenience store may have been due to the pre-exposure to the first convenience store. This study offers evidence that an environmental context where cigarette cues are normally present (but are absent) elicits significant craving in the absence of specific cigarette cues. This finding suggests that VR may have stronger ecological validity than traditional cue-reactivity exposure methods by exposing smokers to the full range of cigarette-related environmental stimuli, in addition to specific cigarette cues, that smokers typically experience in their daily lives. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. The attenuation of auditory neglect by implicit cues.

    Science.gov (United States)

    Coleman, A Rand; Williams, J Michael

    2006-09-01

    This study examined the effects of implicit semantic and rhyming cues on the perception of auditory stimuli among nonaphasic participants who had suffered a lesion of the right cerebral hemisphere and auditory neglect of sound perceived by the left ear. Because language represents an elaborate processing of auditory stimuli and the language centers were intact in these patients, it was hypothesized that interactive verbal stimuli presented in a dichotic manner would attenuate neglect. The selected participants were administered an experimental dichotic listening test composed of six types of word pairs: unrelated words, synonyms, antonyms, categorically related words, compound words, and rhyming words. Presentation of word pairs that were semantically related resulted in a dramatic reduction of auditory neglect. Dichotic presentations of rhyming words exacerbated auditory neglect. These findings suggest that the perception of auditory information is strongly affected by the specific content conveyed by the auditory system. Language centers will process a degraded stimulus that contains salient language content. A degraded auditory stimulus is neglected if it is devoid of content that activates the language centers or other cognitive systems. In general, these findings suggest that auditory neglect involves a complex interaction of intact and impaired cerebral processing centers with content that is selectively processed by these centers.

  6. Nonlocal nonlinear coupling of kinetic sound waves

    Directory of Open Access Journals (Sweden)

    O. Lyubchyk

    2014-11-01

    We study three-wave resonant interactions among kinetic-scale oblique sound waves in the low-frequency range below the ion cyclotron frequency. The nonlinear eigenmode equation is derived in the framework of a two-fluid plasma model. Because of dispersive modifications at small wavelengths perpendicular to the background magnetic field, these waves become a decay-type mode. We found two decay channels: one into co-propagating product waves (forward decay), and another into counter-propagating product waves (reverse decay). All wavenumbers in the forward decay are similar and hence this decay is local in wavenumber space. On the contrary, the reverse decay generates waves with wavenumbers that are much larger than in the original pump waves and is therefore intrinsically nonlocal. In general, the reverse decay is significantly faster than the forward one, suggesting a nonlocal spectral transport induced by oblique sound waves. Even with low-amplitude sound waves the nonlinear interaction rate is larger than the collisionless dissipation rate. Possible applications regarding acoustic waves observed in the solar corona, solar wind, and topside ionosphere are briefly discussed.

  7. Probabilistic Cue Combination: Less Is More

    Science.gov (United States)

    Yurovsky, Daniel; Boyer, Ty W.; Smith, Linda B.; Yu, Chen

    2013-01-01

    Learning about the structure of the world requires learning probabilistic relationships: rules in which cues do not predict outcomes with certainty. However, in some cases, the ability to track probabilistic relationships is a handicap, leading adults to perform non-normatively in prediction tasks. For example, in the "dilution effect,"…

  8. Preschoolers Benefit from Visually Salient Speech Cues

    Science.gov (United States)

    Lalonde, Kaylah; Holt, Rachael Frush

    2015-01-01

    Purpose: This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. They also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method: Twelve adults and 27 typically developing 3-…

  9. The effect of cue media on recollections

    NARCIS (Netherlands)

    Hoven, van den E.A.W.H.; Eggen, J.H.

    2009-01-01

    External cognition concerns knowledge that is embedded in our everyday lives and environment. One type of knowledge is memories, recollections of events that occurred in the past. So how do we remember them? One way this can be done is through cuing and reconstructing. These cues can be internal, in

  10. Spontaneous hedonic reactions to social media cues

    NARCIS (Netherlands)

    Koningsbruggen, G.M. van; Hartmann, T.; Eden, A.; Veling, H.P.

    2017-01-01

    Why is it so difficult to resist the desire to use social media? One possibility is that frequent social media users possess strong and spontaneous hedonic reactions to social media cues, which, in turn, makes it difficult to resist social media temptations. In two studies (total N = 200), we

  11. Three-dimensional interpretation of TEM soundings

    Science.gov (United States)

    Barsukov, P. O.; Fainberg, E. B.

    2013-07-01

    We describe the approach to the interpretation of electromagnetic (EM) sounding data which iteratively adjusts the three-dimensional (3D) model of the environment by local one-dimensional (1D) transformations and inversions and reconstructs the geometrical skeleton of the model. The final 3D inversion is carried out with the minimal number of the sought parameters. At each step of the interpretation, the model of the medium is corrected according to the geological information. The practical examples of the suggested method are presented.

  12. Localization of a sound source in a guided medium and reverberating field. Contribution to a study on leak localization in the internal wall of containment of a nuclear reactor in the case of a severe reactor accident; Localisation d'une source acoustique en milieu guidé et champ réverbérant. Contribution à l'étude sur la localisation de fuite de l'enceinte de confinement d'une centrale nucléaire en situation accidentelle grave

    Energy Technology Data Exchange (ETDEWEB)

    Thomann, F

    1996-11-28

    Basic data necessary for the localization of a leak in the internal wall of the containment are presented by studying the sound generated by gas jets escaping through leaking fissures as well as propagation in a guided medium. The results acquired led us to choose the simple intercorrelation method and the matched field processing method, both of which are likely to adequately handle our problems. Whereas the intercorrelation method appears to be limited in scope when dealing with the guided medium, matched field processing is suited to leak localization over a surface of approximately 1000 m² (for a total surface of 10 000 m²). Preliminary studies on the leak signal and on replicated signals led us to limit the frequency band to 2600-3000 Hz. We succeeded in locating a leak situated in an ordinary position with a minimum amount of replicated signals and basic data. We improved the Bartlett and MVDE (minimum variance distortionless filter) estimators, rendering them even more effective. Afterwards, we considered the severe accident situation and showed that the system can be installed in situ. (author) 88 refs.
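The intercorrelation method mentioned above localizes a source from the time difference of arrival (TDOA) of its noise at pairs of sensors, read off as the peak of their cross-correlation. A toy illustration with a synthetic broadband source (the sampling rate, delay and variable names are invented for the example):

```python
import numpy as np

def tdoa(sig_a: np.ndarray, sig_b: np.ndarray, fs: float) -> float:
    """Time difference of arrival estimated from the peak of the
    cross-correlation; positive means sig_b lags sig_a."""
    corr = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(corr) - (len(sig_a) - 1)
    return lag / fs

fs = 10_000
rng = np.random.default_rng(1)
source = rng.standard_normal(fs)                     # 1 s of broadband "leak" noise
delay = 25                                           # samples: 2.5 ms extra travel time
sensor_a = source
sensor_b = np.concatenate([np.zeros(delay), source[:-delay]])
print(tdoa(sensor_a, sensor_b, fs))                  # 0.0025
```

With sensors at known positions and a propagation speed, each TDOA constrains the source to a hyperbola; intersecting the constraints from several sensor pairs yields the leak position. In a reverberant guided medium this simple peak picking degrades, which is what motivates the matched field processing alternative in the abstract.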

  13. Multiple cues for winged morph production in an aphid metacommunity.

    Directory of Open Access Journals (Sweden)

    Mohsen Mehrparvar

    Environmental factors can lead individuals down different developmental pathways, giving rise to distinct phenotypes (phenotypic plasticity). The production of winged or unwinged morphs in aphids is an example of two alternative developmental pathways. Dispersal is paramount in aphids, which often have a metapopulation structure where local subpopulations frequently go extinct, such as the specialized aphids on tansy (Tanacetum vulgare). We conducted various experiments to further understand the cues involved in the production of winged dispersal morphs by the two dominant species of the tansy aphid metacommunity, Metopeurum fuscoviride and Macrosiphoniella tanacetaria. We found that the ant-tended M. fuscoviride produced winged individuals predominantly at the beginning of the season while the untended M. tanacetaria produced winged individuals throughout the season. Winged mothers of both species produced winged offspring, although in both species winged offspring were mainly produced by unwinged females. Crowding and the presence of predators, effects already known to influence wing production in other aphid species, increased the percentage of winged offspring in M. tanacetaria, but not in M. fuscoviride. We find there are also other factors (i.e. temporal effects) inducing the production of winged offspring in natural aphid populations. Our results show that the responses of each aphid species are due to multiple wing induction cues.

  14. The (un)clear effects of invalid retro-cues.

    Directory of Open Access Journals (Sweden)

    Marcel eGressmann

    2016-03-01

    Studies with the retro-cue paradigm have shown that validly cueing objects in visual working memory long after encoding can still benefit performance on subsequent change detection tasks. With regard to the effects of invalid cues, the literature is less clear. Some studies reported costs, others did not. We here revisit two recent studies that made interesting suggestions concerning invalid retro-cues: one study suggested that costs only occur for larger set sizes, and another suggested that the inclusion of invalid retro-cues diminishes the retro-cue benefit. New data from one experiment and a reanalysis of published data are provided to address these conclusions. The new data clearly show costs (and benefits) that were independent of set size, and the reanalysis suggests no influence of the inclusion of invalid retro-cues on the retro-cue benefit. Thus, previous interpretations may be taken with some caution at present.

  15. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli can modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets than low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the surrounding world, and the current findings, showing that affective sounds can influence visual attention, provide evidence that we make use of affective information during perceptual processing.

  16. Fourth sound in relativistic superfluidity theory

    International Nuclear Information System (INIS)

    Vil'chinskij, S.I.; Fomin, P.I.

    1995-01-01

    The Lorentz-covariant equations describing the propagation of fourth sound in the relativistic theory of superfluidity are derived. Expressions for the velocity of fourth sound are obtained, and the character of the oscillations in this sound is determined.

  17. Cue-induced craving in patients with cocaine use disorder predicts cognitive control deficits toward cocaine cues.

    Science.gov (United States)

    DiGirolamo, Gregory J; Smelson, David; Guevremont, Nathan

    2015-08-01

    Cue-induced craving is a clinically important aspect of cocaine addiction influencing ongoing use and sobriety. However, little is known about the relationship between cue-induced craving and cognitive control toward cocaine cues. While studies suggest that cocaine users have an attentional bias toward cocaine cues, the present study extends this research by testing if cocaine use disorder patients (CDPs) can control their eye movements toward cocaine cues and whether their response varied by cue-induced craving intensity. Thirty CDPs underwent a cue exposure procedure to dichotomize them into high and low craving groups followed by a modified antisaccade task in which subjects were asked to control their eye movements toward either a cocaine or neutral drug cue by looking away from the suddenly presented cue. The relationship between breakdowns in cognitive control (as measured by eye errors) and cue-induced craving (changes in self-reported craving following cocaine cue exposure) was investigated. CDPs overall made significantly more errors toward cocaine cues compared to neutral cues, with higher cravers making significantly more errors than lower cravers even though they did not differ significantly in addiction severity, impulsivity, anxiety, or depression levels. Cue-induced craving was the only specific and significant predictor of subsequent errors toward cocaine cues. Cue-induced craving directly and specifically relates to breakdowns of cognitive control toward cocaine cues in CDPs, with higher cravers being more susceptible. Hence, it may be useful identifying high cravers and target treatment toward curbing craving to decrease the likelihood of a subsequent breakdown in control. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. EUVS Sounding Rocket Payload

    Science.gov (United States)

    Stern, Alan S.

    1996-01-01

    During the first half of this year (CY 1996), the EUVS project began preparations of the EUVS payload for the upcoming NASA sounding rocket flight 36.148CL, slated for launch on July 26, 1996 to observe and record a high-resolution (approx. 2 Å FWHM) EUV spectrum of the planet Venus. These preparations were designed to improve the spectral resolution and sensitivity performance of the EUVS payload as well as prepare the payload for this upcoming mission. The following is a list of the EUVS project activities that have taken place since the beginning of this CY: (1) Applied a fresh, new SiC optical coating to our existing 2400 groove/mm grating to boost its reflectivity; (2) modified the Ranicon science detector to boost its detective quantum efficiency with the addition of a repeller grid; (3) constructed a new entrance slit plane to achieve 2 Å FWHM spectral resolution; (4) prepared and held the Payload Initiation Conference (PIC) with the assigned NASA support team from Wallops Island for the upcoming 36.148CL flight (PIC held on March 8, 1996; see Attachment A); (5) began wavelength calibration activities of EUVS in the laboratory; (6) made arrangements for travel to WSMR to begin integration activities in preparation for the July 1996 launch; (7) paper detailing our previous EUVS Venus mission (NASA flight 36.117CL) published in Icarus (see Attachment B); and (8) continued data analysis of the previous EUVS mission 36.137CL (Spica occultation flight).

  19. When speaker identity is unavoidable: Neural processing of speaker identity cues in natural speech.

    Science.gov (United States)

    Tuninetti, Alba; Chládková, Kateřina; Peter, Varghese; Schiller, Niels O; Escudero, Paola

    2017-11-01

    Speech sound acoustic properties vary largely across speakers and accents. When perceiving speech, adult listeners normally disregard non-linguistic variation caused by speaker or accent differences, in order to comprehend the linguistic message, e.g. to correctly identify a speech sound or a word. Here we tested whether the process of normalizing speaker and accent differences, facilitating the recognition of linguistic information, is found at the level of neural processing, and whether it is modulated by the listeners' native language. In a multi-deviant oddball paradigm, native and nonnative speakers of Dutch were exposed to naturally-produced Dutch vowels varying in speaker, sex, accent, and phoneme identity. Unexpectedly, the analysis of mismatch negativity (MMN) amplitudes elicited by each type of change shows a large degree of early perceptual sensitivity to non-linguistic cues. This finding on perception of naturally-produced stimuli contrasts with previous studies examining the perception of synthetic stimuli wherein adult listeners automatically disregard acoustic cues to speaker identity. The present finding bears relevance to speech normalization theories, suggesting that at an unattended level of processing, listeners are indeed sensitive to changes in fundamental frequency in natural speech tokens. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Sound Clocks and Sonic Relativity

    Science.gov (United States)

    Todd, Scott L.; Menicucci, Nicolas C.

    2017-10-01

    Sound propagation within certain non-relativistic condensed matter models obeys a relativistic wave equation despite such systems admitting entirely non-relativistic descriptions. A natural question that arises upon consideration of this is, "do devices exist that will experience the relativity in these systems?" We describe a thought experiment in which `acoustic observers' possess devices called sound clocks that can be connected to form chains. Careful investigation shows that appropriately constructed chains of stationary and moving sound clocks are perceived by observers on the other chain as undergoing the relativistic phenomena of length contraction and time dilation by the Lorentz factor, γ = 1/√(1 − v²/c²), with c the speed of sound. Sound clocks within moving chains actually tick less frequently than stationary ones and must be separated by a shorter distance than when stationary to satisfy simultaneity conditions. Stationary sound clocks appear to be length contracted and time dilated to moving observers due to their misunderstanding of their own state of motion with respect to the laboratory. Observers restricted to using sound clocks describe a universe kinematically consistent with the theory of special relativity, despite their universe possessing a preferred frame: the laboratory frame. Such devices show promise in further probing analogue relativity models, for example in investigating phenomena that require careful consideration of the proper time elapsed for observers.
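
    The γ referred to here is the standard Lorentz form with the speed of sound taking the role of the speed of light. Below is a minimal numeric sketch of this sonic time dilation; the 343 m/s default (air at room temperature) is an illustrative assumption, not a value from the paper:

```python
import math

def sonic_lorentz_factor(v: float, c_sound: float = 343.0) -> float:
    """Sonic Lorentz factor gamma = 1 / sqrt(1 - v^2 / c^2),
    with c the speed of sound rather than the speed of light."""
    if abs(v) >= c_sound:
        raise ValueError("clock speed must stay below the speed of sound")
    return 1.0 / math.sqrt(1.0 - (v / c_sound) ** 2)

# A sound clock moving at half the speed of sound ticks slower by gamma:
print(round(sonic_lorentz_factor(171.5, c_sound=343.0), 4))  # 1.1547
```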

  1. Craving by imagery cue reactivity in opiate dependence following detoxification

    OpenAIRE

    Behera, Debakanta; Goswami, Utpal; Khastgir, Udayan; Kumar, Satindra

    2003-01-01

    Background: Frequent relapses in opioid addiction may be a result of abstinence-emergent craving. Exposure to various stimuli associated with drug use (drug cues) may trigger craving as a conditioned response to 'drug cues'. Aims: The present study explored the effects of imagery cue exposure on psychophysiological mechanisms of craving, viz. autonomic arousal, in detoxified opiate addicts. Methodology: Opiate dependent subjects (N=38) following detoxification underwent imagery cue reactivity t...

  2. Spectral information as an orientation cue in dung beetles

    OpenAIRE

    el Jundi, Basil; Foster, James J.; Byrne, Marcus J.; Baird, Emily; Dacke, Marie

    2015-01-01

    During the day, a non-uniform distribution of long and short wavelength light generates a colour gradient across the sky. This gradient could be used as a compass cue, particularly by animals such as dung beetles that rely primarily on celestial cues for orientation. Here, we tested if dung beetles can use spectral cues for orientation by presenting them with monochromatic (green and UV) light spots in an indoor arena. Beetles kept their original bearing when presented with a single light cue...

  3. The case for infrasound as the long-range map cue in avian navigation

    Science.gov (United States)

    Hagstrum, J.T.

    2007-01-01

    Of the various 'map' and 'compass' components of Kramer's avian navigational model, the long-range map component is the least well understood. In this paper atmospheric infrasounds are proposed as the elusive long-range cues constituting the avian navigational map. Although infrasounds were considered a viable candidate for the avian map in the 1970s, and pigeons in the laboratory were found to detect sounds at surprisingly low frequencies (0.05 Hz), other tests appeared to support either of the currently favored olfactory or magnetic maps. Neither of these hypotheses, however, is able to explain the full set of observations, and the field has been at an impasse for several decades. To begin, brief descriptions of infrasonic waves and their passage through the atmosphere are given, followed by accounts of previously unexplained release results. These examples include 'release-site biases', which are deviations of departing pigeons from the homeward bearing, an annual variation in homing performance observed only in Europe, difficulties orienting over lakes and above temperature inversions, and the mysterious disruption of several pigeon races. All of these irregularities can be consistently explained by the deflection or masking of infrasonic cues by atmospheric conditions or by other infrasonic sources (microbaroms, sonic booms), respectively. A source of continuous geographic infrasound generated by atmosphere-coupled microseisms is also proposed. In conclusion, several suggestions are made toward resolving some of the conflicting experimental data with the pigeons' possible use of infrasonic cues.

  4. Fourth sound of holographic superfluids

    International Nuclear Information System (INIS)

    Yarom, Amos

    2009-01-01

    We compute fourth sound for superfluids dual to a charged scalar and a gauge field in an AdS4 background. For holographic superfluids with condensates that have a large scaling dimension (greater than approximately two), we find that fourth sound approaches first sound at low temperatures. For condensates that have a small scaling dimension, fourth sound exhibits non-conformal behavior at low temperatures, which may be tied to the non-conformal behavior of the order parameter of the superfluid. We show that by introducing an appropriate scalar potential, conformal invariance can be enforced at low temperatures.

  5. Sound intensity as a function of sound insulation partition

    OpenAIRE

    Cvetkovic, S.; Prascevic, R.

    1994-01-01

    In modern engineering practice, the sound insulation of partitions is a synthesis of theory and of experience acquired in field and laboratory measurement procedures. The science and research community treat sound insulation in the context of the emission and propagation of acoustic energy in media with different acoustic impedances. In this paper, starting from the essence of the physical concept of intensity as the energy vector, the authors g...

  6. Stimulus-driven attentional capture by subliminal onset cues

    NARCIS (Netherlands)

    Schoeberl, T.; Fuchs, I.; Theeuwes, J.; Ansorge, U.

    2015-01-01

    In two experiments, we tested whether subliminal abrupt onset cues capture attention in a stimulus-driven way. An onset cue was presented 16 ms prior to the stimulus display, which consisted of clearly visible color targets. The onset cue was presented either on the same side as the target (the valid

  7. Perceptual and Conceptual Priming of Cue Encoding in Task Switching

    Science.gov (United States)

    Schneider, Darryl W.

    2016-01-01

    Transition effects in task-cuing experiments can be partitioned into task switching and cue repetition effects by using multiple cues per task. In the present study, the author shows that cue repetition effects can be partitioned into perceptual and conceptual priming effects. In 2 experiments, letters or numbers in their uppercase/lowercase or…

  8. Extinction and renewal of cue-elicited reward-seeking.

    Science.gov (United States)

    Bezzina, Louise; Lee, Jessica C; Lovibond, Peter F; Colagiuri, Ben

    2016-12-01

    Reward cues can contribute to overconsumption of food and drugs and can trigger relapse. The failure of exposure therapies to reduce overconsumption and relapse is generally attributed to the context-specificity of extinction. However, no previous study has examined whether cue-elicited reward-seeking (as opposed to cue-reactivity) is sensitive to context renewal. We tested this possibility in 160 healthy volunteers using a Pavlovian-instrumental transfer (PIT) design involving voluntary responding for a high value natural reward (chocolate). One reward cue underwent Pavlovian extinction in the same (Group AAA) or different context (Group ABA) to all other phases. This cue was compared with a second non-extinguished reward cue and an unpaired control cue. There was a significant overall PIT effect with both reward cues eliciting reward-seeking on test relative to the unpaired cue. Pavlovian extinction substantially reduced this effect, with the extinguished reward cue eliciting less reward-seeking than the non-extinguished reward cue. Most interestingly, extinction of cue-elicited reward-seeking was sensitive to renewal, with extinction less effective for reducing PIT when conducted in a different context. These findings have important implications for extinction-based interventions for reducing maladaptive reward-seeking in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Cueing Complex Animations: Does Direction of Attention Foster Learning Processes?

    Science.gov (United States)

    Lowe, Richard; Boucheix, Jean-Michel

    2011-01-01

    The time course of learners' processing of a complex animation was studied using a dynamic diagram of a piano mechanism. Over successive repetitions of the material, two forms of cueing (standard colour cueing and anti-cueing) were administered either before or during the animated segment of the presentation. An uncued group and two other control…

  10. An Integrated Approach to Motion and Sound

    National Research Council Canada - National Science Library

    Hahn, James K; Geigel, Joe; Lee, Jong W; Gritz, Larry; Takala, Tapio; Mishra, Suneil

    1995-01-01

    Until recently, sound has been given little attention in computer graphics and related domains of computer animation and virtual environments, although sounds which are properly synchronized to motion...

  11. Sound Spectrum Influences Auditory Distance Perception of Sound Sources Located in a Room Environment

    Directory of Open Access Journals (Sweden)

    Ignacio Spiousas

    2017-06-01

    Previous studies on the effect of spectral content on auditory distance perception (ADP) focused on the physically measurable cues occurring either in the near field (low-pass filtering due to head diffraction) or when the sound travels distances >15 m (high-frequency energy losses due to air absorption). Here, we study how the spectrum of a sound arriving from a source located in a reverberant room at intermediate distances (1–6 m) influences the perception of the distance to the source. First, we conducted an ADP experiment using pure tones (the simplest possible spectrum) of frequencies 0.5, 1, 2, and 4 kHz. Then, we performed a second ADP experiment with stimuli consisting of continuous broadband and bandpass-filtered (center frequencies of 0.5, 1.5, and 4 kHz; bandwidths of 1/12, 1/3, and 1.5 octaves) pink-noise clips. Our results showed an effect of the stimulus frequency on the perceived distance both for pure tones and filtered noise bands: ADP was less accurate for stimuli containing energy only in the low-frequency range. Analysis of the frequency response of the room showed that the low accuracy observed for low-frequency stimuli can be explained by the presence of sparse modal resonances in the low-frequency region of the spectrum, which induced a non-monotonic relationship between binaural intensity and source distance. The results obtained in the second experiment suggest that ADP can also be affected by stimulus bandwidth, but in a less straightforward way (i.e., depending on the center frequency, increasing stimulus bandwidth could have different effects). Finally, the analysis of the acoustical cues suggests that listeners judged source distance using mainly changes in the overall intensity of the auditory stimulus with distance rather than the direct-to-reverberant energy ratio, even for low-frequency noise bands (which typically induce a high amount of reverberation). The results obtained in this study show that, depending on
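
    The overall-intensity cue identified in this abstract follows a simple free-field rule: the direct sound drops by about 6 dB per doubling of source distance (inverse-square law). The sketch below shows that textbook relation only, not the paper's room-acoustic analysis:

```python
import math

def direct_level_db(distance_m: float, ref_m: float = 1.0) -> float:
    """Free-field level of the direct sound relative to a reference
    distance: inverse-square law, roughly -6 dB per doubling."""
    return 20.0 * math.log10(ref_m / distance_m)

for d in (1.0, 2.0, 4.0, 6.0):
    print(f"{d:.0f} m: {direct_level_db(d):+.1f} dB")
```

    In a reverberant room the reverberant field flattens this curve, which is why the direct-to-reverberant energy ratio is the other classic candidate distance cue.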

  12. The acoustic and perceptual cues affecting melody segregation for listeners with a cochlear implant.

    Directory of Open Access Journals (Sweden)

    Jeremy Marozeau

    2013-11-01

    Our ability to listen selectively to single sound sources in complex auditory environments is termed 'auditory stream segregation.' This ability is affected by peripheral disorders such as hearing loss, as well as by plasticity in central processing such as occurs with musical training. Brain plasticity induced by musical training can enhance the ability to segregate sound, leading to improvements in a variety of auditory abilities. The melody segregation ability of 12 cochlear-implant recipients was tested using a new method to determine the perceptual distance needed to segregate a simple 4-note melody from a background of interleaved random-pitch distractor notes. In experiment 1, participants rated the difficulty of segregating the melody from the distractor notes. Four physical properties of the distractor notes were changed. In experiment 2, listeners were asked to rate the dissimilarity between melody patterns whose notes differed on the four physical properties simultaneously. Multidimensional scaling analysis transformed the dissimilarity ratings into perceptual distances. Regression between physical and perceptual cues then derived the minimal perceptual distance needed to segregate the melody. The most efficient streaming cue for CI users was loudness. Compared with normal-hearing listeners without musical backgrounds, CI users needed a greater difference on the perceptual dimension correlated with the temporal envelope for stream segregation. No differences in streaming efficiency were found between the perceptual dimensions linked to the F0 and the spectral envelope. Combined with our previous results in normally-hearing musicians and non-musicians, the results show that differences in training, as well as differences in peripheral auditory processing (hearing impairment and the use of a hearing device), influence the way that listeners use different acoustic cues for segregating interleaved musical streams.

  13. Effect of three cueing devices for people with Parkinson's disease with gait initiation difficulties.

    Science.gov (United States)

    McCandless, Paula J; Evans, Brenda J; Janssen, Jessie; Selfe, James; Churchill, Andrew; Richards, Jim

    2016-02-01

    Freezing of gait (FOG) remains one of the most common debilitating aspects of Parkinson's disease and has been linked to injuries, falls and reduced quality of life. Although commercially available portable cueing devices exist that claim to assist with overcoming freezing, their immediate effectiveness in overcoming gait initiation failure is currently unknown. This study investigated the effects of three different types of cueing device in people with Parkinson's disease who experience freezing. Twenty participants with idiopathic Parkinson's disease who experienced freezing during gait but who were able to walk short distances indoors independently were recruited. At least three attempts at gait initiation were recorded using a 10-camera Qualisys motion analysis system and four force platforms. Test conditions were: Laser Cane, sound metronome, vibrating metronome, walking stick and no intervention. During testing 12 of the 20 participants had freezing episodes; from these participants 100 freezing and 91 non-freezing trials were recorded. Clear differences in the movement patterns were seen between freezing and non-freezing episodes. The Laser Cane was the most effective cueing device at improving forwards/backwards and side-to-side movement and had the fewest freezing episodes. The walking stick also showed significant improvements compared to the other conditions. The vibrating metronome appeared to disrupt movement compared to the sound metronome at the same beat frequency. This study identified differences in the movement patterns between freezing episodes and non-freezing episodes, and identified immediate improvements during gait initiation when using the Laser Cane over the other interventions. Copyright © 2015. Published by Elsevier B.V.

  14. The science of sound recording

    CERN Document Server

    Kadis, Jay

    2012-01-01

    The Science of Sound Recording will provide you with more than just an introduction to sound and recording; it will allow you to dive right into some of the technical areas that often appear overwhelming to anyone without an electrical engineering or physics background. The Science of Sound Recording helps you build a basic foundation of scientific principles, explaining how recording really works. Packed with valuable must-know information, illustrations and examples of worked-through equations, this book introduces the theory behind sound recording practices in a logical and prac

  15. Introspective responses to cues and motivation to reduce cigarette smoking influence state and behavioral responses to cue exposure.

    Science.gov (United States)

    Veilleux, Jennifer C; Skinner, Kayla D

    2016-09-01

    In the current study, we aimed to extend smoking cue-reactivity research by evaluating delay discounting as an outcome of cigarette cue exposure. We also separated introspection in response to cues (e.g., self-reporting craving and affect) from cue exposure alone, to determine if introspection changes behavioral responses to cigarette cues. Finally, we included measures of quit motivation and resistance to smoking to assess motivational influences on cue exposure. Smokers were invited to participate in an online cue-reactivity study. Participants were randomly assigned to view smoking images or neutral images, and were randomized to respond to cues with either craving and affect questions (e.g., introspection) or filler questions. Following cue exposure, participants completed a delay discounting task and then reported state affect, craving, and resistance to smoking, as well as an assessment of quit motivation. We found that after controlling for trait impulsivity, participants who introspected on craving and affect showed higher delay discounting, irrespective of cue type, but we found no effect of response condition on subsequent craving (e.g., craving reactivity). We also found that motivation to quit interacted with experimental conditions to predict state craving and state resistance to smoking. Although asking about craving during cue exposure did not increase later craving, it resulted in greater delaying of discounted rewards. Overall, our findings suggest the need to further assess the implications of introspection and motivation on behavioral outcomes of cue exposure. Copyright © 2016 Elsevier Ltd. All rights reserved.
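
    Delay discounting, the behavioral outcome measured here, is commonly quantified with Mazur's hyperbolic model V = A / (1 + kD), where a larger discount rate k means steeper devaluation of delayed rewards. A minimal sketch of the general model (not this study's specific analysis):

```python
def hyperbolic_value(amount: float, delay: float, k: float) -> float:
    """Mazur's hyperbolic discounting model: V = A / (1 + k * D)."""
    return amount / (1.0 + k * delay)

# A larger k devalues the same delayed reward more steeply:
print(round(hyperbolic_value(100, 30, 0.01), 1))  # 76.9
print(round(hyperbolic_value(100, 30, 0.10), 1))  # 25.0
```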

  16. Retrieval-induced forgetting and interference between cues: training a cue-outcome association attenuates retrieval by alternative cues.

    Science.gov (United States)

    Ortega-Castro, Nerea; Vadillo, Miguel A

    2013-03-01

    Some researchers have attempted to determine whether situations in which a single cue is paired with several outcomes (A-B, A-C interference or interference between outcomes) involve the same learning and retrieval mechanisms as situations in which several cues are paired with a single outcome (A-B, C-B interference or interference between cues). Interestingly, current research on a related effect, which is known as retrieval-induced forgetting, can illuminate this debate. Most retrieval-induced forgetting experiments are based on an experimental design that closely resembles the A-B, A-C interference paradigm. In the present experiment, we found that a similar effect may be observed when items are rearranged such that the general structure of the task more closely resembles the A-B, C-B interference paradigm. This result suggests that, as claimed by other researchers in the area of contingency learning, the two types of interference, namely A-B, A-C and A-B, C-B interference, may share some basic mechanisms. Moreover, the type of inhibitory processes assumed to underlie retrieval-induced forgetting may also play a role in these phenomena. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Gefinex 400S (Sampo) EM-Soundings at Olkiluoto 2007

    International Nuclear Information System (INIS)

    Jokinen, T.; Lehtimaeki, J.

    2007-09-01

    In the beginning of June 2007 the Geological Survey of Finland carried out electromagnetic frequency soundings with Gefinex 400S equipment (Sampo) at Onkalo, situated in the Olkiluoto nuclear power plant area. The same sounding sites were first measured and marked in 2004, and the measurements have been repeated yearly since. The aim of the measurements is to monitor changes in groundwater conditions through changes in the electrical conductivity of the ground at ONKALO and the repository area. The measurements form two 1400 m long broadside profiles, which have a 200 m mutual distance and 200 m station separation. The profiles have been measured using 200, 500, and 800 m coil separations. The total number of sounding stations is 48. In 2007 the transmitter and/or receiver sites were changed at 8 sounding stations, and line L11.400 was replaced by line L11.500. Some of these changes helped, but 6 stations still could not be measured because of strong electromagnetic noise. The numerous power lines and cables in the area generate local 3-D effects on the sounding curves, but the repeatability of the results is good. However, the sites without strong 3-D effects are the most suitable for monitoring purposes. Comparison of the 2004-2007 results shows small differences at some sounding sites. (orig.)

  18. Boundary stabilization of memory-type thermoelasticity with second sound

    Science.gov (United States)

    Mustafa, Muhammad I.

    2012-08-01

    In this paper, we consider an n-dimensional thermoelastic system of second sound with a viscoelastic damping localized on a part of the boundary. We establish an explicit and general decay rate result that allows a wider class of relaxation functions and generalizes previous results existing in the literature.

  19. Visualization of Broadband Sound Sources

    OpenAIRE

    Sukhanov Dmitry; Erzakova Nadezhda

    2016-01-01

    In this paper a method of imaging wideband audio sources is proposed, based on 2D microphone array measurements of the sound field taken simultaneously at all microphones. The designed microphone array consists of 160 microphones, allowing signals to be digitized at a sampling frequency of 7200 Hz. The measured signals are processed using a special algorithm that makes it possible to obtain a flat image of wideband sound sources. It is shown experimentally that the visualization is not dependent on the...
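
    The authors' reconstruction algorithm is not described in this excerpt. For illustration only, here is a minimal delay-and-sum beamformer, the standard baseline technique for broadband source imaging with microphone arrays; the array geometry and all names below are hypothetical, and only the 7200 Hz sampling rate is taken from the abstract:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed value for air
FS = 7200               # Hz, sampling rate from the abstract

def steered_power(signals, mic_x, focus_x, focus_z):
    """Delay-and-sum: advance each microphone signal by its propagation
    delay from the candidate focus point and measure the summed power."""
    n = len(signals[0])
    total = [0.0] * n
    for sig, mx in zip(signals, mic_x):
        dist = math.hypot(focus_x - mx, focus_z)
        k = int(round(dist / SPEED_OF_SOUND * FS))  # delay in samples
        for i in range(n - k):
            total[i] += sig[i + k]
    return sum(v * v for v in total)

# Toy scene: a click emitted at (0.5, 1.0) m, received by a 4-mic line array.
mic_x = [-0.3, -0.1, 0.1, 0.3]
n = 256
signals = []
for mx in mic_x:
    dist = math.hypot(0.5 - mx, 1.0)
    k = int(round(dist / SPEED_OF_SOUND * FS))
    signals.append([0.0] * k + [1.0] + [0.0] * (n - k - 1))

candidates = [x / 10 for x in range(-10, 11)]  # scan x = -1.0 ... 1.0 m
powers = {x: steered_power(signals, mic_x, x, 1.0) for x in candidates}
print(max(powers, key=powers.get))  # steered power peaks at the true x, 0.5
```

    The steered power is largest where the per-microphone delays align the click across all channels, which happens only at the true source position.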

  20. Awareness in contextual cueing of visual search as measured with concurrent access- and phenomenal-consciousness tasks.

    Science.gov (United States)

    Schlagbauer, Bernhard; Müller, Hermann J; Zehetleitner, Michael; Geyer, Thomas

    2012-10-25

    In visual search, context information can serve as a cue to guide attention to the target location. When observers repeatedly encounter displays with identical target-distractor arrangements, reaction times (RTs) are faster for repeated relative to nonrepeated displays, the latter containing novel configurations. This effect has been termed "contextual cueing." The present study asked whether information about the target location in repeated displays is "explicit" (or "conscious") in nature. To examine this issue, observers performed a test session (after an initial training phase in which RTs to repeated and nonrepeated displays were measured) in which the search stimuli were presented briefly and terminated by visual masks; following this, observers had to make a target localization response (with accuracy as the dependent measure) and indicate their visual experience and confidence associated with the localization response. The data were examined at the level of individual displays, i.e., in terms of whether or not a repeated display actually produced contextual cueing. The results were that (a) contextual cueing was driven by only a very small number of about four actually learned configurations; (b) localization accuracy was increased for learned relative to nonrepeated displays; and (c) both consciousness measures were enhanced for learned compared to nonrepeated displays. It is concluded that contextual cueing is driven by only a few repeated displays and the ability to locate the target in these displays is associated with increased visual experience.